
Crystal Clear: Our Silicon Carbide Waveguides & the Path to Orion’s Large FoV


Back in 2019, the Orion team prepared an important demo for Meta Founder & CEO Mark Zuckerberg, showcasing potential waveguides for augmented reality glasses—a pivotal moment when theoretical calculations on paper were brought to life. And it was a demo that changed everything.

“Wearing the glasses with glass-based waveguides and multiple plates, it felt like you were in a disco,” recalls Optical Scientist Pasqual Rivera. “There were rainbows everywhere, and it was so distracting—you weren’t even looking at the AR content. Then, you put on the glasses with silicon carbide waveguides, and it was like you were at the symphony listening to a quiet, classical piece. You could actually pay attention to the full experience of what we were building. It was a total game changer.”

Yet as clear (pun intended) as the choice of silicon carbide as a substrate seems today, when we first started down the road to AR glasses a decade ago, it was anything but.

“Silicon carbide is normally heavily nitrogen-doped,” says Rivera. “It’s green, and if it gets thick enough, it looks black. There’s no way you could make an optical lens out of it. It’s an electronic material. There’s a reason it’s that color, and it’s because of electronic properties.”

“Silicon carbide has been around as a material for a long time,” agrees AR Waveguides Tech Lead Giuseppe Calafiore. “Its main application is high-power electronics. Take electric vehicles: All EVs require a chip—but that chip must also be capable of very high power, moving the wheels, and driving this thing. It turns out you can’t do it with the regular silicon substrate, which is what makes the chips that we use in our computers and electronics. You need a platform that allows you to go through high currents, high power, and that material is silicon carbide.”

Until discussions around renewable energy sources started to heat up fairly recently, the market for these high-power chipsets was nowhere near the size of the market for consumer electronics chips. Silicon carbide has always been expensive, and there wasn’t much incentive to bring costs down because, for the size of chip that goes into a car, the price of the substrate was tolerable.

“But it turns out that silicon carbide also has some of the properties that we need for waveguides and optics,” Calafiore says. “The refractive index is the key property that we care about. And silicon carbide has a high refractive index, which means that it’s capable of channeling in and outputting a large quantity of optical data. You can think of it as optical bandwidth—just like you have bandwidth for the internet, and you want it to be large enough so that you can send huge amounts of data through that channel. The same goes for optical devices.”

The higher a material’s refractive index, the greater the étendue its waveguide can support, so more optical data can be sent through that channel.

“The channel in our case is our waveguide, and a larger étendue translates to a larger field of view,” explains Calafiore. “The larger the refractive index of a material, the larger field of view the display can support.”
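One way to picture this is the standard k-space argument for diffractive waveguides (an illustrative sketch, not necessarily the exact analysis the Orion team used): light can only stay trapped in the plate by total internal reflection while still propagating, which confines the in-plane wavevector to a band whose width grows with the refractive index n:

$$k_0 < \lvert \mathbf{k}_{\parallel} \rvert \le n\,k_0, \qquad k_0 = \frac{2\pi}{\lambda}.$$

The wider that band, the more of the image’s angular content a single plate can carry, which is the étendue argument in a nutshell.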

The Road to the Right Refractive Index

When Calafiore first joined what was then Oculus Research in 2016, the highest refractive index glass the team had at its disposal was 1.8—which required stacking multiple plates to achieve the desired field of view. Undesirable optical artifacts aside, the assembly line became increasingly complicated as the first two waveguides had to be perfectly aligned, and then that stack had to be perfectly aligned with a third waveguide.

“Not only was it expensive, it was immediately obvious that you couldn’t have three pieces of glass per lens in a pair of glasses,” Calafiore recalls. “They were too heavy, and the thickness was prohibitively large and ugly—nobody would buy it. So we went back to square one: trying to boost the refractive index of the substrate to reduce the number of plates needed.”

The first material the team looked into was lithium niobate, which has a refractive index of roughly 2.3—quite a bit higher than the glass at 1.8.

“We realized that we only had to stack two plates and could maybe even manage with one plate to still cover the field of view,” says Calafiore. “Almost in parallel, we started looking at other materials—which is how we figured out alongside our suppliers in 2019 that silicon carbide, in its purest form, is actually very transparent. It also happens to have the highest refractive index known for an optical application, which is 2.7.”

That’s a 17.4% increase over lithium niobate and a 50% bump over glass, for those keeping score at home.
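For anyone checking the math, those figures come straight from the ratios of the refractive indices:

$$\frac{2.7}{2.3} - 1 \approx 17.4\%, \qquad \frac{2.7}{1.8} - 1 = 50\%.$$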

“With a few modifications to the very same equipment that was used in the industry already, it was possible to get transparent silicon carbide,” Calafiore says. “You could just change the process, be a lot more careful, and instead of optimizing for electronic properties, you optimize for optical properties: transparency, uniformity of the refractive index, etc.”

The Potential Cost of Compromise

At that time, the Reality Labs team was the first to even attempt moving from opaque silicon carbide wafers to transparent ones. And because silicon carbide is one of the hardest known materials, it essentially requires diamond tooling to cut or polish it. As a result, the non-recurring engineering costs were very high, so the resulting substrate was quite expensive.

While more cost-effective alternatives exist, each comes with tradeoffs, as with any technology. And as the field of view grows toward Orion’s industry-leading roughly 70 degrees, new issues arise, like ghost images and rainbows.

“Finding the optimum solution for a wide field of view AR display is fraught with tradeoffs between performance and costs,” explains Director of Research Science Barry Silverstein. “Costs can often be driven down, but if performance doesn’t cut it, costs won’t ultimately matter.”

Ghost images are like visual echoes of the primary image being projected onto the display. Rainbows are colorful streaks of light that are created when ambient light reflects off of the waveguide. “Say you’re driving at night with moving car lights around you,” says Silverstein. “You’re going to have rainbows that move, too. Or if you’re out on the beach playing volleyball and the sun’s shining, you’re going to get a rainbow streak that’s moving with you and you’ll miss your shot. And one of the miraculous properties of silicon carbide is that it gets rid of those rainbows.”

“The other advantage of silicon carbide that none of the other materials have is thermal conductivity,” Calafiore adds. “Plastic is a terrible thermal conductor. Glass, lithium niobate, same thing. You go to silicon carbide and it’s transparent, looks like glass, and guess what: It conducts heat.”

So in July 2020, the team determined that silicon carbide was the optimum choice for three primary reasons: It led to an improved form factor because it required only a single plate and smaller mounting structures, it had better optical properties, and it was lighter than dual-plate glass.

The Secret of Slant Etch

With the material in mind, the next nut to crack was the fabrication of the waveguides—and specifically, an unconventional grating technique called slant etch.

“The grating is the nanostructure that in-couples and out-couples the light from the lens,” explains Calafiore. “And for silicon carbide to work, the grating needs to be slant-etched. Instead of being vertical, you want the lines of the grating to be sloped diagonally.”
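As a rough, self-contained illustration of how the grating pitch and the substrate’s index interact (the numbers and pitch are hypothetical, and this sketch ignores the slanted profile, which mainly affects how efficiently light is steered toward the desired diffraction order), the first-order grating equation shows when in-coupled light stays trapped by total internal reflection:

```python
import math

# Illustrative numbers only -- not Orion's actual design parameters.
wavelength_nm = 530.0      # green light
n_substrate   = 2.7        # silicon carbide (approximate value quoted above)
pitch_nm      = 350.0      # hypothetical in-coupler grating pitch

# First-order grating equation at normal incidence:
#   n * sin(theta_diffracted) = wavelength / pitch
sin_theta = wavelength_nm / (n_substrate * pitch_nm)
theta_deg = math.degrees(math.asin(sin_theta))

# Total internal reflection inside the plate requires sin(theta) > 1/n.
sin_critical = 1.0 / n_substrate

print(f"diffraction angle inside the plate: {theta_deg:.1f} deg")
print(f"guided by TIR? {sin_theta > sin_critical}")

# At normal incidence the pitch must sit between wavelength/n and wavelength
# for the first order to both propagate and stay trapped; a higher index
# widens that window, part of why silicon carbide supports a larger field of view.
```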

“We were the first ones to do slant etch directly on the devices,” says Research Manager Nihar Mohanty. “The whole industry used to rely on nanoimprint, which doesn’t work for substrates with such a high refractive index. That’s why no one else in the world had thought about doing silicon carbide.”

But because slant etch is an immature technology, most semiconductor chip suppliers and factories lack the necessary tools.

“Back in 2019, my manager at the time, Matt Colburn, and I established our own facility as there was nothing in the world that could produce etched silicon carbide waveguides and where we could prove out the technology beyond lab scale,” explains Mohanty. “It was a huge investment, and we established the whole pipeline there. The tooling was custom-made for us by our partners, and the process was developed in-house at Meta, though our systems are research-grade because there were no manufacturing-grade systems out there. We worked with a manufacturing partner to develop manufacturing-grade slant etch tooling and processes. And now that we’ve shown what’s possible with silicon carbide, we want others in the industry to start making tools of their own.”

The more companies that invest in optical-grade silicon carbide and develop equipment, the stronger the category of consumer AR glasses will become.

No Longer Chasing Rainbows

While technological inevitability is a myth, the stars certainly seem to be aligning in silicon carbide’s favor. And although the team continues to investigate alternatives, there’s a strong sense that the right people have come together at the right time under the right market conditions to build AR glasses using this material.

“Orion proved that silicon carbide is a viable option for AR glasses,” says Silverstein, “and we’re now seeing interest across the supply chain on three different continents where they’re heavily pursuing this as an opportunity. Silicon carbide will come out on top. It’s just a matter of time in my book.”

And a lot can happen in that time—much like the way the tables have turned since we grew our first clear silicon carbide crystals.

“All of these silicon carbide manufacturers had cranked up the supply massively in response to the expected EV boom,” notes Calafiore. “Right now, there’s an overcapacity that didn’t exist when we were building Orion. So now, because supply is high and demand is low, the cost of the substrate has started to come down.”

“Suppliers are very excited by the new opportunity of manufacturing optical-grade silicon carbide—after all, each waveguide lens represents a large amount of material relative to an electronic chip, and all of their existing capabilities apply to this new space,” adds Silverstein. “Filling your factory is essential, and scaling your factory is the dream. The size of the wafer matters, too: The bigger the wafer, the lower the cost—but the complexity of the process also goes up. That said, we’ve seen suppliers move from four-inch to eight-inch wafers, and some are working on precursors to 12-inch wafers, which would yield exponentially more pairs of AR glasses.”

These advancements should help continue to drive costs down. It’s still early days, but the future is coming into focus.

“At the beginning of any new technological revolution, you try a bunch of things,” says Calafiore. “Look at television: We started with cathode-ray tubes, and then we went to plasma and LED TVs, and now microLEDs. We went through several different technologies and architectures. As you pathfind, lots of paths end up nowhere, but there are some that you just keep coming back to as the most promising. We’re not at the end of the road, and we can’t do it alone, but silicon carbide is a wonder material that is well worth the investment.”

“The world is awake now,” adds Silverstein. “We’ve successfully shown that silicon carbide can flex across electronics and photonics. It’s a material that could have future applications in quantum computing. And we’re seeing signs that it’s possible to significantly reduce the cost. There’s a lot of work left to be done, but the potential upside here is huge.”


Learn more about silicon carbide in Photonics Spectra.

For more information on Orion, check out these blog posts:


Orion: True AR Glasses Have Arrived

TL;DR:

  • Today at Connect, Mark Zuckerberg unveiled Orion—our first pair of true AR glasses, previously codenamed Project Nazare.
  • With an industry-leading field of view, silicon carbide lenses, intricate waveguides, uLED projectors, and more, Orion is our most advanced and most polished product prototype to date.
  • Now, we’re continuing product optimizations and lowering costs as we drive toward a scalable consumer device that will revolutionize the way people interact with the world.

“We are building AR glasses.”

Five simple words, spoken five years ago. With them, we planted a flag on our vision of a future where we no longer have to make the false choice between a world of information at our fingertips and the physical world all around us.

And now, five years later, another five words that stand to once again change the game:

We have built AR glasses.

https://youtube.com/watch?v=2CJsnyS8u3c%3Fsi%3DTr77gOKEq3PnA7-k%26controls%3D0%3Fautoplay%3D0%26color%3Dwhite%26cc_lang_pref%3Den%26cc_load_policy%3D0%26enablejsapi%3D1%26frameborder%3D0%26hl%3Den%26rel%3D0%26origin%3Dhttps%253A%252F%252Fwww.meta.com

At Meta, our mission is simple: give people the power to build community and bring the world closer together. And at Reality Labs, we build tools that help people feel connected anytime, anywhere. That’s why we’re working to build the next computing platform that puts people at the center so they can be more present, connected, and empowered in the world.

Ray-Ban Meta glasses have demonstrated the power of giving people hands-free access to key parts of their digital lives from their physical ones. We can talk to a smart AI assistant, connect with friends, and capture the moments that matter—all without ever having to pull out a phone. These stylish glasses fit seamlessly into our everyday lives, and people absolutely love them.

Yet while Ray-Ban Meta opened up an entirely new category of display-less glasses super-charged by AI, the XR industry has long dreamt of true AR glasses—a product that combines the benefits of a large holographic display and personalized AI assistance in a comfortable, stylish, all-day wearable form factor.

And today, we’ve pushed that dream closer to reality with the unveiling of Orion, which we believe is the most advanced pair of AR glasses ever made. In fact, it might be the most challenging consumer electronics device produced since the smartphone. Orion is the result of breakthrough inventions in virtually every field of modern computing—building on the work we’ve been doing at Reality Labs for the last decade. It’s packed with entirely new technologies, including the most advanced AR display ever assembled and custom silicon that enables powerful AR experiences to run on a pair of glasses using a fraction of the power and weight of an MR headset.

Orion’s input system seamlessly combines voice, eye gaze, and hand tracking with an EMG wristband that lets you swipe, click, and scroll while keeping your arm resting comfortably by your side, letting you stay present in the world and with the people around you as you interact with rich digital content.

Beginning today at Connect and continuing throughout the year, we’re opening up access to our Orion product prototype for Meta employees and select external audiences so our development team can learn, iterate, and build towards our consumer AR glasses product line, which we plan to begin shipping in the near future.

Why AR Glasses?

There are three primary reasons why AR glasses are key to unlocking the next great leap in human-oriented computing.

  1. They enable digital experiences that are unconstrained by the limits of a smartphone screen. With large holographic displays, you can use the physical world as your canvas, placing 2D and 3D content and experiences anywhere you want.
  2. They seamlessly integrate contextual AI that can sense and understand the world around you in order to anticipate and proactively address your needs.
  3. They’re lightweight and great for both indoor and outdoor use—and they let people see each other’s real face, real eyes, and real expressions.

That’s the north star our industry has been building towards: a product combining the convenience and immediacy of wearables with a large display, high-bandwidth input, and contextualized AI in a form factor that people feel comfortable wearing in their daily lives.

Compact Form Factor, Compounded Challenges

For years, we’ve faced a false choice—either virtual and mixed reality headsets that enable deep, immersive experiences in a bulky form factor or glasses that are ideal for all-day use but don’t offer rich visual apps and experiences given the lack of a large display and corresponding compute power.

But we want it all, without compromises. For years, we’ve been hard at work taking the incredible spatial experiences afforded by VR and MR headsets and miniaturizing the technology necessary to deliver those experiences in a pair of lightweight, stylish glasses. Nailing the form factor, delivering holographic displays, developing compelling AR experiences, and creating new human-computer interaction (HCI) paradigms—and doing it all in one cohesive product—is one of the most difficult challenges our industry has ever faced. It was so challenging that we thought we had less than a 10 percent chance of pulling it off successfully.

Until now.

A Groundbreaking Display in an Unparalleled Form Factor

At approximately 70 degrees, Orion has the largest field of view in the smallest AR glasses form factor to date. That field of view unlocks truly immersive use cases for Orion, from multitasking windows and big-screen entertainment to lifesize holograms of people—all digital content that can seamlessly blend with your view of the physical world.

For Orion, field of view was our holy grail. We were bumping up against the laws of physics and had to bend beams of light in ways they don’t naturally bend—and we had to do it in a power envelope measured in milliwatts.

Instead of glass, we made the lenses from silicon carbide—a novel application for AR glasses. Silicon carbide is incredibly lightweight, it doesn’t result in optical artifacts or stray light, and it has a high refractive index—all optical properties that are key to a large field of view. The waveguides themselves have really intricate and complex nano-scale 3D structures to diffract or spread light in the ways necessary to achieve this field of view. And the projectors are uLEDs—a new type of display technology that’s super small and extremely power efficient.

Orion is unmistakably a pair of glasses in both look and feel—complete with transparent lenses. Unlike MR headsets or other AR glasses today, you can still see each other’s real eyes and expressions, so you can be present and share the experience with the people around you. Dozens of innovations were required to get the industrial design down to a contemporary glasses form factor that you’d be comfortable wearing every day. Orion is a feat of miniaturization—the components are packed down to a fraction of a millimeter. And we managed to embed seven tiny cameras and sensors in the frame rims.

We had to maintain optical precision at one-tenth the thickness of a strand of human hair. And the system can detect tiny amounts of movement—like the frames expanding or contracting in various room temperatures—and then digitally correct for the necessary optical alignment, all within milliseconds. We made the frames out of magnesium—the same material used in F1 race cars and spacecraft—because it’s lightweight yet rigid, so it keeps the optical elements in alignment and efficiently conducts away heat.
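Purely as an illustration of what that kind of closed-loop digital correction can look like in software, here is a minimal sketch; the sensing function, the mapping constant, and the render-side hook are all assumptions, not a description of Orion’s actual alignment pipeline.

```python
import time

# Hypothetical names throughout: read_frame_deflection_um() stands in for
# whatever alignment sensing the system performs, and apply_display_offset()
# stands in for the render-side correction.

MICRONS_PER_PIXEL = 2.5  # assumed mapping from frame deflection to image shift

def read_frame_deflection_um() -> float:
    """Placeholder: current frame expansion/contraction, in microns."""
    return 0.8

def apply_display_offset(pixels: float) -> None:
    """Placeholder: nudge the projected image to keep the optics aligned."""
    print(f"shifting projected image by {pixels:.2f} px")

# Re-measure and re-correct every few milliseconds, per the article.
for _ in range(3):  # bounded here so the sketch terminates
    deflection = read_frame_deflection_um()
    apply_display_offset(deflection / MICRONS_PER_PIXEL)
    time.sleep(0.005)
```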

Heating and Cooling

Once we cracked the display (no pun intended) and overcame the physics problems, we had to navigate the challenge of really powerful compute alongside really low power consumption and the need for heat dissipation. Unlike today’s MR headsets, you can’t stuff a fan into a pair of glasses. So we had to get creative. A lot of the materials used to cool Orion are similar to those used by NASA to cool satellites in outer space.

We built highly specialized custom silicon that’s extremely power efficient and optimized for our AI, machine perception, and graphics algorithms. We built multiple custom chips and dozens of highly custom silicon IP blocks inside those chips. That lets us take the algorithms necessary for hand and eye tracking as well as simultaneous localization and mapping (SLAM) technology that normally takes hundreds of milliwatts of power—and thus generates a corresponding amount of heat—and shrink it down to just a few dozen milliwatts.

And yes, custom silicon continues to play a critical role in product development at Reality Labs, despite what you may have read elsewhere. 😎

Effortless EMG

Every new computing platform brings with it a paradigm shift in the way we interact with our devices. The invention of the mouse helped pave the way for the graphical user interfaces (GUIs) that dominate our world today, and smartphones didn’t start to really gain traction until the advent of the touch screen. And the same rule holds true for wearables.

We’ve spoken about our work with electromyography, or EMG, for years, driven by our belief that input for AR glasses needs to be fast, convenient, reliable, subtle, and socially acceptable. And now, that work is ready for prime time.

Orion’s input and interaction system seamlessly combines voice, eye gaze, and hand tracking with an EMG wristband that lets you swipe, click, and scroll with ease.

It works—and feels—like magic. Imagine taking a photo on your morning jog with a simple tap of your fingertips or navigating menus with barely perceptible movements of your hands. Our wristband combines a high-performance textile with embedded EMG sensors to detect the electrical signals generated by even the tiniest muscle movements. An on-device ML processor then interprets those EMG signals to produce input events that are transmitted wirelessly to the glasses. The system adapts to you, so it gets better and more capable at recognizing the most subtle gestures over time. And today, we’re sharing more about our support for external research focused on expanding the equity and accessibility potential of EMG wristbands.
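To make the shape of that pipeline concrete, here is a minimal, hypothetical sketch of the path from raw EMG samples to an input event; the class names, window size, sample rate, and toy classifier are placeholders, not Meta’s wristband firmware.

```python
from collections import deque
from typing import List, Optional

WINDOW = 40  # e.g. roughly 200 ms of samples at an assumed 200 Hz rate

class EmgGestureDecoder:
    def __init__(self, classify):
        self.buffer = deque(maxlen=WINDOW)
        self.classify = classify  # an on-device model: window -> gesture label

    def push_sample(self, channels: List[float]) -> Optional[str]:
        """Feed one multi-channel EMG sample; return a gesture when detected."""
        self.buffer.append(channels)
        if len(self.buffer) < WINDOW:
            return None
        return self.classify(list(self.buffer))  # e.g. "pinch", "swipe", or None

def send_to_glasses(event: str) -> None:
    # Placeholder for the wireless link that carries input events to the glasses.
    print(f"input event -> glasses: {event}")

# Usage with a toy classifier that fires on high overall muscle activity:
def toy_classifier(window):
    energy = sum(abs(v) for sample in window for v in sample) / len(window)
    return "pinch" if energy > 1.0 else None

decoder = EmgGestureDecoder(toy_classifier)
for sample in ([0.1] * 8 for _ in range(39)):   # quiet baseline samples
    decoder.push_sample(list(sample))
event = decoder.push_sample([2.0] * 8)          # a burst of muscle activity
if event:
    send_to_glasses(event)
```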

Meet the Wireless Compute Puck

True AR glasses must be wireless, and they must also be small. So we built a wireless compute puck for Orion. It takes some of the load off of the glasses, enabling longer battery life and a better form factor with low latency.

The glasses run all of the hand tracking, eye tracking, SLAM, and specialized AR world locking graphics algorithms while the app logic runs on the puck to keep the glasses as lightweight and compact as possible.

The puck has dual processors, including one custom designed right here at Meta, and it provides the compute power necessary for low-latency graphics rendering, AI, and some additional machine perception.

And since it’s small and sleek, you can comfortably toss the puck in a bag or your pocket and go about your business—no strings attached.

AR Experiences

Of course, as with any piece of hardware, it’s only as good as the things you can do with it. And while it’s still early days, the experiences afforded by Orion are an exciting glimpse of what’s to come.

https://youtube.com/watch?v=HkdSv3QPhNw%3F%26controls%3D0%3Fautoplay%3D0%26color%3Dwhite%26cc_lang_pref%3Den%26cc_load_policy%3D0%26enablejsapi%3D1%26frameborder%3D0%26hl%3Den%26rel%3D0%26origin%3Dhttps%253A%252F%252Fwww.meta.com

We’ve got our smart assistant, Meta AI, running on Orion. It understands what you’re looking at in the physical world and can help you with useful visualizations. Orion uses the same Llama model that powers AI experiences running on Ray-Ban Meta smart glasses today, plus custom research models to demonstrate potential use cases for future wearables development.

You can take a hands-free video call on the go to catch up with friends and family in real time, and you can stay connected on WhatsApp and Messenger to view and send messages. No need to pull out your phone, unlock it, find the right app, and let your friend know you’re running late for dinner—you can do it all through your glasses.

https://youtube.com/watch?v=el7lUVvu8Bo%3F%26controls%3D0%3Fautoplay%3D0%26color%3Dwhite%26cc_lang_pref%3Den%26cc_load_policy%3D0%26enablejsapi%3D1%26frameborder%3D0%26hl%3Den%26rel%3D0%26origin%3Dhttps%253A%252F%252Fwww.meta.com

You can play shared AR games with family on the other side of the country or with your friend on the other side of the couch. And Orion’s large display lets you multitask with multiple windows to get stuff done without having to lug around your laptop.

The experiences available on Orion today will help chart the roadmap for our consumer AR glasses line in the future. Our teams will continue to iterate and build new immersive social experiences, alongside our developer partners, and we can’t wait to share what comes next.

A Purposeful Product Prototype

While Orion won’t make its way into the hands of consumers, make no mistake: This is not a research prototype. It’s the most polished product prototype we’ve ever developed—and it’s truly representative of something that could ship to consumers. Rather than rushing to put it on shelves, we decided to focus on internal development first, which means we can keep building quickly and continue to push the boundaries of the technology and experiences.

And that means we’ll arrive at an even better consumer product faster.

What Comes Next

Two major obstacles have long stood in the way of mainstream consumer-grade AR glasses: the technological breakthroughs necessary to deliver a large display in a compact glasses form factor and the necessity of useful and compelling AR experiences that can run on those glasses. Orion is a huge milestone, delivering true AR experiences running on reasonably stylish hardware for the very first time.

Now that we’ve shared Orion with the world, we’re focused on a few things:

  • Tuning the AR display quality to make the visuals even sharper
  • Optimizing wherever we can to make the form factor even smaller
  • Building at scale to make them more affordable

In the next few years, you can expect to see new devices from us that build on our R&D efforts. And a number of Orion’s innovations have been extended to our current consumer products today as well as our future product roadmap. We’ve optimized some of our spatial perception algorithms, which are running on both Meta Quest 3S and Orion. While the eye gaze and subtle gestural input system was originally designed for Orion, we plan to use it in future products. And we’re exploring the use of EMG wristbands across future consumer products as well.

Orion isn’t just a window into the future—it’s a look at the very real possibilities within reach today. We built it in our pursuit of what we do best: helping people connect. From Ray-Ban Meta glasses to Orion, we’ve seen the good that can come from letting people stay more present and empowered in the physical world, even while tapping into all the added richness the digital world has to offer.

We believe you shouldn’t have to choose between the two. And with the next computing platform, you won’t have to.


For more information on Orion, check out these blog posts:


Orion’s Compute Puck: The Story Behind the Device that Helped Make Our AR Glasses Possible

Last year at Connect, we unveiled Orion—our first true pair of AR glasses. The culmination of our work at Reality Labs over the last decade, Orion combines the benefits of a large holographic display and personalized AI assistance in a comfortable, all-day wearable form factor. It got some attention for its industry-leading field of view, silicon carbide waveguides, uLED projectors, and more. But today, we’re turning our attention to an unsung hero of Orion: the compute puck.

Designed to easily slip into your pocket or bag so you can bring it just about anywhere as you go about your day, the puck offloads Orion’s processing power to run application logic and enable a more compelling, smaller form factor for the glasses. It connects wirelessly to the glasses and EMG wristband for a seamless experience.

Set it and forget it, right?

But the puck’s backstory—even as a prototype—is involved, with a dramatic arc you’d never guess at from its appearance.

“When you’re building something like this, you start getting into the limits of physics,” explains Director of Product Management Rahul Prasad. “For the last 50 years, Moore’s Law has made everything smaller, faster, and lower power. The problem is that now you’re starting to hit limits on how much heat you can dissipate, how much battery you can compress, and how much antenna performance you can fit into a particular sized object.”

While hindsight may be 20/20, the puck’s potential wasn’t immediately obvious. When you’re the first to build something, you need to explore every possibility—leaving no stone unturned. How do you build something that some might think of as an undesirable accessory rather than a critical part of the package?

“We knew the puck was an extra device we were asking people to carry,” notes Product Design Engineering Manager Jared Van Cleave, “so we explored how to turn the bug into a feature.”

Ultimately, that ethos paid off in spades as the puck squeezes a lot of compute (and even more Meta-designed custom silicon for AI and machine perception) into a small size. This was instrumental in helping Orion go from the realm of science fiction into reality.

“If you didn’t have the puck, you wouldn’t be able to have the experiences that Orion offers in its form factor—period,” says Prasad. “A good AR experience demands really high performance: high frame rates, extremely low latency, fine-grained wireless and power management, etc. The puck and the glasses need to be co-designed to work really closely together, not just at the app layer, but also at the operating system, firmware, and hardware layers. And even if one were to co-design a smartphone to work with AR glasses, the demanding performance requirements would drain the phone battery and suck away compute capacity from phone use cases. On the other hand, the puck has its own high-capacity battery, high-performance SoC, and a custom Meta-designed AI co-processor optimized for Orion.”

Of course, the puck wasn’t designed overnight. It required years of iterative work.

“We didn’t know how people would want to interact with Orion from an input perspective—there was nothing we could draft off of in-market,” says Product Manager Matt Resman. “If you look at our early glasses prototypes, they were these massive headsets that weighed three or four pounds. And when you’re trying to build a product, it’s really difficult to understand the user experience if you’re not in the right form factor. With the puck, we were able to prototype very quickly to start understanding how people would use it.”

Codenamed Omega in the early days, the puck was initially envisioned by what was then Oculus Research as an Ω-shaped band that would go around the user’s neck and be hard-wired to the glasses…

… until some new innovations by the Reality Labs wireless team, among other things, allowed them to cut the cord. That enabled a more handheld, pocketable, or in-bag form factor, which opened up a lot of possibilities.

“At that point, augmented reality calling was still a primary use case,” Van Cleave explains. “The puck was where the holographic videos would be anchored. You’d put it down on the table with the sensor bench facing you, imaging you, and then projecting who you were speaking to from the puck’s surface for the call.”

“Orion is about connecting people and bringing us together,” says Resman. “One of the initial concepts for the puck was to help create this sense of presence with other people and enable this richer form of communication.”

“It’s unlike anything you’ve ever seen—the device has the potential to create really fun and unique interactions,” adds Industrial Designer Emron Henry. “The user experience feels a bit like unleashing a genie from a bottle, where holograms seamlessly emerge from and dissolve back into the device.”

As you can probably tell by now, there’s more potential inside the puck than what ultimately got turned on as a feature. In addition to the sensors and cameras that would’ve been used for AR calling, the puck has haptics and 6DOF sensors that could enable it to be used as a tracked controller to select and manipulate virtual objects and play AR games. The team also explored capacitive and force touch input so the puck could serve as a gamepad when held in both portrait and landscape mode.

“We would talk about the need to carry this extra thing,” says Van Cleave. “How do we make it more useful? There was a whole workstream around it. And at one point, the hypothesis was that AR gaming was going to be this killer use case.”

Early on, we knew we needed to explore the possibilities of the puck doing things that phones cannot. Eventually, we landed on using eye gaze, EMG, and hand tracking for AR games, like Stargazer and Pong, but we prototyped various demos and games that used the puck as a controller in the early days of Orion.


Rendered explorations of the puck as a 6DOF controller for AR games like Pong.

“Because it’s not a phone, that gave us a lot of design freedom,” adds Prasad. “It can be thicker, I can make it more rounded so it fits comfortably in your hand as a controller. It’s pretty incredible that the puck is actually smaller than the average phone, but it’s more powerful than a phone because it has a Meta-designed co-processor for AI and machine perception.”

The team dug into the question of how a quality AR game might differ from an MR or console game. That meant exploring different affordances for an AR game controller, including joysticks, physical button layouts, trigger buttons, and more.

“We didn’t end up actually building any of that stuff,” Van Cleave notes. “We prototyped it, but we didn’t ever go for a full build. We wanted to keep it simple. We wanted to do these soft interfaces but not make physical, mechanical buttons.”

And while the sensors and haptics weren’t turned on in the finished product prototype, they served an integral function during development, letting teams file bugs by simply tapping on the top of the device a few times to trigger a bug report.
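As an aside on how that kind of gesture can be wired up, here is a hypothetical sketch of detecting a few quick taps from accelerometer spikes; the thresholds and trigger logic are illustrative guesses, not the actual mechanism the team used.

```python
# Hypothetical tap-to-file-a-bug trigger based on motion-sensor spikes.
TAP_THRESHOLD_G = 2.0   # acceleration spike treated as a tap (assumed)
TAP_WINDOW_S    = 1.5   # taps must land within this window (assumed)
TAPS_TO_TRIGGER = 3

def detect_bug_report_gesture(samples):
    """samples: iterable of (timestamp_s, acceleration_g); True on three quick taps."""
    tap_times = []
    for t, accel in samples:
        if accel >= TAP_THRESHOLD_G:
            tap_times = [x for x in tap_times if t - x <= TAP_WINDOW_S]
            tap_times.append(t)
            if len(tap_times) >= TAPS_TO_TRIGGER:
                return True
    return False

# Toy trace: three sharp spikes inside 1.5 seconds triggers a report.
trace = [(0.0, 0.1), (0.2, 2.4), (0.5, 2.2), (0.9, 2.6), (1.2, 0.1)]
if detect_bug_report_gesture(trace):
    print("filing bug report with recent logs...")
```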

As AI began to take center stage as a key use case, compute came into increasingly sharp focus. In the end, the puck is home to Orion’s wireless connectivity, computing power, and battery capacity—a huge technical feat in itself—all of which helps reduce the weight and form factor of the glasses while dissipating far more heat by virtue of the puck’s larger surface area.

Throughout its evolution, one thing has remained the same: There’s more to the puck than meets the untrained eye. Embracing unconventional ideas allowed our teams to explore, push boundaries, and build the future.

“We’re defining a category that doesn’t quite exist yet,” notes Henry. “As you’d expect with R&D, there were starts and stops along the way. How will users expect to interact with holograms? Would they prefer to use an AR remote or is hand tracking, eye gaze, and EMG sufficient for input? What feels intuitive, low-friction, familiar, and useful?”

“Rather than looking at the puck as just a rock, we asked ourselves what else we could provide to further differentiate it from phones and justify why you’d want to carry it around,” acknowledges Resman. “Does its raw compute power and practical design—which helped unlock a glasses form factor that you can realistically wear around all day—ultimately offer enough value? It’s our job to help answer these questions.”

“Early on, we shifted gears to focus on what we can do that a phone can’t do,” Van Cleave adds. “Phones need to have the screen, they need to have the certain physical button layout that users expect. And we don’t have those constraints. Our compute puck can be whatever we want it to be.”


For more information on Orion, check out these blog posts:



EgoMimic: Georgia Tech PhD student uses Project Aria Research Glasses to help train humanoid robots

Today, we’re highlighting new research from Georgia Tech that helps train robots to perform basic everyday tasks using egocentric recordings from wearers of Meta’s Project Aria research glasses. Check out the video below, read the full story, or apply for your own Project Aria Research Kit.

Imagine having help completing everyday tasks in your home such as doing the laundry, washing dishes, and making repairs. We already use tools to help with these tasks, like washing machines, dishwashers, and electric drills. But what if you could have an even more powerful and flexible tool in the form of a humanoid robot that could learn from you and accelerate any number of physical projects on your to-do list?

Even with the hardware in hand, teaching a robot to do everyday tasks has typically required a slow and clunky data collection method called robot teleoperation. Until now. By using the Project Aria Research Kit, Professor Danfei Xu and the Robotic Learning and Reasoning Lab at Georgia Tech use the egocentric sensors on Aria glasses to create what they call “human data” for tasks that they want a humanoid robot to replicate. They use human data to dramatically reduce the amount of robot teleoperation data needed to train a robot’s policy—a breakthrough that could someday make humanoid robots capable of learning any number of tasks a human could demonstrate.

Kareer teleoperates the robot to capture co-training data for EgoMimic. Teleoperation can be difficult to scale and require significant human effort.

“Traditionally, collecting data for robotics means creating demonstration data,” says Simar Kareer, a PhD student in Georgia Tech’s School of Interactive Computing. “You operate the robot’s joints with a controller to move it and achieve the task you want, and you do this hundreds of times while recording sensor data, then train your models. This is slow and difficult. The only way to break that cycle is to detach the data collection from the robot itself.”

Today, robot policy models are trained with large amounts of targeted demonstration data specific to each narrow task at a high cost. Kareer hypothesizes that passively collected data from many researchers, like the data captured by Aria glasses, could instead be used to enable data creation for a much broader set of tasks to create more generally useful robots in the future.

Inspired by Project Aria and Ego-Exo4D, which includes a massive egocentric dataset of over 3,000 hours of video recordings of daily-life activities, Kareer developed EgoMimic, a new algorithmic framework that uses both human data and robot data for humanoid robot development.

“When I looked at Ego4D, I saw a dataset that’s the same as all the large robot datasets we’re trying to collect, except it’s with humans,” Kareer explains. “You just wear a pair of glasses, and you go do things. It doesn’t need to come from the robot. It should come from something more scalable and passively generated, which is us.” In Kareer’s research, Aria glasses were used to create human data for co-training the EgoMimic framework.
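In broad strokes, co-training of this kind optimizes a single policy on a weighted mix of the two data sources; the exact EgoMimic objective, supervision targets, and weighting are defined in Kareer’s paper, so the expression below is only a generic sketch:

$$\mathcal{L}(\pi) = \mathbb{E}_{(o,a)\sim\mathcal{D}_{\text{robot}}}\!\left[\ell\big(\pi(o), a\big)\right] + \lambda\, \mathbb{E}_{o\sim\mathcal{D}_{\text{human}}}\!\left[\ell_{\text{human}}\big(\pi(o)\big)\right],$$

where the first term is the usual imitation loss on teleoperated robot data, the second supervises the policy on egocentric Aria recordings, and λ balances the two.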

Kareer creates co-training human data by recording with Aria glasses while folding a t-shirt.

Aria glasses aren’t just used for human data collection in Georgia Tech’s research. They’re also used as an integral component of the robot’s real-time operation setup. Aria glasses are mounted to their humanoid robot platform just like a pair of eyes and serve as an integrated sensor package that enables the robot to perceive its environment in real time. The Aria Client SDK is utilized to stream Aria’s sensor data directly into the robot’s policy, running on an attached PC, which in turn controls the robot’s actuation. Using Aria glasses for both the data collection and the real-time perception pipeline minimizes the domain gap between the human demonstrator and the robot, paving the way for scaled human data generation for future robotics task training.
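For a sense of how the pieces fit together at runtime, here is a minimal sketch of the perceive-decide-act loop described above; the function and class names are hypothetical stand-ins, not the actual Aria Client SDK API or the lab’s robot stack.

```python
import time

def run_realtime_loop(stream_aria_frames, policy, robot, hz=15):
    """Stream Aria sensor frames into the policy and send actions to the robot."""
    period = 1.0 / hz
    for frame in stream_aria_frames():      # RGB, SLAM, and other sensor data
        action = policy.predict(frame)      # policy runs on the attached PC
        robot.apply(action)                 # command the humanoid's actuators
        time.sleep(period)                  # crude pacing for the sketch

# Toy stand-ins so the sketch runs end to end:
class StubPolicy:
    def predict(self, frame):
        return {"left_arm": 0.0, "right_arm": 0.1}

class StubRobot:
    def apply(self, action):
        print("actuating:", action)

def fake_stream():
    yield {"rgb": None, "slam": None}

run_realtime_loop(fake_stream, StubPolicy(), StubRobot())
```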

Aria glasses mounted to the top of the robot provide the system with sensor data that allows the robot to perceive and interact with the space.

Thanks to EgoMimic, Kareer achieved a 400% increase in his robot’s performance across various tasks compared with previous methods, using just 90 minutes of Aria recordings. The robot was also able to successfully perform these tasks in previously unseen environments.

In the future, humanoid robots could be trained at scale using egocentric data in order to perform a variety of tasks in the same way humans do.

“We look at Aria as an investment in the research community,” says James Fort, a Reality Labs Research Product Manager at Meta. “The more that the egocentric research community standardizes, the more researchers will be able to collaborate. It’s really through scaling with the community like this that we can start to solve bigger problems around how things are going to work in the future.”

Kareer will present his paper on EgoMimic at the 2025 IEEE International Conference on Robotics and Automation (ICRA) in Atlanta.

Introducing Aria Gen 2: Unlocking New Research in Machine Perception, Contextual AI, Robotics, and More

Since its launch in 2020, Project Aria has enabled researchers across the world to advance the state of the art in machine perception and AI, through access to cutting-edge research hardware and open-source datasets, models, and tooling. Today, we’re excited to announce the next step in this journey: the introduction of Aria Gen 2 glasses. This next generation of hardware will unlock new possibilities across a wide range of research areas including machine perception, egocentric and contextual AI, and robotics.

For researchers looking to explore how AI systems can better understand the world from a human perspective, Aria Gen 2 glasses add a new set of capabilities to the Aria platform. They include a number of advances not found on any other device available today, and access to these breakthrough technologies will enable researchers to push the boundaries of what’s possible.

Compared to Aria Gen 1, Aria Gen 2’s unique value proposition includes:

  • State-of-the-art sensor suite: The upgraded sensor suite features an RGB camera, 6DOF SLAM cameras, eye tracking cameras, spatial microphones, IMUs, barometer, magnetometer, and GNSS. Compared to its predecessor, Aria Gen 1, the new generation introduces two innovative sensors embedded in the nosepad: a PPG sensor for measuring heart rate and a contact microphone to distinguish the wearer’s voice from that of bystanders.
  • Ultra low-power and on-device machine perception: SLAM, eye tracking, hand tracking, and speech recognition are all processed on-device using Meta’s custom silicon.
  • All-day usability: Aria Gen 2 glasses are capable of six to eight hours of continuous use, weigh about 75 grams, and have foldable arms for easy portability.
  • Interaction through audio: Users get audio feedback via best-in-class open-ear force-canceling speakers, enabling user-in-the-loop system prototyping.

Our decade-long journey to create the next computing platform has led to the development of these critical technologies. At Meta, teams at Reality Labs Research and the FAIR AI lab will use them to advance our long-term research vision. Making them available to academic and commercial research labs through Project Aria will further advance open research and public understanding of a key set of technologies that we believe will help shape the future of computing and AI.

The open research enabled by Project Aria since 2020 has already led to important work, including the creation of open-source tools in wide use across academia and industry. The Ego-Exo4D dataset, collected using the first generation of Aria glasses, has become a foundational tool across modern computer vision and the growing field of robotics. Researchers at Georgia Tech recently showed how the Aria Research Kit can help humanoid robots learn to assist people in the home, while teams at BMW used it to explore how to integrate augmented and virtual reality systems into smart vehicles.

And Aria is also enabling the development of new technologies for accessibility. The first-generation Aria glasses were utilized by Carnegie Mellon University in their NavCog project, which aimed to build technologies to assist blind and low-vision individuals with indoor navigation. Building on this foundation, the Aria Gen 2 glasses are now being leveraged by Envision, a company dedicated to creating solutions for people who are blind or have low vision. Envision is exploring the integration of its Ally AI assistant and spatial audio using the latest Aria Gen 2 glasses to enhance indoor navigation and accessibility experiences.

Envision used the on-device SLAM capabilities of Aria Gen 2, along with spatial audio delivered through the onboard speakers, to help blind and low-vision individuals seamlessly navigate indoor environments. This innovative use of the technologies, which is still in the exploratory and research phase, exemplifies how researchers can leverage Aria Gen 2 glasses to prototype AI experiences based on egocentric observations. The advanced sensors and on-device machine perception capabilities, including SLAM, eye tracking, hand tracking, and audio interactions, also make the glasses ideal for data collection in research and robotics applications.
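As a rough illustration of the underlying idea of combining a SLAM-estimated pose with spatial audio for guidance, the toy function below turns the wearer’s estimated 2D pose and a target waypoint into a stereo pan value that points the cue toward the target. It is a conceptual sketch only, not Envision’s or Aria Gen 2’s actual implementation, and all names in it are hypothetical.

```python
import math

def audio_pan_toward(wearer_xy, wearer_heading_rad, target_xy):
    """Return a stereo pan in [-1, 1]: -1 = fully left, +1 = fully right."""
    dx, dy = target_xy[0] - wearer_xy[0], target_xy[1] - wearer_xy[1]
    bearing = math.atan2(dy, dx)  # world-frame direction to the target
    # Angle of the target relative to where the wearer is facing, wrapped to [-pi, pi).
    relative = (bearing - wearer_heading_rad + math.pi) % (2 * math.pi) - math.pi
    # A positive relative angle means the target is to the left; map it to a pan value.
    return max(-1.0, min(1.0, -relative / (math.pi / 2)))

# Example: wearer at the origin facing +x, waypoint ahead and slightly to the right.
print(audio_pan_toward((0.0, 0.0), 0.0, (3.0, -1.0)))  # ~0.2, i.e. cue slightly to the right
```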

Over the coming months, we’ll share more details about the timing of device availability to partners. Researchers interested in accessing Aria Gen 2 can sign up to receive updates. We’re excited to see how researchers will leverage Aria Gen 2 to pave the way for future innovations that will shape the next computing platform.

Ray-Ban Meta – Style and Technology in One Frame

Ray-Ban has been an iconic name in eyewear fashion for decades. But in collaboration with Meta (formerly Facebook), they’ve launched a product that pushes boundaries – the Ray-Ban Meta smart glasses. This fusion of classic design with modern technology offers a unique experience for both fashion enthusiasts and tech lovers.

At First Glance: Classic with a Twist

At first look, they seem like classic Ray-Bans – elegant black frames, premium craftsmanship, and a stylish eco-leather case. But take a closer look and you’ll realize they hide more than just style – they’re glasses with a built-in camera, microphone, speakers, and smartphone connectivity.

What Can Ray-Ban Meta Do?

  • Video and photo capture: A discreet camera on the frame lets you capture the world from your point of view – hands-free.
  • Music and calls: With built-in directional speakers and microphones, you can take calls or listen to music without earbuds.
  • Sync with the Meta View app: Everything you record syncs automatically to the app, ready to be shared on your socials.
  • Voice control: Thanks to voice assistant support, you can control your glasses with simple voice commands – ideal for driving, sports, or on-the-go use.

Who Are They For?

Ray-Ban Meta glasses are perfect for content creators, influencers, travelers, or anyone who wants to capture moments without pulling out their phone. They’re also great for business use – ideal for quick documentation, live streams, or mobile video calls.

Final Verdict: Smart Elegance Without Compromise

If you’re looking for eyewear that blends classic fashion with top-tier technology, Ray-Ban Meta is the clear choice. They’re not just a stylish accessory – they’re smart glasses that redefine how we see and capture the world around us.

Meta View Is Now the Meta AI App: Here’s What’s New & What Stays the Same

An important update for Ray-Ban Meta glasses owners today: The Meta View app is now the Meta AI app on iOS and Android. Along with a new name, your AI glasses companion app is getting an upgrade, with new features to make your experience more fun, useful, and personal than ever before. We’re here to walk you through the changes. But rest assured, the core features you’ve come to know will stay the same.

Glasses have emerged as the most exciting new hardware category of the AI era, with Ray-Ban Meta glasses leading the way in defining what’s possible. We’re excited to bring the same voice experiences that have made our glasses so popular to more surfaces, like the Meta AI app.

Friendly & Familiar

While the Meta AI app has a new name and look, it’s still the companion app for the Ray-Ban Meta glasses you know and love. The existing photos and videos you’ve captured with your glasses, your device settings, and the features you’re familiar with from Meta View are still available in the new app. We’re also automatically migrating your existing media and device settings from Meta View to the Meta AI app, so you won’t miss a beat. And as we rolled out in our last update, you can use the Meta AI app to pair and manage up to seven pairs of glasses.

An All-In-One Companion App

New to Ray-Ban Meta glasses? The Meta AI app is your all-in-one companion to set up and manage your device. You can import, view, and share the photos and videos captured with your AI glasses directly from the app. You’ll also use the Meta AI app to configure your settings, update your software, and manage privacy options. You can set up voice commands from the app, so you can use your glasses hands-free, and even adjust your camera settings.

By integrating Meta AI and all that it offers into the companion app, we’re leveling up your Ray-Ban Meta glasses experience. You can get creative with the photos from your glasses directly in the Meta AI app—after importing your photos, ask Meta AI in the app to add, remove, or change parts of the image.

Checking out a new city? Ask Meta AI to tell you more about a local landmark. You can access and manage saved memories to improve your personalized responses from Meta AI on the app and your glasses over time. You can also start a conversation with Meta AI on your Ray-Ban Meta glasses, then access it in your history tab from the app to pick up where you left off.

New & Notable Features in the App

The Meta AI app was designed to help you find inspiration and get the most out of your glasses.

The new Discover feed lets you see how other people are using AI to get things done. It’s a great way to find new ideas. Recycle a clever prompt or remix it to make it your own. And as always, you’re in control: Your conversations with Meta AI are not public unless you choose to share them to the feed.

While speaking with AI using your voice isn’t new, we’re using Llama 4 in the Meta AI app to bring you responses that feel more personal, relevant, and conversational. The experience integrates with other features like image generation and editing, which can now all be done through a voice or text conversation with your AI assistant in the app.

We’ve also included a demo of a full-duplex speech voice experience we’re building, which you can toggle on and off to test in the Meta AI app. This feature doesn’t have access to real-time information, so you may encounter technical issues or inconsistencies. We’ll continue to gather feedback to improve it over time.

And to make it easier to multitask, you can also continue to use Meta AI’s voice features while doing other things on your phone, with a visible icon to let you know when the microphone is in use.

Voice conversations in the Meta AI app, including the full duplex demo, are available in the US, Canada, Australia, and New Zealand to start.* To learn more about how you can manage your experience on the Meta AI app and toggle between modes, visit our help center.

Intelligence for You

The Meta AI app uses Llama 4 to help you solve problems, move through your day more easily, and better understand the world around you. And with the ability to search across the web, it can help you get recommendations, deep dive on a topic, and stay connected with your friends and family.

We’re using our decades of experience personalizing people’s experiences on our platforms to make Meta AI more personal. You can tell Meta AI to remember specific details about you (like that you love to travel and learn new languages), and it can pick up important details from context cues. Meta AI also delivers more relevant answers to your questions by drawing on information you’ve already chosen to share on Facebook and Instagram (if they’re added to the same Accounts Center), like profile details and content you like or engage with. Personalized responses are available today in the US and Canada.

We’re Just Getting Started

This is the first version of the Meta AI app, and we plan to iterate and improve it over time. Meta AI is built to get to know you, so its answers are more helpful. It’s easy to talk to, so it’s more seamless and natural to interact with. And it’s more connected, so it can share things from the people and places you care about.

Give it a try today on iOS or Android, and learn more about the new and improved Meta AI.

*Voice conversations in New Zealand available in-app only, not on Ray-Ban Meta glasses.

Fashion Forward: Amelia Gray Spotted in Ray-Ban Meta Glasses on the Met Gala Red Carpet

High couture took center stage last night on the first Monday in May. And you may have spotted some familiar fashion-forward tech on the proverbial red carpet. As part of her custom look, Amelia Gray sported a pair of Ray-Ban Meta glasses.

CREDIT: Getty Images Entertainment

The AI glasses—Shiny Black Skyler frames with Polarized Green sun lenses—were the perfect accessory to capture photos and videos throughout the day while also helping her get ready. From taking calls hands-free and listening to music or a podcast to getting answers from Meta AI on the fly, Ray-Ban Meta glasses make any pre-event routine anything but.

This isn’t the first time our AI glasses have gone high fashion. Earlier this year, Gray debuted our sold-out Ray-Ban Meta x Coperni Limited Edition frames on the runway at Paris Fashion Week. Ray-Ban Meta glasses are fashion’s most useful tech accessory, gaining traction among editors, stylists, and models from the pages of magazines to the runway—and now on the red carpet of fashion’s biggest night of the year.

“I’m still radiating from the night!” said Amelia Gray. “And to know that I will be able to relive so many of those special moments from my own perspective is truly surreal. It’s an entirely new way to experience and share these once-in-a-lifetime moments, and I couldn’t be more excited to be part of Ray-Ban Meta’s Met Gala debut. ”

Inside the event, Gray was joined by Charli XCX, Jeff Goldblum, Jenna Ortega, Kim Kardashian, Nicki Minaj, Rosé, Shaboozey, and Taraji P. Henson as guests at Instagram’s table along with Adam Mosseri and Eva Chen.

Photographer: Quil Lemons

And off the carpet, Instagram and Threads hosted a number of icons who are driving the cultural conversation forward at The Mark Hotel for our 5th annual Watch Party. More than 50 creators and pop culture and fashion commentators gathered to watch the red carpet arrivals and share real-time looks on Instagram and Threads.

CREDIT: Huy Luong

Styles may come and go, but fashion is forever. And the future of AI glasses is looking bright.

New Ray-Ban | Meta Smart Glasses Styles and Meta AI Updates

Takeaways

  • We’re expanding the Ray-Ban Meta smart glasses collection with new styles.
  • We’re adding video calling with WhatsApp and Messenger, so you can share your view on a call. 
  • We’re rolling out Meta AI with Vision, so you can ask your glasses about what you’re seeing and get helpful information — completely hands-free. 

Our second-generation smart glasses, in partnership with EssilorLuxottica, have been flying off the shelves — they’re selling out faster than we can make them. And just in time for sunglasses season, we’re expanding the Ray-Ban Meta smart glasses collection with new styles designed to fit more face shapes so you can find the perfect pair. We’re also adding new features, including updates to Meta AI, to make the glasses even more useful.

A Frame for Every Face

Looking for a vintage vibe? Our new Skyler frames feature a cat eye design inspired by an era of iconic jet-set style, designed to suit smaller faces. 

We’re also adding a new low bridge option for our Headliner frames. If your glasses tend to slide down your nose, sit too low on your face or press on your cheeks, this is the fit for you. 

There are hundreds of different custom frame and lens combinations on the Ray-Ban Remix platform, so you can mix and match to make the glasses your own on ray-ban.com. And our new styles are designed to be prescription lens compatible. Skyler and the new Headliner low bridge fit are available for pre-order now on meta.com and ray-ban.com. These new styles are available in 15 countries, including the US, Canada, Australia, and throughout Europe.

Photo of 6 different pairs of Ray-Ban Meta frames

We’re also introducing the first limited-edition Ray-Ban Meta smart glasses in an exclusive Scuderia Ferrari colorway for Miami 2024. Ray-Ban Meta for Scuderia Limited Edition brings together the legacy of Ferrari, timeless Ray-Ban design and cutting-edge tech from Meta, available April 24, 2024.

Share Your View on a Video Call

From a breathtaking vista on a hike to experiencing your kid’s first steps, there are some moments in life that are just meant to be shared. That’s why we’re adding the ability to share your view on a video call via WhatsApp and Messenger, completely hands-free. 

Image showing hands-free video calling on Ray-Ban Meta smart glasses - a father cooking with his child

At the grocery store and not sure which brand of kombucha to buy? Can’t tell if that pineapple is ripe? Now you can hop on a video call with your mom and get her advice based on what you see. Video calling on Ray-Ban Meta smart glasses is rolling out gradually, so if you don’t see the update right away, sit tight — it’s on the way!

Meta AI Makes Your Smart Glasses Even Smarter

From integrated audio to an ultra-wide 12 MP camera, Ray-Ban Meta smart glasses are jam-packed with tech. And in the US and Canada, you also get Meta AI — an intelligent assistant that helps you get things done, create and connect with the people and things you care about. Just say, “Hey Meta,” and ask away! You can control the glasses using voice commands, and thanks to Meta AI, even get access to real-time information.

We started testing a multimodal AI update in December, so you can ask your glasses about what you’re seeing, and they’ll give you smart, helpful answers or suggestions. That means you can do more with your glasses because now they can see what you see. Starting today, we’re rolling this functionality out to all Ray-Ban Meta smart glasses in the US and Canada in beta. 

Say you’re traveling and trying to read a menu in French. Your smart glasses can use their built-in camera and Meta AI to translate the text for you, giving you the info you need without having to pull out your phone or stare at a screen. 

Image showing a woman wearing Ray-Ban Meta smart glasses using Meta AI feature