In Focus: CREAL Offers Novel XR Light Field Solution

The Swiss startup is shaping a new vertical in the XR industry to 'transform' users' immersive experiences

CREAL Demo

Published: October 1, 2021

Demond Cureton

One of the most crucial components of the virtual, augmented, and mixed reality (VR/AR/MR) market is light field technology, a relatively untapped vertical for the industry.

According to a report from Markets and Research, the light field technology market is expected to double to $154 million by 2026, offering major opportunities as the sector matures.

XR Today spoke with Dr Tomas Sluka, Chief Executive and Founder of CREAL, a Swiss-based startup developing light field solutions for the extended reality (XR) market.

XR Today: Why is it important to develop light field solutions? What are the components or technologies needed to boost efficiency in augmented reality products?

Tomas Sluka: It’s true in general for all parts of the XR ecosystem and technical infrastructure, but looking back at the first virtual and, later, mixed and augmented reality solutions from [companies such as] Magic Leap, they were insufficient, as people generally experienced eye strain or strange visuals.

I’m probably more sensitive than others to this, which is why I started looking into these solutions, but when people encounter these initial problems, they tend to experience mild nausea after roughly 20 minutes.

There are multiple reasons for this, but one remains fundamentally unsolved: our eyes do not behave naturally in VR/AR. You’ve probably noticed that the lens in the eye is a very sophisticated mechanism which constantly adjusts its focus, as it is an essential part of our vision, right?

This is completely ignored in AR/VR: the displays we have today are essentially flat, which means the images you find today are unnatural. We use VR/AR because there is an expectation that these tools will become the next communication platform, as viewing digital information the way we see the real world is simply a more mature medium.

This is how we’re accustomed to viewing things, but the problem is that we would like to see perfect digital images which look realistic, as if they were in our own hands, and today’s solutions are [blurred].

This is the fundamental problem: it’s not one of insufficient display resolution or optics, but of a fundamentally wrong method of displaying 3D images. The reason is that you don’t actually view a 3D world with today’s display technologies; you see two flat images, each showing a slightly different angle of the virtual world to each eye.

This gives a stereoscopic illusion of depth, but each image is flat, and you can experience the problem with your own eyes. If you close one eye and look at the text on your screen while holding your hand in front of it, you cannot focus on both your hand and the text at the same time.

This is the massive problem for AR. We would like to see an image with real dimensional depth, so you can shift your focus within it: you can focus on a butterfly nearby or on a tree far away. This is exactly what our company enables.

If you try to adjust your eyes the same way with Microsoft HoloLens or Magic Leap headsets, you can’t read the text, which may confuse your eyes and trigger nausea.
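To put rough numbers on the mismatch Sluka describes, the sketch below compares where the eyes converge with where a fixed-focus stereo headset forces them to focus, expressed in dioptres; the focal-plane and object distances are illustrative assumptions, not CREAL figures.

```python
# Illustrative only: vergence-accommodation mismatch in a fixed-focus stereo headset.
# The distances below are assumed example values, not measurements from any device.

def dioptres(distance_m: float) -> float:
    """Optical power required to focus at a given distance (1 / distance in metres)."""
    return 1.0 / distance_m

DISPLAY_FOCAL_PLANE_M = 2.0   # assumed fixed focal distance of a conventional headset
VIRTUAL_OBJECT_M = 0.3        # a virtual object held "in your own hands"

accommodation = dioptres(DISPLAY_FOCAL_PLANE_M)  # where the eye must focus (the screen)
vergence = dioptres(VIRTUAL_OBJECT_M)            # where the two eyes converge (the object)

print(f"Accommodation demand: {accommodation:.2f} D")
print(f"Vergence demand:      {vergence:.2f} D")
print(f"Mismatch:             {vergence - accommodation:.2f} D")  # ~2.8 D in this example
```

The larger this gap, the harder the close-up reading task Sluka describes becomes in a fixed-focus headset.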

XR Today: In which kinds of industry use cases could your solution be applied, either across the European XR industry or globally?

Tomas Sluka: This focus effect is essential to the long-term future of AR/VR. These aren’t just my words; practically every technical person at the large companies working in this field wonders when these solutions will mature, and they basically know the effect is necessary for the future.

Although there are additional possibilities, ours is super practical. Many solutions improve over time; at the moment they are more difficult, more expensive, and bulkier, as they tend to be in the beginning.

Most importantly, when you want to see things up close in your own field of vision, you may notice your eyes need to shift focus between individual objects at different distances.

If you’re two metres away, you don’t need such precision, but when you want to visualise things up close for a long time, problems can arise, so you can imagine a range of applications here.

There’s a lot of opportunity in VR medical training for performing or rehearsing surgeries: some organs are small, and you don’t want to feel sick as time passes while training or preparing for a procedure, so there is a lot of demand here.

We must also solve additional problems and improve the overall quality and ergonomics of the solution, but beyond that, just imagine that we can make light behave as realistically as possible.

We can transform the light as if it were refracted in a glass of water, or apply other optical effects, which also means we can digitally emulate the function of our natural lenses.

For example, when you go to the optometrist for your glasses, there’s an unusual machine full of lenses and mechanical devices which places different ‘lenses’ in front of your eyes.

Now imagine that we can display any content with multiple adjustments at the same time. We can determine with which adjustment you see the ‘letter D’ best and get your ‘prescription’ much more precisely and much faster.

So, the current market provides devices that are super expensive and bulky, and the content is basically printed paper on the wall. We would like to provide devices which are cheaper and smaller by default.

XR Today: Let’s look further into the technological aspects of your solution. How would it be installed in VR or AR devices, and are there any physical components for object detection in your light field solution?

Tomas Sluka: I’m glad you mentioned the word ‘detection’, because our solution does not require any detection for the focus effect itself, though it does use the usual methods of spatial and hand tracking.

This is something you can buy off the shelf today, but for the focusing effect itself, you don’t need eye tracking. This is super important, because so many people try to solve the same problem using eye tracking and optics adaptation, which makes the content very unreliable and unviable long-term.

Our solution simply works to recreate the light correctly, and after that, it’s entirely independent of where you are looking, what you are doing, and so on. It’s like asking how the real world knows where you are looking in order to provide the proper image; our light field solution works in the same way.

Looking at the footage, you can imagine many slightly different discrete viewpoints, each carrying a different perspective of what you want to see to the eye, and together they appear as a proper image.

In reality, you have an array of point light sources, combined with a reflective, high-speed modulator. You flash one light, which hits the modulator and reflects an image that is projected to the eye through one viewpoint.

Say we want to see a butterfly in the tree: if you flash another light source, you see a slightly different perspective from that point. This process takes place rapidly, at around six kilohertz, and the light is integrated in your eye just as it behaves in real life.

As for the basic components, you can see this is a projection system which delivers the image to the eye. The individual images are projected and reflected from, in this case, a holographic combiner, to the eyes via an array of viewpoints. Each of them carries a specific geometry of what you see, and altogether they recreate the scene with objects at any distance: a very simple concept.

The individual images are refreshed at 6 kilohertz, or 6,000 times per second, so you can focus closely on them. With current solutions running at 60 hertz, if you move too fast, you will see a ‘chain’ of images, but not with ours. We can distribute the individual components as needed and also provide larger fields of view.
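As a minimal sketch of the time multiplexing described above, assume each flash of a light source presents one viewpoint's image and that the eye blends everything arriving within a short integration window; the viewpoint count and window length below are illustrative guesses, not CREAL's specification.

```python
# Conceptual sketch of time-multiplexed light field projection (illustrative numbers only).
# One flash = one viewpoint's image; the eye integrates all flashes within its window,
# so the viewpoints blend into a single image that can be refocused naturally.

FLASH_RATE_HZ = 6_000   # flashes per second, per the ~6 kHz figure in the interview
EYE_WINDOW_S = 0.010    # assumed temporal integration window of the eye (~10 ms)
NUM_VIEWPOINTS = 30     # assumed size of the viewpoint array

flashes_per_window = int(FLASH_RATE_HZ * EYE_WINDOW_S)
passes_over_array = flashes_per_window / NUM_VIEWPOINTS

print(f"Flashes integrated per window: {flashes_per_window}")                  # 60
print(f"Passes over the viewpoint array per window: {passes_over_array:.1f}")  # 2.0
```

With these assumed numbers the eye sees the full viewpoint array twice within a single integration window, which is why fast movement does not leave the 'chain' of discrete images a slower display would show.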

XR Today: What are your thoughts on the overall European XR market, namely its strengths and the challenges it faces, as firms build future product ecosystems?

Tomas Sluka: I think that Europe is again doing what it has normally done for a long time, which is to work hard in the background on components, software, and so on, but not really on the end product.

Europe normally works in the high-end market, in sectors such as medical. Similarly, we have Siemens and Philips going for the premium market, which is smaller than the consumer market that covers laptops, smartphones, tablets, and other devices, including AR.

I think that Europe is currently waking up to capture more, namely in software, hopefully. There’s also massive development in artificial intelligence (AI), which will eventually support this as well.

Companies such as Unity [which was founded in Denmark in 2004] are now based in the US, but smart glass manufacturers like Luxottica, SLR, and others may play a significant role in VR/AR, although in Europe you don’t find many massive consumer brands like Apple, Facebook, or the major Asian firms.

XR Today: There was a report released on Friday which stated the light field sector would double by 2026, signalling a lot of potential for solutions such as yours due to the rising popularity of stereoscopic imaging, AR/VR solutions, and other products. So I believe your solution will definitely contribute to building that ecosystem leading up to 2026.

Tomas Sluka: Yes, I hope so, but we should be clear that much of that market is currently different from what we are speaking about here, as the current focus is on volumetric capture and large light field screen panels, along with the image processing behind them. Those 3D displays must satisfy many different viewpoints, and calculate and project all of these images, which is where much of the effort in light field technologies goes today.

Our processing requirements are very similar to classical stereo, and more is not required, but when this is explained to experts in the field, they treat it with a bit of scepticism and assume the limiting factor is the eye itself. So we are one of a few companies, if not the only one, working on rendering light fields directly from the original projections at the moment, which makes much more sense to do.
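To illustrate why the rendering load can stay close to classical stereo, one can picture a handful of ordinary perspective viewpoints clustered around each pupil instead of the dense viewpoint grids a large light field panel must serve; the offsets and counts below are hypothetical values for illustration, not CREAL parameters.

```python
# Sketch: a small bundle of viewpoints per eye, built by offsetting a standard stereo pair.
# Every value here is an illustrative assumption.

EYE_SEPARATION_M = 0.064                     # typical interpupillary distance
SUB_VIEW_OFFSETS_M = (-0.0015, 0.0, 0.0015)  # small shifts within each pupil (assumed)

def viewpoint_positions(head_x: float = 0.0):
    """Yield (eye, x_position) pairs: a few viewpoints clustered around each pupil."""
    for eye, eye_x in (("left", head_x - EYE_SEPARATION_M / 2),
                       ("right", head_x + EYE_SEPARATION_M / 2)):
        for dx in SUB_VIEW_OFFSETS_M:
            yield eye, eye_x + dx

# Each position would be rendered with an ordinary perspective camera, so the per-frame
# cost is a small constant factor over classical stereo rather than a full volumetric pass.
for eye, x in viewpoint_positions():
    print(f"{eye} eye viewpoint at x = {x:+.4f} m")
```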

Regarding physical components, we have to make custom projection systems, but all the sensors remain the same as in other head-mounted devices. We’re also compatible with content engines such as Unity, and can use anything developed on that platform, giving us interoperability across the entire ecosystem.

We should be ready to plug into OpenXR very soon, which is a hub where you can connect any content source with any hardware, allowing interoperability with other devices.

Kindly visit https://www.creal.com for more information