Zappar Expands Its ZapWorks Platform with ARKit & ARCore Compatibility

Zappar, a leading global developer of Augmented Reality (AR) experiences and creative tools, today announced the launch of a slew of new features for its AR content creation platform, ZapWorks Studio, providing brands, agencies and independent creators with a one-stop shop for building immersive augmented, virtual and mixed reality experiences.

“This new update is our most impressive release to date and makes ZapWorks Studio one of the most versatile and scalable AR platforms currently on the market, continuing our mission to build the most accessible, affordable and feature-rich toolkit for AR creators across the globe,” commented Caspar Thykier, CEO and co-founder of Zappar.

Building on the last iteration of the software, which prioritized lowering the barrier to entry by enabling creatives without coding knowledge or a development background to build interactive short-form content for mobile, ZapWorks Studio 6 aims to put more power and functionality into the hands of the creative community. This level of control means that AR creators can build the widest possible range of fully customizable AR experiences without compromising on speed or ease of use.

Notable new features include:

World Tracking — In addition to best-in-class image tracking, ZapWorks Studio 6 gives AR developers even more creative freedom with the addition of world tracking powered by ARKit & ARCore. Build once and instantly publish across both iOS and Android.

Face Tracking — Studio now supports powerful computer vision facial tracking algorithms, enabling AR developers to create more expressive face-tracked experiences using ZapWorks Studio’s new built-in UI. 

Sketchfab integration — Access Sketchfab’s extensive library of over two million 3D models, characters, scenes, and environments directly in ZapWorks Studio. Powered by Zappar’s support for the glTF 3D model format, creatives can search for and import their chosen models quickly and reliably.


Sketchfab CEO, Alban Denoyel, commented on the new Sketchfab integration: “Sketchfab’s mission has always been about making it as easy as possible to find, publish and share 3D content with the world. Having Zappar (and ZapWorks Studio) as one of our key partners further expands this mission into the world of AR.”

Thousands of creatives already use ZapWorks Studio to build AR experiences that engage audiences and level up their brand’s mobile app strategy — from small businesses and creative agencies to big brands such as the BBC and Oath.

Peter Maddalena, Director at VRCraftworks added: “ZapWorks Studio has become an essential tool for us to create engaging AR content for our clients — transforming their products, campaigns and ideas into interactive experiences that generate engagement, spontaneous media, and financial return. With every Studio upgrade, we are better equipped with the strategic tools to produce quality experiences while positioning ourselves as innovative and forward-thinking to our customers — a win-win situation.”

To download or learn more about ZapWorks Studio 6, please visit: https://zap.works/studio/

Learn more here http://zap.works

VR has a longer history than you might imagine (Intellectual Property)

What do The Sword of Damocles, Morton Heilig’s Sensorama, and the 18th century apparatus La Nature à Coup d’Œil all have in common?

They were in fact all early attempts at simulating an artificial reality. The present-day equivalents of these inventions are termed “virtual reality” (VR), “augmented reality” (AR), “mixed reality” (MR) or, more broadly, “extended reality” (XR) devices, and now utilise cutting-edge technology.

The concept itself, however, is by no means new. One of the earliest inventions in this field was patented by artist Robert Barker in 1787. 

La Nature à Coup d’Œil, later termed ‘The Panorama’, comprised a large landscape image painted onto a long canvas strip, which was then displayed inside a circular building. The intention was that observers would stand in an enclosure at the centre of the building and view the painting as if it were a real panorama seen from a high viewpoint. Barker’s patent even goes on to describe the lighting, ventilation and access required to achieve maximum immersion. Despite sounding very elementary, it was reported that many visitors felt disoriented and sick as a result of the experience.

Jumping forward nearly 200 years to 1962, we come to Morton L. Heilig’s patent for his Sensorama Simulator. This contraption, which operated a little more like present-day VR devices but looked a lot more like a test you would have at the optician’s, included means of providing a 3D visual effect, vibrations, a breeze, stereo sound effects, and even “odour-sense stimulation”.

The simulator, which ran short films such as “Belly Dancer” and “I’m a Coca-Cola Bottle”, failed to secure funding and was ultimately abandoned.

Just six years later, computer scientist Ivan Sutherland developed what is widely considered to be the first VR Head-Mounted Display (HMD): The Sword of Damocles. This device used head-tracking technology to display a virtual overlay that changed perspective based on the user’s head position. Such an overlay meant that this device was also a precursor to augmented reality technology.

Extended reality experiences have clearly been of interest to innovators for many years, but how far have we really come from these early endeavours? And where might this technology take us?

Despite an unsuccessful reception in the late 1990s, advances in technology have driven a resurgence of interest in VR devices over the last few years. This revival has culminated in the development of high-end gaming devices such as the Oculus Rift, HTC Vive and PlayStation VR.

While the broad concept has changed very little, current technology now allows users to enter completely artificial worlds in ways that had not previously been possible. By using positional tracking sensors and handheld controllers, users are able to interact fully with a simulated environment, while the HMD provides immersive visuals and 3D sound.

This refined fusion between precision tracking and high-resolution displays is allowing VR/AR technology to regain traction in the gaming industry. Beyond the mainstream developments already mentioned, AR company Magic Leap has recently released for general sale its cutting-edge AR goggles, the Magic Leap One. Using a multitude of tracking cameras and what Magic Leap call a “photonic lightfield chip”, the Magic Leap One is able to display virtual objects in different focal planes relative to real-world objects. This means that if a virtual ball is sitting on your real-world coffee table and you virtually knock it off, you will see it virtually fall off the real-world table and virtually roll along the real-world floor.

The complexity of achieving such a feat is reflected in the array of advanced technology implemented in the goggles. In a 22-step “teardown”, technology analysis and repair firm iFixit identified a range of components such as: infrared projectors and LEDs for depth sensing; magnetic sensor coils for tracking headset position; an LCOS microdisplay to provide the visuals; and a plethora of processing units to analyse the incoming image data.

But the ability to realistically simulate an artificial environment stretches well beyond the gaming and entertainment industry. Increasingly, VR/AR devices are being used in other industries as genuine tools for improving experiences and efficiency, and many believe this is where the future of these technologies lies.

VR is being applied in education as a means of providing virtual field trips, for example to the International Space Station or inside the human body. In the retail industry, shoppers are able to try on make-up, explore their dream kitchen, or even walk around the shop itself, all through VR. In manufacturing, AR technology is being used for visual inventory management (“vision picking”) and remote maintenance and inspection of machinery. Architects are using XR technology to design and demonstrate buildings. The increasing realism and precision now available through XR means that it is even being applied in healthcare, allowing surgeons to perform surgery remotely.

While it is clear that present-day XR innovations are worlds apart from the early attempts, there is one thing that all of these inventions certainly have in common: Intellectual Property.

From a big painting in a round building (Patent for displaying Views of Nature, Robert Barker) to corneal sphere tracking for generating an eye model (EP3329316A1, Oculus VR LLC), it is clear that all of these individuals and companies understood the importance of protecting investment and market share.

A recent study by IPlytics found that VR/AR patent applications increased nearly sixfold between 2010 and 2018. Whether it is a registered design for the shape of a headset or a patent to protect the technology inside it, the value of intellectual property protection has never been higher in this field, and we expect to see many more interesting inventions coming our way in the years to come.

See our website marks-clerk.com to learn more

Is Holographic Display the Future of Display Technology?

Holographic display is a method of displaying visual content that is only now taking its first steps into the market. It is based on a totally different physical principle from what you usually find in the world today. When you look at a normal display you are basically looking at a big array of pixels, where each pixel gives some colour depending on a filter that is put in front of it.

You see an image, but that is not really how we see the real world. When you look at the world around you, you are not seeing a 2-D pixel array of colours; you are seeing light reflecting off surfaces and adding up in complex ways to give interference patterns that your eye then interprets as an image.

So say you want to create the perfect display system. You would need to mimic the physical process that nature uses to relay visual information to you. However, rather than relying on actually reflecting light off things, we can compute the light patterns that would be reflected from objects and represent them on a display device. We have to compute the entire wave-front reflected from a given object so that it encodes all the visual information, including all the object’s depth information. This complex pattern is known as the hologram. We reflect light off the hologram to your eye and you see a 3-D holographic image.
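
To make that concrete, here is a minimal sketch (not VividQ’s actual pipeline) of how a phase-only hologram for a single depth plane can be computed with NumPy using angular-spectrum propagation; the wavelength, pixel pitch and propagation distance are illustrative assumptions.

```python
# Minimal sketch: compute a phase-only hologram for one depth plane.
# Illustrative only -- not VividQ's pipeline; wavelength, pixel pitch and
# propagation distance are assumed values.
import numpy as np

def phase_hologram(target_intensity, wavelength=532e-9, pitch=8e-6, z=0.2):
    """Back-propagate a target image to the hologram plane and keep only the phase."""
    ny, nx = target_intensity.shape
    # A random initial phase spreads energy across the hologram plane.
    field = np.sqrt(target_intensity) * np.exp(1j * 2 * np.pi * np.random.rand(ny, nx))

    # Angular-spectrum transfer function for propagating the field back by z.
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(-1j * kz * z)

    slm_field = np.fft.ifft2(np.fft.fft2(field) * H)
    return np.angle(slm_field)  # the phase mask to display

if __name__ == "__main__":
    target = np.zeros((512, 512))
    target[200:300, 200:300] = 1.0  # a bright square as the "object"
    hologram = phase_hologram(target)
    print(hologram.shape, float(hologram.min()), float(hologram.max()))
```

Displaying a phase mask like this on a suitable modulator and illuminating it with coherent light would reconstruct the bright square at roughly the chosen distance; a production pipeline would iterate and refine this kind of propagation across many depth planes, colours and noise constraints, which is where the heavy GPU work comes in.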

This contrasts with what is widely described in the AR/VR industry as a hologram, a term often used to mean any virtual object placed in a scene. When we say hologram, we mean it in the full academic sense of the word: creating a full 3-D object using only the interference properties of light.

VividQ started off in 2017 with a very rough prototype. We managed to get some initial investment and for the last two years we have been building that into a product. It turns out it is very difficult to sell just algorithms to people, so we have actually had to build the entire software pipeline ourselves - everything from how you capture your 3-D data from 3-D sources like game engines and 3-D cameras, through to how you output that onto a display device that can support holographic display.

Now, an optical engineer or any researcher can take our entire software package, drop it into their system and have an end-to-end solution that allows them to connect their source data with the display and create genuine 3-D images.

The technology behind the software

There are several key components required for holographic display. You cannot just use a normal display like the one you would find on an iPad or TV. The way we generate the hologram is by reflecting light off a display that is showing a particular pattern of phase.

This phase pattern is what we call a hologram: we reflect light off it, the wave-front of the light is modulated by the pattern, and an object appears when you look at the display. Because of that we need a type of display known as a spatial light modulator (SLM). This enables you to create that phase mask and to update it as fast as you like, so you can have a moving image.

Our software is currently built on top of an Nvidia stack, so we rely heavily on GPUs. In fact our underlying code is written in very low-level CUDA C - Nvidia’s GPU language - and on top of that it has a C++ layer and then a C API, so that developers can easily connect and code against it to hook up their SLM or their content.

In terms of the other technologies we support, as I said, we have talked to Unity, Skyrim, and we are adding support for Unreal right now. We are compatible with a host of 3-D cameras, like the Stereolabs ZED cameras, the PMD Picoflexx and a few others. We have mostly got content covered, and it should be relatively easy for engineers to implement.

What types of people or industries should use this software?

Anyone designing holographic display for basically any purpose should use this software. We have identified augmented or mixed reality as the first potential target. In our view, for MR specifically to achieve real consumer use, you need to overcome several of the display issues that it currently has.

A massive topic at the conference this week has been something called the accommodation-vergence conflict. This is the effect you get with current-generation displays when your eyes are forced to converge on a certain object, but because the information presented to your eyes is fundamentally two-dimensional, your depth of focus is not where your eyes are converging.

Imagine you are always focusing here, but your eyes are being told the object is over there. This is a very confusing sensation and has caused a lot of the nausea, headaches and dislocation from reality that people experience when using MR or VR. Holographic technology naturally fixes this because you are presenting the eye with a genuine 3-D image, so you do not run into the same problems.
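
As a rough, back-of-the-envelope illustration of that mismatch (the 63 mm interpupillary distance and 2 m focal plane below are assumed values, not figures from the talk), the snippet compares the vergence angle the eyes adopt for a virtual object with the fixed focal distance of a conventional stereoscopic headset.

```python
# Rough illustration of the accommodation-vergence conflict (assumed numbers).
import math

IPD = 0.063        # interpupillary distance in metres (assumed average)
FOCAL_PLANE = 2.0  # fixed focal distance of a typical stereoscopic HMD (assumed)

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight for an object at this distance."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

for virtual_distance in (0.3, 0.5, 1.0, 2.0, 5.0):
    mismatch = abs(vergence_deg(virtual_distance) - vergence_deg(FOCAL_PLANE))
    print(f"object at {virtual_distance:4.1f} m: "
          f"vergence {vergence_deg(virtual_distance):5.2f} deg, "
          f"focus fixed at {FOCAL_PLANE:.1f} m -> mismatch {mismatch:5.2f} deg")
```

The mismatch grows quickly for near objects, which is why close-range interactions tend to be the least comfortable on conventional headsets; with a genuine holographic image the focal depth matches the virtual distance, so the mismatch disappears.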

VividQ will transform display technology

I think that holographic display will be the dominant display technology of the next decade, and VividQ will be the underlying software powering all of that - in the same way that companies like ARM are basically the supplier to everybody making mobile phone chips and so forth.

You will find our software embedded in every one of your devices that has a holographic display. Holographic display will be the thing everyone uses, because once you have full 3-D, where you don’t need to wear glasses or any of that, why would you ever have a 2-D screen again?

Visit our website here www.vivid-q.com

Source

Healthcare VR and Regulatory Challenges

Join our VRARA Healthcare Committee here

Virtual reality used to be associated solely with entertainment (mainly games) and the provision of virtual walk-throughs and immersive environments for product designers, architects, archaeologists and the like. Furthermore, apart from the need to accommodate the psychological and visual disorientation and other side effects such an immersive 3D environment has the potential to provoke, the legal, contractual and intellectual property ramifications of the technology were little different from those thrown up by any other product combining hardware, software and audio-visual content.

Now, however, the VR/AR sector is colliding with the healthcare sector (which has historically been very different in terms of technology, culture and regulatory framework) as it is beginning to develop and roll out new alternatives in terms of diagnosis and treatment. These new solutions are attractive and potentially game-changing not just because of their novelty but because, insofar as they offer alternatives to filling patients with drugs or putting them under the knife, they could, if fully-realised and properly implemented, be lower risk, more cost-effective and less time-consuming for patient, healthcare professional and primary carer alike. All this presses the right buttons with a cash-strapped NHS.

Examples of particular note include:

  • Training doctors and nurses in a virtual environment and recording their performance for subsequent analysis.

  • Allowing medical students to experience operations for learning purposes in a far more immediate and immersive way than if they were just looking over the surgeon’s shoulder.

  • Using 3D virtual mapping of organs for pre-op diagnosis as an alternative to opening the patient up to take a look.

  • Treating autism and phobias by means of exposure to a bespoke virtual or augmented environment.

  • Diagnosing and treating Alzheimer’s, Parkinson’s and other neurodegenerative diseases.

  • Using games technology to improve cognitive abilities, hand-eye coordination and physical dexterity as part of rehabilitation after, say, a stroke or surgery.

  • Using VR to train athletes and treat sports injuries.

The list goes on and is always growing.

However, all this innovation presents a challenge. The increased use of AI in the medical field gives rise to dilemmas in the form of, for example, who bears the risk in the event of a catastrophic misdiagnosis or a surgical robot going haywire and shredding the patient, and how the risk of those things happening should be allocated between manufacturer and user. The regulatory framework and insurance industry will need overhauling to accommodate these challenges and NHS policy makers are already on it. The advent of VR/AR on the healthcare scene presents similar problems.

For example:

  • VR/AR methods of training, diagnosis and treatment could entail the creation, storage and manipulation of the subject’s image (or images of part of that individual’s body). What waivers will the subject need to provide and who will have the rights to that image?

  • Inevitably patient data will be stored and created. Will it be stored securely? Who will have access to it? Has the patient given the requisite informed consent to such storage and creation?

  • Who will be liable if the virtual assistance tool malfunctions and ends up being a hindrance rather than a help to surgery or if, due to an inherent defect or misuse, the therapeutic VR environment makes the schizophrenic patient worse, not better?

In these circumstances the regulations may not need reinventing as radically as they do to accommodate the new AI applications already mentioned, but the risks listed above will certainly need mitigating through the adoption of technical safeguards built into hardware and software, strict operating procedures and well-drafted contracts and consent forms to be signed by manufacturers, users and patients – an intriguing challenge for a technology everybody once thought was only good for playing games on.

Visit our website here

Regards,

Simon Portman sportman@marks-clerk.com

Join our VRARA Healthcare Committee here

How Virtual Reality Improves Customer Shopping Experience at Supermarkets

Through VR/AR, brands want to reach consumers and assist them at the product’s shelf. This way, users will learn more about the practicalities of the products and how to use them. VR enriches the experience by entertaining users and cutting through the clutter of traditional shelf-edge advertising. Watch the video to learn more!

And visit our website here

How Virtual Reality Improves Customer Shopping Experience at Supermarkets

The retail landscape has continued to change since the advent of virtual reality in online shopping. The involvement of such advanced technology has transformed the tedious online shopping experience and forced retailers and vendors to rethink how to manage and transform their most valuable physical asset.

Virtual reality is creating its own space through virtual layouts that rely far less on traditional environments. VR in the supermarket is helping retailers incorporate VR-powered solutions that accommodate online orders and keep in-store shoppers engaged. It transports customers into a customizable 360º showroom where they can explore, customize, and buy products.


Virtual reality offers exclusive benefits to supermarkets

Virtual Browsing:

VR-generated superstores let customers virtually browse the store no matter where they are. They are well organized as well as compelling and immersive, with simple, accessible designs and 360-degree panoramas that recreate the experience of moving through a physical store.

Information-fueled experience:

VR in the supermarket improves the ways customers find and compare product details, lifting the in-store experience. It promises to create differentiated brand experiences that drive conversions, and customer service tailored to individual needs leads to repeat visits and higher revenue.

Understand Consumer Behavior:

Consumers are curious about walk-through superstores. VR lets brands present their offerings in a comprehensive virtual environment to enhance their services. The technology offers flexible environments for understanding customer actions in the superstore at a granular level, and businesses can analyse those factors quantitatively.

How VR is Improving Shopper Experience

Retailers can gather feedback from targeted groups using VR. To improve the quality of their services, retailers need to monitor differences in the buying patterns of online customers.

VR supermarkets come with virtual floorplans, 3D lighting, HD product displays, immersive walk-throughs, and more. VR offers a storytelling platform built on customer behavior insights.

VR gives shoppers the ability to inspect goods in the store closely within the VR experience. Brands and supermarkets can use these elements to provide memorable experiences that keep users engaged.

VR with real-time collaboration unlocks strategies for overcoming the challenges of the supermarket. Conversion rates benefit when consumers feel comfortable, and fewer purchase mistakes mean lower return rates.

VR makes the supermarket a more accessible platform

Supermarkets are set to completely transform the shopping experience using virtual reality (VR). It is fair to say that VR is a key to success in striking the right balance when allocating resources to digital experiences for users.

Through VR, brands want to reach consumers and assist them at the product’s shelf, so that users learn more about the practicalities of the products and how to use them. VR enriches the experience by entertaining users and cutting through the clutter of traditional shelf-edge advertising.

Several major FMCG brands have created apps that enable people to browse and pay for products in VR. VR produces an immersive experience that informs and entertains online users, and it plays a vital role in creating more engaging encounters that drive a brand’s sales conversions.

The Bottom Line  

Virtual reality technology has already been helping businesses in specific streams. It is becoming more accepted thanks to its ability to visualize store layouts and its potential traffic benefits. Moreover, it offers a concept that combines product selection and customizability. VR is beneficial in terms of capitalizing on the various opportunities that interactivity and strong execution provide.

Healium Secures Webby Nomination for Mind-Powered Mindfulness and VR/AR

A Columbia, Missouri company that makes biometrically controlled virtual escapes for areas of social isolation recently took home top honors at two national events and earned a Webby nomination for "Best Use of Augmented Reality". StoryUP Studios makes Healium, a virtual and augmented reality mindfulness platform powered by the user's heart and mind via their wearables. The three-year-old therapeutic media company is harnessing the power of the body's electricity to heal virtual worlds for areas of emotional pain and situational stress.

The Webbys, the internet's most coveted awards, are chosen via a public voting process; this is the link to vote for Healium: wbby.co/vote-game4. Healium also recently took home a top prize at CES's pitch competition and a first-place finish in the XR category at SXSW.

The Webby-nominated experience, Healium AR Butterflies, is available for download on the iOS and Google Play stores. It allows users to hatch butterflies from a virtual chrysalis with their feelings of positivity.

Two published studies, in Frontiers in Psychology and the Journal of Neuroregulation, showed Healium reduced moderate anxiety and increased feelings of positivity in as little as four minutes. These mindfulness-in-nature escapes can optionally be powered by EEG and heart-rate data from the user's wearables, such as a smartwatch or meditation headband. Positive or calm states of mind are then able to change the environments, using either just a mobile device or VR goggles.

Healium supports Honor Everywhere, a free VR tour program for aging Veterans who are no longer able to physically travel to see their WWII, Vietnam or Women's Memorial in Washington, DC. People who purchase Healium kits enable free Healium experiences for Veterans on a waiting list for virtual Honor Flights.

The Webby winners will be announced on April 23rd, and public voting continues through April 18th.

For more information on Healium kits that allow you to grow virtual peace with your feelings, please visit tryhealium.com or email hello@story-up.com

Visit our website here

Motive.io Receives Funding to Expand their Scenario-Based Training Platform

Motive.io is thrilled to have been granted funding to expand our Bionic Detective project, a first-of-its-kind collaborative AR scenario that is not only entertaining but also has the potential to increase engagement and success in training law enforcement to process crime scenes.

First launched as an entertainment piece at the VR/AR Global Summit last fall, the project will now grow into an immersive training version of the work, and we look forward to experimenting with it.

More info

Cherry Creek Innovation Campus looks to prepare HS students for future careers via VR/AR; seeks industry partners to provide students with real-world problems to solve

The Cherry Creek Innovation Campus (CCIC) is a new stand-alone college and career preparedness facility for high school students in the Cherry Creek Schools, located in the southeast Denver metro area. Opening in August 2019 with a curriculum rooted in real-world skills and trade certifications ranging from the computer sciences to aviation to the health sciences, the facility will offer students a new kind of bridge to college and to viable, successful careers.

Part of the IT & STEAM Pathway at the CCIC will be Virtual Reality. The vision for the program is to help students learn how VR/AR is created, evaluate which industries do and could use VR/AR, and identify where there is a market for further development. VR/AR can benefit so many industries and help so many people, processes, and systems, and we are extremely excited to show students the entry point to this incredible field.

The faculty at the CCIC is interested in partnering with businesses and industries that would benefit from VR/AR development, supplying students with problem-based learning opportunities.

More info here

VRTL is a Platform Featuring Brand New Courses and Editorial on VR

VRTL is both an online editorial magazine and education platform. The Magazine runs stories, profiles and interviews with international makers & creators and hosts podcasts with top leaders in the industry.

The Academy launches with several introductory courses to give content creators and creatives insight into the VR production workflow and guide them through the transition from traditional to immersive storytelling.

Fresh design paired with high-level online courses helps people “Explore, Immerse and Learn Virtual Reality.”

More info here

Serl.io is Launching the First Mixed Reality Learning Space in a Singapore School

Crescent Girls School today launched the first Mixed Reality (MR) Immersive Learning Space in a Singapore school at the 6th Digital Age Learning Conference. The space will be used to supplement class work with immersive and collaborative experiential learning using the Microsoft HoloLens. 

Over the last six months, the school in partnership with Serl.io, a MR start-up, embarked on a development project to bring MR technology into the school. With educators continuously grappling with student engagement, the team set out to evaluate the use of the technology in effective team-based and game-based learning strategies to enhance learning engagement and outcomes.

“We are really excited to see the students getting into it and engaging amongst themselves when they go through the collaborative MR experience. Beyond content, we are also pleased to be able to offer our management system for teachers to effectively run and manage class-scale MR sessions,” says Terence Loo, Co-founder and Chief Executive Officer of Serl.io.

CGS and Serl.io are planning to produce more content, even as the team at Serl.io develops AI capabilities on their MR platform to further transform learning so that it is not just experiential but contextual as well.

More info here

VRARA Toronto Member Dark Slope Secures $1.5 Million And Opens Up New VR Experience Center

Dark Slope, a Toronto-based virtual and augmented reality (VR and AR) developer, is creating free-roam multi-user experiences for the rapidly expanding location-based entertainment (LBE) market and for enterprise customers.

The startup has been around for a year and has announced the closing of a seed funding round raising more than $1.5 million, as well as the launch of its first product, the LBE title Scarygirl Mission Maybee.

Use of funds

Funds will be used to add new staff, develop the company’s free roam technology and its enterprise product line, and to support the release of future VR and AR experiences.

As part of the seed round, Dark Slope will be adding new investor Steven DeNure, co-founder and former President/COO of DHX Media, to its board.

“Technological advancements have put location-based VR and AR applications, tools and experiences at the cusp of widespread adoption by consumers and corporations,” said Raja Khanna, Executive Chairman and co-founder of Dark Slope. “The timing is right, and the team at Dark Slope are the best of the best. I am proud to work alongside this group and equally proud to welcome our new investors, and Steven in particular, into the fold.”

Developed from the ground up for large-scale multiplayer free-roam VR environments, Scarygirl Mission Maybee embodies Dark Slope’s goal to design content that brings people of all ages together. 

Scarygirl Mission Maybee, based on the cult hit graphic novel and designer toy brand Scarygirl from acclaimed artist Nathan Jurevicius, is a first-person action game that brings up to eight players together to save the world from Dr. Maybee and his diabolical experiments. Players must work closely together to suck up infectious hazardous goo, purify it and blast it back at the hordes of creatures infesting the world.

“Scarygirl Mission Maybee showcases our hyper-focus on developing multiplayer free-roam VR and AR experiences that bring people together in immersive worlds,” said CJ Hervey, President and co-founder at Dark Slope. “We’re excited for audiences to come and be among the first to experience Scarygirl Mission Maybee and to witness how incredible free-roam multiplayer VR can be.”

Scarygirl Mission Maybee

To celebrate the launch of Scarygirl Mission Maybee, Dark Slope will be inviting guests to come to its Toronto studio and experience the game’s raucous multiplayer action starting Feb. 28. The facility is located at Liberty Village, 7 Fraser Avenue, Unit 2 in Toronto.

About Dark Slope

Dark Slope was founded in January 2018 by four entrepreneurs with extensive knowledge of building international companies based on emerging digital platforms.

The company’s President, CJ Hervey, is a former Executive Producer at Toronto- and LA-based digital content studio Secret Location; COO Dan Fill formerly served as President and Partner at Australia-based boutique entertainment company Chocolate Liberation Front; Technical Director Ben Unsworth was President and co-founder of Toronto-based creative technology company Globacore Interactive Technologies and currently serves as co-chair of the VRAR Association’s LBE Committee; and Executive Chair Raja Khanna was previously CEO, Television & Digital, at global media company Blue Ant Media and co-founder of QuickPlay Media.

www.darkslopestudios.com

XentStudios takes on the Kitchen & Bath Industry with Virtual Reality and Augmented Reality

XentStudios, a provider of immersive virtual (VR)/augmented reality (AR) platforms for the K&B industry, will be participating in the 2019 Kitchen & Bath Industry Show (KBIS), North America's largest trade show dedicated to all aspects of kitchen and bath design. This three-day event will be held February 19-21, 2019 at the Las Vegas Convention Center, Las Vegas, Nevada. 

Exhibiting at Booth BL50015, XentStudios will be showcasing its Xentify VR and AR platforms. Xentify VR provides K&B retailers with an immersive and interactive display-as-a-service platform offering room-scale product displays in virtual reality. Used to supplement physical showrooms, the technology allows store customers to experience a significantly more robust assortment of K&B products and combinations within the confines of current floor space and rent. 

The Xentify AR platform enables K&B retail customers to view unlimited combinations of products in virtual displays within their homes. A few taps on a personal device allow K&B products and product combinations to be projected onto their own living spaces, helping to validate design ideas and reduce returns.

For more info visit http://www.xentstudios.com or email Davinder Kohli davinder@xentstudios.com

About XentStudios
XentStudios, established in 2017, provides K&B retailers with VR and AR technology offering a combination of high-tech with high-touch physical products – “clicks-to-bricks.” It allows K&B retailers to merge online shopping with hands-on contact, providing an enhanced in-store experience for today’s tech-savvy customers while cutting costs, reducing space and shortening sales cycles.

PhaseSpace and San Leandro Police Department Collaborate on Virtual Reality Training Program

The San Leandro Police Department (SLPD) is collaborating with local virtual reality developer PhaseSpace to see if Department of Defense (DOD) developed technology can help Police, Fire, and First Responder training. Headquartered at the Gate 510 for the past 18 years, PhaseSpace develops motion capture solutions for academic and medical research, training simulations, and the robotics, graphic arts and entertainment industries. Meet PhaseSpace Founder and CEO, Tracy McSheery, and see their technology in action in this video (see minute 9:45).

Ten years ago, PhaseSpace developed training tools for the US Navy and Marine Corps under a Small Business Innovation Research grant. Partnerships with Berkeley, Stanford, MIT, USC and other research institutions gave PhaseSpace experience in virtual reality (VR) long before it became widely available. PhaseSpace is now developing software intended to create a VR training environment for law enforcement, simulating dangerous situations to hone police skills in a virtual setting while saving travel and training costs. Over the course of the past year, the SLPD has collaborated with PhaseSpace on the development of the software by testing it and providing constructive feedback.

The significance of this public-private partnership is not to be underestimated. PhaseSpace has made leaps and bounds since its original iteration, creating environments that would traditionally carry a high liability. At present, the program addresses numerous stressful aspects of Police training, including de-escalation of tense circumstances, active shooter situations, and extreme driving scenarios.

The VR environment is very realistic, which makes the training experience that much more impactful. Furthermore, PhaseSpace has consulted with mental health professionals in an effort to address the psychological consequences of these situations, such as Post-Traumatic Stress Disorder, and incorporates unique methods to help minimize trauma. SLPD continues to test the program and to consult with PhaseSpace on further improvements.

The most significant take-away is that PhaseSpace has been able to create a program that places officers in very realistic training scenarios and enables a highly effective learning environment without the risk of real-world damage or injury. While still in development, the product is nearing an iteration ready for introduction to the Peace Officer Standards and Training (POST) Commission. In the event of an endorsement from POST, it is conceivable that the program will be implemented statewide.

It is incredible to witness something so game-changing being developed right here in San Leandro. This software is of potential statewide, and even nationwide, significance, and goes to show that great things are continuing to be made in San Leandro.

Learn more here

Cost Effective VR - A Quarter of a Million Reasons

There are a number of concerns around the provision and efficacy of fire safety training in the UK, especially for managers of staff on sites where the general public attend for healthcare purposes.

The National Health Service (NHS) employs an estimated 1.5 million staff across the UK and treats around 1 million patients every 36 hours across the full spectrum of human health - it's an exceptional organisation, well loved by the UK public.

With the vast numbers of the public attending NHS health services throughout the UK, the potential damage that fire could cause to property and people is very high. We're working with Health Education England and St Bart's in London on a Fire Safety VR pilot to reduce and manage risk while streamlining costs.

The forecast for the one trust we're piloting with is for savings of around £250,000. There are 600 trusts throughout the UK, so there's huge potential behind the trial for cost savings and more effective training - it's an exciting chance to prove VR can build knowledge in a cost-effective manner.

If you'd like to find out more, drop us a line on learn@absorbreality.com

AR Smartglasses in the Works by LusoVU in Portugal

Come meet LusoVU at our upcoming VR/AR Global Summit Europe.

More info and sign up here

LusoVU’s mission is to improve life by connecting people beyond the human senses and to be the catalyst of a new human interaction paradigm.

LusoVU develops AR solutions using a unique and innovative technology, and continues its work on natural human interfaces, whether haptic interfaces (put very simply, hand movements) or eye tracking (eye movements). These technologies will allow a whole new way for humans to interact with the virtual world.

LusoVU is developing AR smartglasses that are uniquely elegant, with a huge field of view. The smartglasses’ characteristics include:

  • Lightness: The hI-DO concept will drastically reduce the overall mass of any head-mounted display that uses this technology.

  • Comfort: With reduced mass comes physical comfort. The larger field of view will also bring eye comfort, and the fact that this solution can be embedded into corrective lenses will bring added comfort to users of prescription glasses. Comfort is a key characteristic for ubiquity.

  • Elegance: With very thin lenses, head-mounted displays using the hI-DO technology will be as elegant as a normal pair of glasses or sunglasses.

  • Usefulness: The hI-DO will allow AR to become omnipresent in our lives. Head-mounted displays, as opposed to mobile phones or tablets, are wearable, hands-free devices. As such, they will unleash the full potential of augmented reality, which in turn will give rise to a series of new applications and uses for this new breed of eyewear.

  • New interactions: LusoVU continues its work on natural human interfaces. Be it a haptic interface (put very simply, hand movements) or eye tracking (eye movements), these technologies will allow a whole new way for humans to interact with the virtual world.

  • Large field of view: When using any head-mounted display, the sensation of immersion in a virtual or virtually augmented world is directly related to the device’s field of view. The major breakthrough of the hI-DO technology is certainly its very large field of view: while initially set at 70 by 50 degrees, it can theoretically be as big as one wants.

LusoVU is a Portuguese startup, founded in 2013 as a spin-off of LusoSpace, a company specialized in satellite components that has operated since 2002 and counts prominent clients such as the European Space Agency.

More about LusoSpace

LusoSpace is a private space company composed of highly qualified, multidisciplinary engineers in physics, electronics, optics and mechanics. LusoSpace’s vision is to lead the space sector in Portugal. Furthermore, LusoSpace aims to develop terrestrial applications that result from its space experience and success.

Since its first year of activity, LusoSpace has been at the front line of technological innovation. As the first Portuguese company to fly critical space hardware, we are proud to be a national reference. Our skills were demonstrated by establishing a high-tech company in the space sector without strong initial investment or corporate support.

Our main customers are ESA (the European Space Agency), EADS Astrium and Thales Alenia Space. ESA is the major European entity driving new developments in the space sector, while EADS Astrium and Thales Alenia Space are the main European satellite manufacturers driving the space sector as a commercial activity.

Member Precision OS Announces Series A Funding of $2.3 Million to Advance Surgical Training Platform

Original Article from BIV, written by Tyler Orton

Precision OS announced November 15 that its Series A funding round was led by AO Invest. Other undisclosed investors also participated.

“Precision OS technology is working to create a virtual language able to add depth of understanding that simply cannot be achieved using current simulation tools,” Precision OS co-founder and CEO Dr. Danny Goel, an orthopedic surgeon by trade, said in a statement.

“Adopting VR as a way of improving decision-making is directly relevant for patients and value-based care.”

Precision OS produces what it calls “the most immersive and realistic form of deliberate surgical practice.”

It’s now in the midst of developing a tool that allows surgeons to introduce images of a specific patient’s anatomy into the software before conducting a procedure.

Funding is also being used to bolster distribution partnerships and create additional content for trainees.

The company, which was founded in 2017, is among the 200-plus firms developing VR applications that have sprung up in Metro Vancouver over the past three years.

Switzerland-based AO Invest is the venture capital arm of the AO Foundation, a network of more than 19,000 surgeons and scientists in orthopedics and trauma.

Motive.io Announces New VR/AR Training Solutions

Motive.io has announced the launch of two new AR and VR products for the enterprise space: the Motive.io Scenario-based Training Platform and Performance AR. Both offer a scalable way for companies to train employees in virtual and augmented reality without requiring them to write a single line of code.

Motive.io Scenario-based Training Platform - the company’s flagship enterprise product - is a versatile teaching tool. Using Motive.io’s patented authoring software, businesses are able to create their own scenarios in either AR or VR. Individuals can participate in solo learning, or use the platform’s multi-user feature for team-based or collaborative instruction.

Unlike other training products, the scenario-based training platform operates on a variety of device types, from high-end headsets like Microsoft HoloLens or HTC Vive to common handhelds like smartphones and tablets. This variety makes the platform a first-class teaching tool for any business.

“Scenario-based training is one of the most effective ways of learning, because people learn by doing,” says Ryan Chapman, CEO of Motive.io. “Our scenario-based training platform allows companies to make unique simulations that are tailored to their business, with scenarios that adapt based on trainee interactions. Organizations can create intricate situations that meet each individual’s needs, instead of relying on an instructor to guide each employee individually. With Motive.io companies can create and scale training solutions at a much faster pace.”

Motive.io is also launching Performance AR, a customizable performance support product. AR performance support is an emerging technology that has already seen significant measurable benefits in manufacturing, maintenance and repair, field support, inspection, and training. Many employees waste valuable hours leafing through hefty instruction manuals in order to fix product breakages or learn basic company processes. Performance AR provides a step-by-step, interactive guide to manufacturing and teaching organizational procedures.

Using Performance AR, workers can scan a real-world product or location to open the interactive guide. On-screen graphics are then overlaid onto their physical equivalent, giving animated directions on how to create or mend an object, or navigate a factory floor. The software runs on all major headsets as well as mobile and tablet devices, allowing employees to use its features immediately.
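
As a purely illustrative sketch (the names, data structure and trigger IDs below are hypothetical, not Motive.io's API), an AR performance-support guide of this kind can be modelled as an ordered list of overlay steps keyed by whatever product or location the worker scans.

```python
# Hypothetical sketch of a step-by-step AR performance-support guide.
# Names and structure are illustrative; this is not Motive.io's API.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OverlayStep:
    instruction: str          # text shown to the worker
    anchor: str               # where the graphic is pinned ("valve_A", "panel_rear", ...)
    animation: str = "arrow"  # overlay asset to render

@dataclass
class Guide:
    trigger_id: str                        # marker or object the worker scans
    steps: List[OverlayStep] = field(default_factory=list)

GUIDES: Dict[str, Guide] = {
    "pump-model-7": Guide(
        trigger_id="pump-model-7",
        steps=[
            OverlayStep("Shut the inlet valve", anchor="valve_A"),
            OverlayStep("Remove the rear panel (4 bolts)", anchor="panel_rear", animation="highlight"),
            OverlayStep("Replace the filter cartridge", anchor="filter_bay"),
        ],
    ),
}

def on_scan(trigger_id: str) -> None:
    """Called when the device recognises a scanned product or location."""
    guide = GUIDES.get(trigger_id)
    if guide is None:
        print("No guide registered for", trigger_id)
        return
    for i, step in enumerate(guide.steps, 1):
        # A real client would render step.animation at step.anchor in 3D space.
        print(f"Step {i}: {step.instruction} [overlay '{step.animation}' at {step.anchor}]")

on_scan("pump-model-7")
```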

According to a Harvard Business Report study, performance support tools have already been shown to increase productivity by 30 to 50 percent. Despite that, there are no other off-the-shelf products like Motive.io’s Scenario-based Training Platform or Performance AR that allow businesses to create their own training content. Motive.io’s patented authoring system allows non-technical individuals to create rich experiences without writing any code. Its drag-and-drop interface means that companies can bypass hiring an expensive development team to create an interactive training program.

“We’re excited to be expanding Motive.io’s reach into the enterprise and industrial space,” Chapman says. “We’re seeing a real need for software tools that give corporate training teams the power to create detailed and immersive AR and VR experiences. Our software integrates with a company’s existing learning management system using xAPI, and collects data on training progress, which makes it really easy to use. It feels great to be helping make workplaces safer and more productive.”

About Motive.io:

Motive.io empowers businesses everywhere to take advantage of augmented and virtual reality. AR and VR are revolutionizing the way we interact with the people and machines in our workplaces. Until now, cost and technological barriers have kept the majority of businesses from realizing the significant benefits of AR/VR adoption. Motive.io’s suite of software products and patented authoring platform give companies unprecedented freedom to create their own AR and VR content internally without having to rely on a team of developers. For more information visit http://www.motive.io

CTC PolyFormVR is a Smart Floor for Location-Based VR Experiences in Military & Entertainment

Concurrent Technologies Corporation (CTC) is developing PolyFormVR, a patent-pending modular smart floor designed to support location-based VR experiences with automated construction of physical infrastructure such as walls, windows, doors, stairs, and large props. The system operates using a simple 3-step process to create the physical infrastructure for locations – from measure to extrude to play – in less than a minute.

First, the heights of objects in a target virtual world are measured. Then, the height measurements guide automatic extrusion of the target virtual world into the real world using a grid of moving vertical columns. Finally, users put on the VR equipment of their choice to experience the target virtual world with correlated visuotactile support provided by the columns. PolyFormVR is being developed as middleware in support of operators within the location-based VR military training and entertainment markets.
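
To illustrate the measure-and-extrude idea in code (a hedged sketch only; the grid size, column pitch and height limit are assumed values, not CTC's specifications), the snippet below samples a virtual world's height at each column position and clamps the result to the actuators' range.

```python
# Hedged sketch of the "measure -> extrude" steps for a modular smart floor.
# Grid size, column pitch and height range are assumed, not CTC specifications.
import numpy as np

COLUMN_PITCH_M = 0.25  # horizontal spacing between columns (assumed)
MAX_HEIGHT_M = 1.2     # actuator travel limit (assumed)
GRID = (16, 16)        # number of columns in the floor (assumed)

def measure(virtual_height_fn, grid=GRID, pitch=COLUMN_PITCH_M):
    """Sample the virtual world's height at each column's centre."""
    heights = np.zeros(grid)
    for i in range(grid[0]):
        for j in range(grid[1]):
            x, y = (i + 0.5) * pitch, (j + 0.5) * pitch
            heights[i, j] = virtual_height_fn(x, y)
    return heights

def extrude(heights, max_height=MAX_HEIGHT_M):
    """Clamp the sampled heights to the columns' physical range."""
    # A real system would stream these targets to the column controllers,
    # after which users put on their headsets for the "play" step.
    return np.clip(heights, 0.0, max_height)

# Example virtual world: a low wall running across the play space.
def demo_world(x, y):
    return 1.0 if 1.5 <= x <= 1.75 else 0.0

column_targets = extrude(measure(demo_world))
print(column_targets.round(2))
```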

PolyFormVR is designed to alleviate pain points associated with custom infrastructure design, fabrication, placement, testing, teardown, and storage, and to accommodate rapid rotation of VR content and its supporting virtual-physical correlated infrastructure. While designed for custom infrastructure automation, PolyFormVR is also ideal for algorithmically generated automation in which modular virtual worlds are randomly assembled and then extruded into the real world, creating unlimited replay value. The smart floor is modular and customizable in size, and can support tons of weight from users and from existing operator props placed on it.

The idea for PolyFormVR arose from a brainstorming exercise to imagine next-generation virtual-physical correlated training and simulation capabilities for NASA and US military dismounted-soldier training, and it soon expanded to include entertainment venues. When we looked at the location-based VR industry we found great visuals and responsive tracking. What was missing was the ability to stand up and tear down virtual-physical correlated infrastructure in a manner as agile, timely, and responsive as selecting and loading the VR scenes themselves. The inspiration for its design came from the idea that a synthetic physical environment could be generated on demand, as portrayed in the video game Portal and in the Star Trek series. With PolyFormVR, CTC is making the virtual REAL.

We want feedback from location-based VR operators to help us improve the design of PolyFormVR. Please click the following link to participate in a short market survey.
https://www.surveymonkey.com/r/polyformvr

CTC Website here