The VR/AR Association publishes the AR Cloud White Paper

Members: download it from our Members GDrive or email


The world is moving towards a fundamental shift where our physical reality will soon blend with a virtual one. This idea opens up an entirely new frontier in which our experiences and our realities will be extended in ways we could have never imagined. In this near future, the possibilities for AR are endless.

Brands can attract and engage customers with more immersive and interactive experiences not bounded by physical constraints. Employees can learn how to operate equipment more effectively on complex assembly lines, reducing cost and risk for businesses. Students can visualize complicated diagrams in 3D, improving academic performance. Consumer products, instruction manuals and textbooks are just a small fraction of the static objects that can be brought to life.

For mass adoption of AR to occur, content must persist in the real world across space, time and devices. The 3D virtual art will “live” in that space as if it’s really there and will not disappear between app sessions. Multi-user support and occlusion are two additional capabilities key to augmented reality adoption. To enable these abilities and a streamlined experience, the “AR Cloud” is needed.

Alex Chuang, Shape Immersive
Amy LaMeyer, EnteringVR
Colin Steinmann, Bent Image Lab / youAR
Gabriel Rene, VERSES
Mikko Karvonen, Immersal
Sam Beder, Ubiquity6
Steven Swanson, VERSES
Matt Miesnieks,
Kris Kolo, VRARA

Table of Contents
1.1 Definition(s) of AR Cloud
1.2 Tech Giants Are Investing in AR
1.3 Building the AR Cloud
2. Use Cases
2.1 Gaming: Niantic
2.2 Indoor Navigation: Immersal
2.3 Productivity: YOUAR
2.4 Social and Gameplay: Ubiquity6
2.5 Events: Geogram
2.6 Location and Tracking: Fantasmo
2.7 AR real estate: SuperWorld
3. Conclusion

Call for Sponsors: AR Cloud

If you’re interested in sponsoring this publication, email

The VR/AR Association AR Cloud Committee has produced this white paper, and we are seeking a sponsor to help with editing and production costs.




Up until now, AR experiences have been rudimentary and siloed, primarily because they were hard to develop, hard to distribute and had no real demand. Most so-called AR experiences have been merely simple 2D digital overlays on the real world, with no real connection between the virtual content and our physical world. For example, in the original Pokémon Go, the AR characters do not understand the spatial context of the surrounding area.

In order to consume context-aware AR content in the physical world, it is necessary to understand the precise location and orientation of the viewer’s device.








Who Owns Augmented Realities?

Join our AR Cloud Industry Committee & Initiatives here 


This article originally appeared in VentureBeat, by Mike Boland, SF Chapter Lead & Chief Analyst of ARtillry Intelligence.


Last year, a seldom-discussed event started to raise important questions about augmented reality’s geographic boundaries. A group of artists digitally “vandalized” Snapchat’s AR overlays of Jeff Koons’ sculptures throughout Central Park. It turns out that the AR revolution has revolutionaries.


To be clear, they didn’t hack Snap’s servers to vandalize the graphics within Snapchat’s UI. Rather, they re-created and altered a separate static image. The protest nonetheless illustrated the point that public spaces shouldn’t be an open canvas for private companies to affix AR graphics.

But the bigger question this raises is: Who owns augmented realities? Ultimately, AR graphics aren’t happening in public spaces but in app renderings of those spaces. So technically, it’s not an issue of public domain, because anyone uninterested in specific AR graphics can simply not use those apps.


A scarce resource

But the concept this all leads to is scarcity. As examined by Super Ventures partner Matt Miesnieks, scarcity could be a source of value in AR, just like it is in the real world. This is because the geography that defines some AR graphics’ physical-world placement renders them relatively finite.

This geographic positioning for AR will be done primarily to add value through location-based relevance, nearby commerce, or local pride/emotion. But the secondary effect of that localization will be the same physical limitations that apply to real estate. Grounding AR in physical world relevance also adds value that’s analogous to the location-based and temporal relevance of a live event. It’s boosted by aggregate interest in a specific time and place. And it’s bound by finite atoms rather than infinite bits.

Pokémon Go has already tapped into this concept, as has its forebear, Ingress. And consumer AR apps developed in the coming months will likely find similar value in geographic and temporal scarcity. After all, this principle fits AR’s inherent melding of the digital and physical.

Most of all, this contrasts with the digital real estate that has flooded and devalued lots of content in the internet and smartphone eras. Without scarcity, banner ads, for example, have been commodified by expanding ad networks and fill rates, driving down CPM value (and effectiveness).

And there’s a lot on the line. We at ARtillry Intelligence project consumer AR revenues to grow to $18.7 billion by 2022. That will mostly consist of in-app revenue for mobile AR experiences, which is the primary way that Pokémon Go has raked in over $1.4 billion to date.


The AR Cloud

In fairness, it should be noted that AR’s scarcity has a limit. Physical world real estate can only be exhausted on a per-app level. So more AR apps means less scarcity. And within a given app, there can be “layers” and filters (such as social graphs) that further expand or restrict digital inventory. “For this to work we’ll need a system of filters, because otherwise everything will be talking to you at once,” said Metaverse author Charlie Fink recently. “What’s useful in AR is very specific things that augment the world, showed in a time and in a way that you want so that it’s contextual.”

This all leads to the latest big topic in the AR/VR universe: the AR Cloud. In short, it’s a 3D map of the world that sits in the background. It defines spatially anchored and persistent graphics, which can be detected and shown by AR devices depending on what app you’re using.

Because 3D mapping data for the physical world is too extensive to store on device, the AR Cloud offloads that burden. It can dynamically feed AR devices with scene mapping and object recognition blueprints so they know what they’re looking at, then can overlay graphics in the right spots. This makes the AR cloud a sort of upgrade to Google’s mission statement to “organize the world’s information.” But instead of a search index delivered through typed queries, the AR cloud delivers information about an item on that item. All you need to do is point a camera at it (millennial-friendly).

And it’s not just a matter of consuming the AR cloud, but also creating it. That can happen through a sort of crowdsourced approach, where all of these outward facing cameras capture data to create a  visual map. So it perpetually builds over time, sort of like Google’s web index but for the real world.

In fact, Google already could have a head start through its Street View cars. And there are other mini-AR clouds such as Pokemon Go. But a true AR app economy could require a more universal and open AR cloud that’s tapped and fed by billions of phones. This is what is building via API.



Nine-tenths of the law

This all gets back to the question of who owns AR graphics. Whether it’s a shared AR cloud or a proprietary one, there will likely be a centralized authority to define and enforce ownership. That could be a web-like entity (think: ICANN and DNS), but it will more likely be blockchain-based.

Without going too far down the buzzword rabbit hole, blockchain capability aligns well with the construction, maintenance and authentication needs of the AR Cloud. But until then, the system of establishing and enforcing AR graphics ownership could just be good old common law.

Case in point: a class-action suit was filed last year by property owners across several states seeking damages for trespassing. The trespassers had one thing in common: they were playing Pokémon Go. But interestingly, the defendant in the suit was the game maker, Niantic. “The plaintiffs are actually alleging that Niantic committed a form of ‘virtual trespassing,’” said Foley & Lardner attorney Lucas Silva at January’s ARIA Conference. “The theory being that Niantic can control where these elements are placed and [they] have GPS coordinates.”

This may seem silly, but it’s important. At AR’s early stages of adoption and cultural assimilation, case law will set precedent. And for a sector that’s already a bit fragile in its infancy, legal impediments could stunt growth further. And that could impact the way the AR cloud operates. “The court had a chance to dismiss the case early on and did not, suggesting that maybe this claim does have a little more legs than some people would have thought,” said Silva. “I think this is a case that has potentially far-reaching implications for augmented reality.”

It will be particularly contentious wherever money is changing hands, such as in AR advertising. Courts will face questions such as who owns digital ad inventory when there are AR overlays on private property (or on other ads). There could be similar gray areas in retail & commerce. “If you are in a Lowe’s store and you’re using a wayfinding app, what if the owner of that store, presumably Lowe’s, rents space from the owner of a strip mall?” Silva posed. “Does that strip mall owner potentially have to sign off on the placement of these virtual elements?”

Whether it’s shopping or vandalized art, legal governance of AR “ownership” will be a moving target over the coming years. Meanwhile, decisions could defer to legal precedents that rule physical property ownership. Possession could end up being nine-tenths of the law in AR too.

Disclosure: The author is an analyst for ARtillry Intelligence, an independent research firm whose data was cited in this article. He has no other financial stake in the companies mentioned in this post, nor did he receive payment for its production. His disclosure and ethics policy can be seen here.

Mike Boland is Chief Analyst of ARtillry Intelligence, San Francisco lead for the VR/AR Association and former tech journalist.


Recap of our AR Cloud Webinar and Q&A (The Spatial Web)

Join our AR Cloud Industry Committee here


On May 2nd, we hosted a webinar on the “AR Cloud” with an incredible panel moderated by Charlie Fink, featuring presentations from Ori Inbar of Super Ventures and AWE, as well as pioneering startups working to enable the creation and population of the AR Cloud, including Anjanay Midha (Ubiquity6), Matt Miesnieks, Ghislan Fouodji, and Ray DiCarlo & David (YouAR).

Perhaps the biggest shift that the introduction of AR-enabled mobile phones brought is the use of the camera as the interface. But the camera needs something to detect, or "see". We have come to think of this geolocated content as "the AR Cloud". The implications of SLAM-capable geolocated content are profound: the world will be painted with data. The technology to enable this dramatic development is in its infancy, although several promising startups are tackling it right now. Some, like Ubiquity6 and YouAR, offer complete solutions, while others offer key technologies that enable developers to create their own apps.

The presentations explored key concepts around the AR Cloud, such as:

  • The role of computer vision, AI, and sound.
  • The function and form of the universal visual browser (how will it enable all AR content to be found, and could existing browsers play a role?).
  • Will there be open standards (so enterprises and individuals can populate the AR Cloud)?
  • If a dozen developers painted data on a landmark, like the Golden Gate Bridge, how would a user sort through it?
  • Will there be a Google for visual search? How would that work? How important are filters?
  • What are the opportunities today for developers, enterprises, and individuals?


Q & A

How do you capture spatial data that is aligned to real world coordinates if GPS isn't accurate enough? What is the math behind it?

It's too hard to get into here. There is no single solution that covers all use cases. Doing some googling on "outdoor large scale localization" (add "gps denied environments" for more fun) will uncover a lot of papers and all the math you can handle.

In layman's terms, the GPS system is used to get into the general area, then computer vision takes over. Via AI recognition and/or point-cloud matching, the system determines the exact 3D coordinates of the phone relative to the real world (ground truth).

The trick is to handle cases where there is no pre-existing computer vision data to match against, and to make it work from all sorts of angles and lighting conditions, and with/without GPS. It's still a very active domain in computer vision research. - Matt Miesnieks
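As a toy illustration of that coarse-to-fine handoff (this is not any panelist's actual system; the tile names, descriptors, and 100 m radius are all invented for the example), GPS first prunes the candidate map tiles, then visual descriptor matching picks the best one:

```python
import math

def gps_distance_m(a, b):
    """Approximate planar distance in metres between two (lat, lon) pairs."""
    lat = math.radians((a[0] + b[0]) / 2)
    dy = (a[0] - b[0]) * 111_320                  # metres per degree latitude
    dx = (a[1] - b[1]) * 111_320 * math.cos(lat)  # longitude shrinks with latitude
    return math.hypot(dx, dy)

def localize(gps_fix, query_descriptor, map_tiles, radius_m=100.0):
    """Return the map tile whose stored descriptor best matches the query,
    considering only tiles within the GPS uncertainty radius."""
    candidates = [t for t in map_tiles
                  if gps_distance_m(gps_fix, t["latlon"]) <= radius_m]
    if not candidates:
        return None
    # Score by simple descriptor similarity (a dot product here for brevity;
    # a real system would match hundreds of local features per tile).
    def score(tile):
        return sum(q * d for q, d in zip(query_descriptor, tile["descriptor"]))
    return max(candidates, key=score)

tiles = [
    {"id": "plaza_north", "latlon": (45.5231, -122.6765), "descriptor": [0.9, 0.1, 0.0]},
    {"id": "plaza_south", "latlon": (45.5228, -122.6766), "descriptor": [0.1, 0.9, 0.0]},
    {"id": "far_away",    "latlon": (45.6000, -122.7000), "descriptor": [1.0, 1.0, 1.0]},
]
best = localize((45.5230, -122.6765), [1.0, 0.0, 0.0], tiles)
```

The "far_away" tile is never even scored, which is the point: GPS makes the visual search tractable, and vision supplies the precision GPS lacks.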

There are several CV solutions that localize devices within point clouds. The trick is to understand the relationship between one "localization" and another. Our approach involves positioning AR content and devices with coordinates tethered to multiple trackable physical features. As devices use our system, they calculate and compare relative positioning data between trackable features, generating measurements (and associated uncertainties). These trackable features get organized into emergent hierarchical groups based on those measurements. Our system then uses statistical methods to improve its confidence in the relative position data, based on additional measurements made between the trackable features in a group.

In plainer terms, imagine that you had a marker taped to a table to create a rudimentary form of persistent AR but that marker also knew the relative position of every other marker taped to every other table in the whole world. We call our version of this system LockAR. - Ray DiCarlo
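One common way to read "measurements (and associated uncertainties)" is classic inverse-variance weighting: each new noisy observation of the offset between two trackable features tightens the fused estimate. This is a generic statistical sketch of that idea, not LockAR's actual algorithm:

```python
def fuse(measurements):
    """Fuse noisy 1-D offset measurements given as (value, variance) pairs
    by inverse-variance weighting; returns (estimate, variance).
    The fused variance is always smaller than any single measurement's."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    estimate = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return estimate, 1.0 / total

# Two devices each measure the offset between marker A and marker B
# with equal confidence, so the fused estimate lands at the midpoint:
est, var = fuse([(10.0, 4.0), (12.0, 4.0)])
```

The same principle extends to 3D offsets and to chains of features: every extra measurement between members of a group shrinks the group's overall positional uncertainty.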


How will ALL of these different AR Cloud providers work together? Are they all compatible?

Good question. Some of us are trying to figure out open standards, but honestly I think that's premature. We first need to show the market that these enablers are valuable. The market is nascent and we are all working to grow it. Interop just isn't a problem anyone has right now. Down the road, who knows. Some forms of data will probably be "open" and others proprietary; this will likely be use-case dependent, and we don't know the use cases yet. - Matt Miesnieks

We are in a great innovation period! The giants will buy up the companies they like, and leverage their network effects to push them into the consumer world. Compatibility will exist only when its value outweighs the profit of closed systems. We'll see. The decision to give away proprietary technology when you are ahead is a hard one, but often the right one, given a big enough vision and umpteen billions of dollars to help prop up AR Cloud SaaS models. - David


How will you handle point cloud data sets on mobile over existing 3G networks?

 does everything on device, as close to real time as possible (including generating point clouds and meshes). We minimize the data upload and download. We are targeting Wi-Fi and LTE networks initially. 3G will work, but will be slower (unknown if too slow). - Matt Miesnieks

Clouds can be sparse; they do not have to be that "heavy". They will be cached, and will load as a device gets within range. We can reduce the number of polygons and limit the level of RGB fidelity, but it's all going to be better on 5G. 5G is coming soon, and its arrival will transform the AR Cloud into a mass-media platform accessible to all, with infinite data and crazy-high bandwidth. - David & Ray from YOUAR
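One simple technique behind "clouds can be sparse" is voxel downsampling before transmission: keep a single representative point per grid cell. A minimal sketch of the idea (not any vendor's actual pipeline; the 0.5 m voxel size is arbitrary):

```python
def voxel_downsample(points, voxel=0.5):
    """Keep one representative point per voxel; a simple way to make a
    point cloud sparse enough to stream over a slow connection."""
    buckets = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # which grid cell the point falls in
        buckets.setdefault(key, p)               # first point in each voxel wins
    return list(buckets.values())

# Four raw points collapse to three: the first two share a voxel.
dense = [(0.1, 0.1, 0.0), (0.2, 0.1, 0.0), (0.9, 0.9, 0.0), (3.0, 0.0, 0.0)]
sparse = voxel_downsample(dense, voxel=0.5)
```

In practice the voxel size becomes a level-of-detail knob: coarse voxels for distant or bandwidth-starved devices, fine voxels once the device is close and on a fast link.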


What is the threat of one company controlling access to "the" AR Cloud? Doesn't that assume there can be only one? And isn't it the case that the number of potential AR Clouds is infinite: just multiple layers over the same physical space?

This question confuses the enabling infrastructure with the content. There can be infinite content in one place. For the enabling infrastructure, it's too early to tell how the market will emerge. It's unlikely that one company will control everything. Nearly all tech markets have a dominant leader and a strong #2, then lots of small players. The AR Cloud market will eventually fit this model, but which services and products those will be, who knows; the term "AR Cloud" is too broad right now. - Matt Miesnieks

We agree! There will be many disparate AR Clouds at first, each using its own CV methods to understand physical space. At some point, protocols may be developed that make some clouds obsolete and allow others to coexist. We incorporate otherwise incompatible CV localizations on a common map. Most likely, Apple, Google, and Microsoft will continue to develop in their separate, siloed ecosystems for a while. Everyone will be searching to bring a near-term solution to the table as we wait for the giants to open their stores of feature sets for common use. -- Ray & David from YOUAR


Why the ARCF? We all share the roads, why not share an AR Cloud? Wouldn’t we all be better off with a generally common one?  

Different AR Clouds will begin to pop up; we must unify this somehow, or at least agree to index them together coherently.

Get in a room (or chat room) and reach out to everyone.  Do simple things first, realize basic goals, test out the first collection of applications.

Sign up for our ARena SDK and use it as a way to populate the ARCF's persistent, global map! The ARCF holds a collection of dynamic and versatile ".6dof" files: openly available SLAM maps. These files are generated by giving end users a way to "scan" environments, saving the data in a form that other devices on our network can use to "see" the same space.
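The ".6dof" format itself isn't documented here; purely as a hypothetical illustration, a persistent-anchor record in a shared map might carry a 6-DoF pose plus the feature data other devices need to relocalize against it. Every field name below is invented for the sketch:

```python
import json

# Hypothetical persistent-anchor record; not the actual ".6dof" schema.
anchor = {
    "anchor_id": "example-0001",
    "pose": {
        "position": [1.20, 0.00, -3.45],      # metres, map-local frame
        "orientation": [0.0, 0.0, 0.0, 1.0],  # unit quaternion (x, y, z, w)
    },
    "features": [                             # sparse map data for relocalization
        {"xyz": [1.1, 0.1, -3.3], "descriptor": [0.12, 0.87, 0.44]},
    ],
}

blob = json.dumps(anchor)    # serialize for upload to a shared, global map
restored = json.loads(blob)  # any device on the network can decode and reuse it
```

Whatever the real encoding, the essential property is the round trip: one device scans and saves, another device loads and "sees" the same space.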

In the future, how do we create experiences that don't care what device you are using?

 is working to solve this; we intend to support all major AR platforms and hardware. Partially this is also a factor of the creation tools being cross-platform (e.g. Unity) and the platforms being open (i.e. not Snap). - Matt Miesnieks

Our approach was to build an SDK (available soon), so developers can immediately start building AR applications in Unity that can interact with each other on ARCore- and ARKit-enabled devices. A key goal was to mitigate complex 3D interactions and to allow both devices to see AR content together in a common space. Developers who would like to stay iOS- or Android-native can use our soon-to-be-available UberCV SDK, or other companies' forthcoming equivalents, which will enable a common space without using any particular backend solution.

Here is an example from YOUAR


What are the panel's thoughts on conflict resolution? What if multiple entities try to place persistent objects in the same public place? And what about property owners having control over what virtual objects are on their property?

It'll be up to the user to choose what app to use, and that will display the content for that app. No one will force you to look at content you don't want (though I expect there will be completely open/public free-for-all AR content apps, which will quickly die due to abuse). - Matt Miesnieks

We believe in a layer-based filter system. Similar to current content filters on apps and websites like Reddit; users will filter their visual content based on any number of attributes, such as by author, rating, maturity level, or location. -- George from YOUAR
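The layer-based filter idea can be sketched in a few lines: each piece of AR content carries attributes, and the viewer composes predicates over them to decide what renders. The attribute names here are illustrative, not from any shipping product:

```python
# Each AR content item carries filterable attributes.
content = [
    {"title": "Mural",     "author": "alice", "rating": 4.5, "maturity": "all"},
    {"title": "Graffiti",  "author": "bob",   "rating": 2.1, "maturity": "mature"},
    {"title": "Wayfinder", "author": "store", "rating": 4.9, "maturity": "all"},
]

def visible(items, min_rating=0.0, maturity="all", blocked_authors=()):
    """Return only the items that pass the user's active filters:
    minimum rating, maturity level, and an author blocklist."""
    return [c for c in items
            if c["rating"] >= min_rating
            and (maturity == "mature" or c["maturity"] == "all")
            and c["author"] not in blocked_authors]

shown = visible(content, min_rating=4.0)  # hides the low-rated graffiti
```

Composing filters like these per user is what keeps "infinite content in one place" from becoming everything talking to you at once.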



Do you believe that, with new Google ARCore developments, other AR SDKs and clouds will probably disappear?

No. Each startup will need to figure out how to work with the big players and how to bring differentiated value. None of us want to compete head-on with the distribution power of the big platforms. - Matt Miesnieks

Is AR a vitamin or a painkiller? Besides use cases in entertainment/games, training/education or retail/tourism, what are the world-changing use cases that will dramatically improve people's lives?

AR right now is like smartphones in 2004 (I know; I used to work for Openwave, which invented the smartphone web browser). It's a cool feature, but not a painkiller apart from very specific instances. But all the tech infrastructure being built now will also run on whatever AR glasses come along later and supplant/support/enhance the glass-rectangle form factor. AR will be a feature on glass rectangles, but will be core to glasses. - Matt Miesnieks

It is a disruptive medium that will change the way we think about, gather and process information. The age of spatial information - David from YOUAR


In your opinion, is there a place for the term Mixed Reality vs. Augmented Reality? I.e., if the Pokémon is behind the pole, with shadows correctly cast, does that make it MR?

Magic Leap confused everyone by calling "AR with occlusion & physics" MR. MR refers to a superset of both AR and VR. - Matt Miesnieks


How important do you believe patent portfolios and IP will be in the development and commercialization of the AR Cloud?

Somewhat, but it will be distribution and user retention that become the sustainable advantage in this domain. - Matt Miesnieks

With the concern of fragmentation across AR Cloud solutions, what standards are being discussed so the upstarts are able to work together as their unique approaches evolve?

See above. Cloud interop isn't a problem that anyone has right now. It's premature to try and solve an imaginary problem. Providing end user benefit needs to be solved first. - Matt Miesnieks

Fragmentation is a usual first step in bleeding-edge tech — just look at how many automobile and internal combustion engine patents existed by 1900. It was the assembly line that changed everything, not the patent designs.

We hope that an archival 6D standard can be agreed upon in the near future. This would mean that advancement in technology and CV localization methods will not be limited by current scans; this should be to everyone's advantage. -- Carlo and Ray from YOUAR

1st: We can now run multiple fast algorithms to create 3D convex hulls as bounding areas, and so can produce real-time object classification. How long before physical properties can be classified in real time?

2nd: How many months of multiple camera feeds filming a person in their everyday life would it take an AI (neural net or other) to learn that person's mannerisms to the point of seeming like the real person, while not necessarily being able to pass the Turing test?

I don't know. I believe the first part (physical properties) has been somewhat solved by Adobe Research, I assume they've published something on it. - Matt Miesnieks
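For reference, the "convex hulls as bounding areas" idea in the first question rests on a standard computation; here is the 2D version (Andrew's monotone chain), where interior points fall away and only the bounding polygon remains. Real-time systems would run a 3D variant per detected object:

```python
def convex_hull(points):
    """Andrew's monotone chain: return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors OA and OB; positive means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                     # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half (it repeats the other half's start).
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is discarded; only the bounding square remains.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

The appeal for real-time classification is that the hull reduces an arbitrary detected point set to a small bounding polygon, which is cheap to intersect, track, and label.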


Can you give some examples of current open-source opportunities to collaborate in a major global project?

Open AR Cloud is an excellent effort toward open geo-pose. Help them!

There needs to be an advisory board on blockchain issues that will be essential to the AR Cloud and the AR economy. (Join our Blockchain committee here )

Incentivized "scanning" needs to be directed toward high-value data targets guided by a decentralized group. Sharing a common map, with privacy of data and an open AR net is a responsibility we all have to our collective future.


Join our AR Cloud Industry Committee here