Recap of our XR Security / Cybersecurity

Thank you Cindy Mallory and Damon Shackelford for hosting our Online Session with 20+ professionals!

During the meeting, participants discussed the threat landscape for virtual reality and augmented reality in enterprise settings. Concerns were raised about the weaponization of AI in the threat landscape, as well as the unique vulnerabilities of XR technology. The use of AI in cybersecurity was also a topic of concern, with discussions on access controls, potential exploitation of AI for malicious purposes, and security implications in immersive worlds. Participants expressed concerns about social engineering, deep fakes, compliance and regulatory frameworks, enterprise policies, integrating new capabilities, and risk assessments for immersive projects. Tim expressed interest in penetration tests against AI and VR implementations, and discussed tactics, flaws, data manipulation, and the need for security tools to catch up with AI.

Topics & Highlights

2. Threat landscape and AI

  • The discussion highlighted concerns about the weaponization of AI in the threat landscape, particularly in relation to social engineering. The lack of expertise in running security research engagements for custom-built applications that use both AI and VR was also mentioned.

3. XR technology and vulnerabilities

  • The discussion highlighted concerns about the unique aspects of XR technology from a vulnerability standpoint. This includes the different use cases and exposure of VR applications compared to standard web applications, as well as the challenges faced by security researchers in testing the less-tested frameworks used in XR development.

4. AI in cybersecurity

  • Participants raisesed concerns about the use of AI in cybersecurity and questions if it can also be a threat. She also asks about other emerging technologies that enterprises should be aware of.

  • Participants discussesed the importance of access controls when using enterprise AI tools and highlights the potential risks of AI-powered tools having access to sensitive documents. He mentions the need to take access control seriously and the challenges companies face in implementing fine-grained access controls.

  • Participants raisesed the question of whether AI can be exploited and used for malicious purposes. He mentions the possibility of AI agents running autonomously on the internet and the impact it may have on security. Tim agrees and discusses the potential for AI to be used in social engineering attacks and emphasizes the need for user education and staying ahead of these attacks.

  • Participants discussed the potential security implications of AI in immersive worlds and virtual reality. They mention the risks of identity theft and social engineering in virtual worlds and the need for user awareness and vigilance.

5. Security Concerns with Immersive Tech

  • The participants expressed concerns about social engineering, deep fakes, and the ability to copy someone's likeness through video, voice, and text messages.

  • The discussion touched upon leaving test servers on and the potential risks associated with it.

  • The participants discussed the compliance and regulatory frameworks related to immersive tech, including CMMC, and how it might impact the adoption of AI.

  • The conversation focused on the establishment of enterprise policies for immersive tech and how it will eventually be governed securely.

  • The participants discussed the impact of integrating new capabilities into existing software and its potential security implications.

  • The discussion touched upon the risk assessments for immersive projects and how they can be aligned with existing frameworks.

6. Penetration Test against AI

  • Participants mentioned the interest in penetration tests against AI and VR implementations.

  • Participants asked about the inputs, tactics, flaws, and level of access attackers would exploit in AI applications.

  • Participants discussed the possibility of exfiltrating and manipulating data from AI implementations.

  • Participants highlighted the need for security tools to catch up with AI and custom-fit their tactics for these applications.