
Can Fiction Inform Responsible XR?


January 2026



The Rise of Technopoly

Humans have always utilised technology to some extent. In his 1992 book Technopoly, Neil Postman categorises the relationship between humans and technology into three types: tool-using cultures, where technology is a mere means to an end; technocracies, where technique becomes more important than its consequences; and technopolies, where technology supersedes and drives human culture to fit the broader societal machine’s needs.


Postman’s perspective on the third stage mirrors Marshall McLuhan’s famous phrase from his 1964 book Understanding Media: The Extensions of Man: “the medium is the message.” This suggests that a medium’s impact is its true message, not its content. McLuhan also noted that people are always largely unprepared to encounter new technologies like television; “the native of Ghana is unprepared for literacy that separates them from their tribal world”. This similarly applies to all later technology; while content can remain the same, each medium changes how individuals act and interact, influencing their culture and their place in their environment.


An example of this can be adapted from Ivan Illich’s critique of cars in Tools for Conviviality (1973) for creating a “radical monopoly”: a technology meant to connect people across distance instead disperses communities, reshapes cities around its needs, and builds dependence, gradually leaving those without cars excluded. Communicative media extend this pattern, bridging distances virtually while eroding local bonds; users, like bees pollinating flowers (Butler, 1872), unwittingly propagate the system’s growth into global ‘network states’ (Srinivasan, 2022) over what would traditionally have been geographically gathered locales and organically interacting communities.


Our gradual integration into our technology in such a way has led to what McLuhan termed “cybernation”: our nervous systems and consciousness merge with a man-made “electrical society” as a unified symbiosis between us and our environments. We’ve already merged to an extent, using personal computers and smartphones much like the ‘exocortex’ from Charles Stross’ Accelerando (2005); extending our minds and capacities like memory, processing, or navigation by relying on technology as an external appendage. While this can still be considered tool-use, and indeed be very helpful, as our devices become integrated into us and grow smarter with increasing capabilities for autonomous action, they, like our organs, may begin to steer us even as we delude ourselves into thinking that we wield and control them.


Google is one example actor here, openly aiming to “organise the world’s information”. While this may seem noble, its AdSense tool collects as much “exhaust data” as possible from its technological stack, including hardware (Pixel, Chromebook), software (Android, Chrome, Maps, etc.), and even content (YouTube, Google Search, AI engagement). “Exhaust data” here refers to metadata (data about data: not contents, but location, timestamps, device identifiers, etc.) that is then used to fuel “surveillance capitalism”. In such a system, users become collations of data points to be sold to the highest bidder for targeted advertising, optimising the likelihood of engagement and purchase (Zuboff, 2019).
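To make the distinction between content and metadata concrete, the toy sketch below (field names and the example event are invented, not drawn from any real Google schema) strips an interaction event down to its “exhaust”:

```python
def to_exhaust(event: dict) -> dict:
    """Keep only the metadata 'exhaust' of an interaction event,
    discarding the user-visible content (a toy illustration)."""
    metadata_keys = {"device_id", "timestamp", "lat", "lon", "app", "event_type"}
    return {k: v for k, v in event.items() if k in metadata_keys}

# A hypothetical interaction event, including its actual content ("query").
raw = {
    "device_id": "pixel-1234",
    "timestamp": "2026-01-05T13:52:47Z",
    "lat": 51.5072, "lon": -0.1276,
    "app": "maps", "event_type": "search",
    "query": "late night pharmacy",  # the user-visible content
}

exhaust = to_exhaust(raw)
# The content ("query") is gone, yet the record still says who acted,
# when, where, and on which service - enough to fuel profiling.
```

Even with the content discarded, accumulating such records over time reconstructs habits, routines, and locations, which is precisely why “only metadata” collection is not a reassuring qualifier.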


Meta’s algorithmic suggestion feeds work similarly, siphoning a user’s social network, locations, inferred interests, and personality to match content to perfect audiences. While this can be justified as empowering connection, documentaries like The Social Dilemma (2020) reveal that such services are intentionally designed to keep users engaged for as long as possible. Many involved express regret that the business model never truly aligned with providing users a more fulfilled life, despite naive aims. Dark patterns, such as infinite scrolling, persuasive dialogue, and intentional menu placement, have been increasingly baked into services to psychologically reinforce desired habits, retain usage, and stimulate addiction and compulsive behaviour, even in the presence of weak user-control tools like time limits (Seyson & Willett, 2025).


Figure 1 - Instagram’s navigation bar is intentionally designed for quick access to Reels at the bottom right, near where a right-handed user’s thumb rests, and users have no control over its positioning. [Digital Information World, 2023]

Alongside profit motivations, deeply intimate data is often also utilised for arguably even more harmful purposes. In 2013, Edward Snowden’s NSA documents revealed that tech corporations frequently collaborate with governments to provide intelligence data on users, both internationally and domestically, under the guise of ‘national security’ or ‘national interests’, often obtained without warrant or transparency.


Data is often taken indiscriminately via dragnets to gain as large a total picture as possible, to the point of producing “bulk data failures” from the sheer volume of hoarding (on the order of exabytes - millions of terabytes) which cannot be processed quickly enough (Whittaker, 2016). Both Google and Meta, for instance, are alleged to have received initial funding from agency investment fronts like In-Q-Tel (Ahmed, 2015) due to shared interests in social graphing, mapping, and data-collection efficiency aligning with government goals of ‘Total Information Awareness’ (Hill, 2024).


Figure 2 - The PRISM collection program revealed deep corroboration between Western government agencies and Big Tech corporations for accessing intelligence data in almost all mediums/services [The Guardian, 2013]

As McLuhan also warned, “Every new technology requires a new war” (1968), a view echoed in his 1970 statement that “World War III is a guerrilla information war with no division between military and civilian participation”. Contemporary thinking, such as François du Cluzel’s 2021 NATO-sponsored exploration into cognitive warfare, suggests such a dynamic is already unfolding: extensive personal data enables highly targeted social engineering, nudging, and narrative manipulation, which can destabilise societies and individuals without any traditional combat. Our technological landscape resembles a highly volatile cognitive domain, raising profound questions about user autonomy amid increasingly immersive, always-active, and personalised technologies which can influence people in ways they likely struggle to anticipate.


The Cambridge Analytica scandal of the 2010s perfectly highlights such issues in practice. Using Facebook data, the group targeted and manipulated voters with political posts during the 2016 US election and the Brexit referendum, influencing opinions without knowledge or consent. This has had a lasting impact on Western society, with irreversible cultural, economic, and social consequences. Some could argue that democracy can no longer be trusted to produce a consensus due to algorithms that push people’s beliefs towards short-sighted emotional populism (Zarrelli, 2025). Zuboff states, “It’s no longer enough to automate information flows about us; the goal now is to automate us,” which raises profound concerns about the future of ourselves and our society.


Even without considering human biases and influence campaigns through legacy media, Postman’s technopoly is already here - “A technocracy does not have as its aim a grand reductionism in which human life must find its meaning in machinery and technique. Technopoly does.” How can we truly have free will if everything we see is highly personalised by algorithms into overwhelming echo chambers, mixed with AI ‘slop’ media and botnets fuelling a ‘dead Internet’ of fake users manufactured to guide us to a synthesised consensus?


Whilst it’s easy to dismiss this as conspiracy theory, laws like the 2001 Patriot Act in the US and the 2016 Investigatory Powers Act in the UK show how legal access to user information is achieved without transparency. The ‘data economy’ and its powers value information for manipulation, using technology to achieve it. Pushed towards digital activities, we rarely critique the manipulation enabled by our usage of, and increasing dependence on, these systems, nor how deliberately designed their effects on us are.


Naturally, this leads to users feeling exploited. Every action is likely logged in an NSA mass database in Utah, searchable by some tech worker behind closed doors. Unless you live in the woods as a hermit covered in tin foil, you can’t realistically stop using email for work, phones for staying in contact with friends and family, or the Internet for accessing information without major inconvenience. The issue seems unfixable due to its sheer scale, affecting so much, everywhere, all at once. Yet, perhaps more crucially: knowing we cannot undo the past, what issues may arise next, and how can we work towards fixing them?


The Future

If asked about Silicon Valley’s agenda, critics may accuse leaders such as Mark Zuckerberg (Meta), Elon Musk (Tesla, SpaceX, X), Eric Schmidt (ex-Google), or Larry Ellison (Oracle) of being ‘cyborg theocrats’ pursuing a singularity in which technology culminates in an AI superintelligence ‘god’ (Allewelt, 2025), prophesied to inevitably surpass human cognition and aid us in surpassing biology with tools for transcendence (Kurzweil, 2005). Interim steps are cognitive enhancers like smartphones, followed by worn and embodied devices such as smartwatches, VR (virtual reality) headsets, and smart glasses, the latter two forming XR/MR (extended/mixed reality) (Milgram & Kishino, 1994), before leading to any truly Strossian cognitively-integrated exocortex for directly augmented data immersion.


XR has been in development since the 1980s and has gained growing mainstream awareness since the early 2010s. It has evolved from PC-dependent headsets like the original Oculus Rift and Valve Index towards thinner glasses, from the controversial Google Glass of the 2010s to more refined present iterations like the Meta Ray-Ban Display glasses of 2025. Whilst users already merge with smartphones, a problematic though disembodied integration, XR devices can offer even more effortless user experiences through inputs such as hand gestures or eye-tracking selection. The naturalness afforded by embodied input feels like a higher-fidelity extension of one’s mind: the reflexiveness of thumb microgestures or eye tracking renders even touchscreen interfaces cumbersome and slow by comparison. Latency can be far lower with eye tracking, for instance, as people don’t usually consciously aim at an interface target; they just look at it.


But despite the usual futuristic promises of connectivity, convenience, and productivity, XR’s potential to threaten privacy, autonomy, and wellbeing remains, alongside many of the previously mentioned issues of surveillance and manipulation. XR has already been predicted as likely to utilise even more intimate dark patterns to manipulate users; speculation suggests always-enabled cameras and linked displays could be used to virtually block access to environmental spaces, overlay personalised ads, or instigate negative virtual emotional stimuli, for example (Meinhardt et al., 2025). The ability to alter reality towards particular ends becomes limitless in an extended one.


Given this knowledge, it’s crucial we consider these risks before widely integrating XR into society, so as to avoid problems and changes that may not benefit us. With such technology, how can we, say, prevent reflexive eye movements from being logged and used to target ads, or ensure any privacy when cameras and microphones are constantly attached to ourselves and those around us?


Critics who forecast this trajectory, like Tristan Harris and James Poulos, have testified before the U.S. Congress about the dangers of algorithms and persuasive technology, warning that the cyborg theocrats’ belief that technology will inevitably surpass humanity is harming, and will continue to harm, us, democracy, and our cultures. These expanding trajectories have largely not been held legally or democratically accountable, as legislation lags behind such rapid technological advancement. Few are articulate enough to accurately express these concerns; societal discourse often focuses on trivial and lobbied implications over the underlying technological and evolutionary risks, which are hard to legislate for due to their depth and complexity.


Conveniently, however, many seemingly terrible and dangerous ideas have already been accurately speculated upon in fictional media. For instance, the show Black Mirror has often prophetically warned about XR issues regarding autonomy, external threat actors, and by-design control through episodes like The Entire History of You (2011) or Men Against Fire (2016), both of which will be discussed more extensively as case studies later on. Fiction’s speculative predictions often succeed because they can embed devices in plausible ‘what if’ worlds, visualising problems through characters’ direct, contextually clear interactions.


This process, termed “design fiction”, was first employed by Bruce Sterling in his 2005 manifesto Shaping Things, where he traced object evolution from handmade artefacts to machined objects, then gizmos, and proposed “spimes” as the next stage: programmable, trackable objects with digital identities, fabricated on-demand, recyclable, and data-integrated, characteristic of the ‘Internet of Things’. Sterling used this to critique consumerism and advocate designer involvement in sustainable, ubiquitous futures. However, it was in his later 2009 article for Interactions magazine, titled Design Fiction, that he defined the term as “the deliberate use of diegetic prototypes to suspend disbelief about change.”


The term ‘diegetic prototype’ is derived from film scholar David A. Kirby, who introduced it in his 2009 paper, Diegetic Prototypes and the Role of the Film Genre in Future Technologies, to describe fictional technology embedded in a story’s world as an influence for real technological innovation. Kirby argues that technological progress is not linear or inevitable, but is shaped by social, economic, and cultural barriers such as public scepticism or funding gaps. Fiction through mediums such as films, meanwhile, uses speculative technologies as fully realised interactions within a story's narrative, making them feel viable and thus sparking real-world desire, investment, and innovation.


Unlike traditional prototypes, diegetic ones are hypothetical and can perform any mock functionality through plot, dialogue, and character interactions; their focus is on normalising the tech as an everyday artefact. Often, he argues, this technique is also used to stimulate desire or make viewers comfortable with ideas about what may be in the works ahead of true viability.


This idea of design fiction is built on further by Julian Bleecker in his 2009 essay Design Fiction: A Short Essay on Design, Science, Fact, and Fiction, where he defines the concept as a blend of the three used to explore how technology within fictional case studies shapes life, ideally to provoke thought on a future’s implications without needing the full functionality of a prototype or letting it come to pass. One case study Bleecker focuses on is the gesture interface in Steven Spielberg’s 2002 Minority Report, designed by John Underkoffler to fit the narrative as a believable way to interact with a future computer.


Figure 3 - Minority Report (2002)’s hand-gestural computer interface is a widely cited example of a design fiction artefact, and it has had a significant impact on XR’s development [Slashfilm, 2010]

Unlike, say, Johnny Mnemonic (1995), where Keanu Reeves merely waves his hands without a real logical basis, Underkoffler’s prototype was designed to feel realistic, generating high public interest and many studies into gestural interface design. This eventually led to Underkoffler founding Oblong, a contractor for Raytheon, to produce functional gestural interface technologies for the US military. Underkoffler’s prototype for the film continues to inform gestural interaction design for XR to this day.


Later works like Anthony Dunne and Fiona Raby’s Speculative Everything (2013) then developed design fiction as a critical tool beyond commercial products, framing it as a discussion tool. They advocated for designing ‘what if’ scenarios with provocative artefacts to question norms, spark debate on preferred futures, and challenge assumed roles of technology in automatically leading to a dark future. Nick Foster’s The Future Mundane (2013) also proposed a way of thinking that grounds speculative fiction in reality by illustrating it as half-broken - “The future will include taxes, illness, weather, transport delays, and allergies. Things will break, things will fail to perform as promised, things will need fixing. Rendering the future as a partly broken space gives an audience something to hold onto, something relatable.”


By the mid-late 2010s, design fiction had become an academically established philosophy, effectively summarised by Joseph Lindley’s widely cited definition: “design fiction (1) creates a story world, (2) prototypes something in it, (3) to open discursive space”.


With this in mind, there are many fictional cases of XR beyond Minority Report which have demonstrated potential issues and warnings matching the privacy, autonomy, and manipulation speculations outlined earlier on. The following case studies of Snow Crash (1992), Black Mirror (2011-), and Cyberpunk 2077 (2020) have been chosen for their prototypes’ dystopian scenarios, allowing us to analyse harmful stories involving XR and thus begin to speculate mitigation strategies for better design which can guide convivial alternatives.


Case Study 1: Snow Crash


The Metaverse

Neal Stephenson’s 1992 novel Snow Crash is one of the earliest and most famous fictional representations of XR in mainstream media. The story follows ‘Hiro Protagonist’, a hacker who wrote significant code for the early ‘metaverse’, the immersive VR successor to the Internet - “a collective virtual hallucination” many enter as a form of escapism from their corporatised and dissatisfying lives.


Early chapters of the book describe the Metaverse as a planet-sized world whose virtual “Street”, 65,536 km long, encircles a featureless black void, accessed via goggles, headphones, a computer, and a fibre-optic connection. The public Street is accessible to all, and everyone possesses a virtual avatar. Avatar quality depends on equipment, leading to monochrome and grainy avatars on public terminals or low-performance setups, with better setups providing full colour and accurate rigging. This avatar inequality is already seen to an extent in existing ‘metaverses’ like VRChat, where PC+VR users can access better worlds, and specific body-tracking hardware can rig better avatars.


Corporate ads flood the Street’s landscape, targeting non-paying users. The starting area is ‘Port 0’, with ports numbered up to 65,536 representing addresses along the equator, which users travel between via a metro system to privatised virtual locations. The closer to Port 0, the more expensive the virtual land, similar to Google’s search result ranking system, where advertisers pay more to be seen first on the index’s real estate. Prime ports are owned by digitally wealthy early adopters like Hiro (a legacy member of the Black Sun bar at Port 127), or major hyperbrands like Mr. Lee’s Greater Hong Kong, which benefit heavily from advertising space in the virtual economy.
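The Street’s address economics can be sketched as a toy model: ports on a circular ring, with land value decaying with distance from Port 0 (the function names and the pricing formula below are illustrative assumptions of mine, not drawn from the novel):

```python
# Toy model of the Street's 65,536-port address ring.
TOTAL_PORTS = 65_536

def ring_distance(port: int, origin: int = 0) -> int:
    """Shortest distance between two ports on the circular equator."""
    d = abs(port - origin) % TOTAL_PORTS
    return min(d, TOTAL_PORTS - d)  # wrap around the ring

def land_value(port: int, base_price: float = 1_000_000.0) -> float:
    """Assume price decays with distance from Port 0, much like paid
    search ranking: the closer to the origin, the pricier the 'real
    estate' (an invented decay curve, purely for illustration)."""
    return base_price / (1 + ring_distance(port))

print(land_value(127))     # Hiro's Black Sun neighbourhood: still pricey
print(land_value(32_768))  # opposite side of the ring: nearly worthless
```

Note that `ring_distance` wraps: port 65,535 is only one step from Port 0, mirroring how a circular address space has two directions of approach.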


The Metaverse protocol is designed to lack a single point of ownership, but is heavily managed by the Global Multimedia Protocol Group (GMPG), a monopoly. GMPG manages the address system, port allocation, the fibre-optic network, software clients, and ban enforcement, meaning it is not truly an open, free-for-all protocol.


Figure 4 - VRChat, the most similar parallel to Stephenson’s Metaverse, is navigated via flat UI panels to smaller isolated worlds rather than truly spatially via a literal street/metro system in order to reduce latency [Mogura VR News, 2023]

Despite being written over 30 years ago, its prediction of the modern Internet and VR experience remains rather accurate. Whilst we don’t yet all use headsets and navigate a skeuomorphic street as our day-to-day Internet experience (‘flatspace’ still exists in Snow Crash too), Stephenson has clearly influenced existing metaverse experiences like the already mentioned VRChat, where users can travel to virtually owned ‘world’ locations made and maintained by fellow users (more elegantly traversed through a list menu rather than a communal metro).


A key aspect of Stephenson’s Metaverse which sticks out as uncanny, however, is its structural resemblance to the fate of the World Wide Web; what was initially built as an open protocol (HTTP) by Tim Berners-Lee at CERN in 1989 has fragmented into centralised, corporate-controlled sub-networks. This is evident in the decline in traffic to peer-to-peer forums in favour of closed-source, self-contained social media networks; ~97% of all traffic goes to ~116 domains (Xavier, 2024), ~33% of all websites are hosted via Amazon Web Services (Richter, 2025), and almost 20% are routed through Cloudflare (W3Techs, 2025). Cloudflare frames its purpose as protection against bots, but in spite of the convenience offered, all of these services act as middlemen and pose massive vulnerabilities to centralised censorship and widespread service outages should they be attacked or taken offline.


Refocusing on XR: services like VRChat are also largely centralised by default; at present, no mainstream peer-to-peer or decentralised alternative exists, primarily due to the bandwidth and latency challenges of communicating and processing such extensive data in real time. The few attempts, such as Resonite or JanusXR, have built on direct web-hosting inspirations, but remain niche and technically difficult to onboard, where publicly accessible at all.


It is somewhat ironic that tech companies like Zuckerberg’s Meta, renamed from Facebook after his seemingly brief push to build the same ‘metaverse’ Stephenson envisioned, have similar ideas and practices for the present and future of the Internet. Meta has built and continues to construct centralised hardware (i.e. the Quest headset) and services (i.e. HorizonOS and Horizon Worlds), and has a grip on many legacy social networks (Facebook, WhatsApp, Instagram), each with features poached from competitors (i.e. Stories from Snapchat, Threads copying X) to entrap as many people into its ecosystem as possible. Snow Crash has also often been named as an inspiration by Google’s Sergey Brin, Amazon’s Jeff Bezos, and developers like Tim Sweeney of Epic Games, each aiming to create immersive networking services with unified stacks and ecosystems - each, in a way, an independent network-state island.


While it is arguable that the privatisation and corporatisation of the Internet and the fictional metaverse were inevitable, as businesses seek profitable systems and self-serving infrastructure in global markets, the warnings were present; the trajectory still largely unfolded, despite people knowing, because the drawbacks of decentralisation and sovereignty were undercut by costs and convenience.


Figure 5 - Meta’s sub-brands form a comprehensive ecosystem of existing users, and high capacity to shape the next stage of technology via XR [agenciaBrazil, 2025]

Yet, in a scenario where virtually shared reality is controlled by a single corporation, users are at the mercy of that entity’s influence and infrastructure, similar to how existing companies like Apple control both hardware and software to not only ensure but prioritise compatibility with their own devices as a form of protectionism (IPBA Connect, 2025). An entire virtual reality system placing one’s identity, places, social network, and communications under the control of a single entity creates significant dependency and vulnerability, and could be further abused to hinder freedoms in ‘base reality’ (natural reality) if a platform were so inclined, or incentivised, to begin influencing mixed/augmented realities.


Cory Doctorow coined the term ‘enshittification’ in 2023 to describe the gradual decline of platforms as selfish interests degrade quality until better alternatives emerge. XR, however, presents unique challenges beyond addictive social media interfaces and data surveillance: users’ body movements, virtual essence, and parallel ‘virtual lives’ are at risk if they lack control or autonomy, dependent on platforms they neither manage nor own. In physical reality, despite being subject to nation-state laws and corporate workplace/community guidelines, individuals remain largely autonomous and can always choose to opt out (to some extent, even disregarding national laws). The barriers to independence in reality are mirrored, even more strictly, in finding virtual independence: technical knowledge, accessibility, and the desire for freedom are often undermined by convenience, interoperability, and the network effects of fellow users.


More broadly, this is a growing topic of discourse, demonstrated by the rise of Internet-alternative companies like Urbit, who wish to show that one can indeed preserve sovereignty over a digital identity and selectively plug into services like modular extensions, rather than holding many fragmented identities at the mercy of services’ control. Startups like Octra further propose forms of [homomorphic] encryption, which allow computation over sensitive data without a service ever being able to read the raw information. Additionally, 21e8, an Urbit-invested company, somewhat aligns the two concepts, proposing a decentralised protocol using Bitcoin-style Proof-of-Work to establish a verifiable peer-to-peer information market where users own their ‘likes’, ranking content collaboratively and exchanging compute energy for data access, bypassing proprietary platforms like Google or Facebook, which can easily be rigged and manipulated centrally (Wilcox, M., 2020).
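To illustrate the homomorphic idea generically (this is not Octra’s actual scheme, whose details are not described here), a toy Paillier cryptosystem with deliberately tiny, insecure parameters shows how a server can add two encrypted values without ever seeing either plaintext:

```python
import math
import random

# Toy Paillier keypair. Primes are tiny and hard-coded purely for
# illustration; a real deployment would use ~2048-bit primes.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)          # decryption constant

def encrypt(m: int) -> int:
    """Randomised encryption: same plaintext -> different ciphertexts."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The "server" multiplies ciphertexts, which adds the hidden plaintexts:
a, b = encrypt(42), encrypt(58)
print(decrypt((a * b) % n2))  # prints 100, computed without seeing 42 or 58
```

The key property is that multiplication of ciphertexts corresponds to addition of plaintexts, so aggregate statistics can be computed over data the service can never read.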


Whilst such systems remain in their early days, one can hope it becomes possible to learn from the mistakes of the present Internet (often termed ‘Web 2.0’) and build ‘Web 3.0’ sovereignty into XR by default, as greater focus falls on digital selfhood and our devices and services merge deeper into us.


IV - The Snow Crash Virus



Figure 6 - ‘Snow’ on a TV as a result of electric static noise is where the term “Snow Crash” originates [Mysid, 2006]

The book’s title originates from the story’s ‘Snow Crash’ virus, which affects Hiro’s friend Da5id and forms the central plot point. ‘Snow Crash’ is a flashing image which contains a pixelated bitmap (like a QR code) which instantly crashes both the user’s computer and mind. The mind aspect is a hyper-precise neurolinguistic hack based on ancient Sumerian ‘me’, which reprograms the brainstem’s language-processing centre. Once exposed to the visual information, the mind loses higher reasoning, and the victim becomes an obedient “wirehead” who can only speak in tongues and follow a versed leader’s commands. There’s also a chemical version in the story, which is a white snowy powder.


Naturally, L. Bob Rife, the owner of the GMPG, created Snow Crash by reverse-engineering recovered Sumerian ‘nam-shub’ texts into a modern binary/drug virus. In Stephenson’s fiction, Sumerian priests historically used ‘nam-shubs’ (uttered ‘spells’ which accessed that part of the brainstem) to control people and build civilisation. However, an upheaval was caused by the ‘nam-shub’ of Enki (a Sumerian god), a verbally spread ‘anti-virus’ which fragmented language into unprogrammable babble, inspiring later stories like the Biblical myth of Babel and restoring language’s capacity for individual thinking. Hiro discovers Rife’s plan to distribute the virus both digitally, via his metaverse platform, and chemically, as drugs through his church franchise, forming a single universal cult language that only he, as its high priest, can command, centralising human cognition into a hive mind.


Stephenson’s ideas were likely influenced by real concepts like Noam Chomsky’s work in the 1950s-60s, which proposed that every baby is born with an innate ‘language operating system’, with all ~7,000 human languages being surface versions of a shared deep grammar. Later research linked mutations of the FOXP2 gene to severe language disorders, so one could naturally assume that disrupting deep grammar capacities destroys higher thought (Enard, W. et al., 2001).


Furthermore, he was likely influenced by Richard Dawkins’ influential 1976 The Selfish Gene and its concept of memes as tangible, closed units of information (akin to genes), as well as Julian Jaynes’ bicameral mind theory. The latter posits that pre-1000 BCE humans perceived auditory-linguistic hallucinations, produced while processing their environment, as divine messages, with the evolution of these into language leading to modern consciousness when the two hemispheres of the brain allegedly merged into a unified system.



Figure 7 - A Sumerian writing tablet with pictographic pre-Cuneiform script, currently held in the Louvre [Mbzt, 2013]

Snow Crash can therefore be seen as a form of memetic warfare (Goldenberg & Finklestein, 2025), where Rife engineers a harmful ‘nam-shub’ infohazard and abuses his influence over his platforms. While the story claims the bitmap is written in “the machine code of the human visual/linguistic brainstem” and affects users via that, the more realistic aspect is the social engineering behind the attack. In Da5id’s case, a mysterious hazy-avatared user hands him a digital hypercard (link) in the metaverse, which hides executable code that flashes the virus image on his headset screen. This data enters his retina, remotely crashing both his computer and his mind into a vegetative state.


The technology for hidden remote executables indeed exists, such as invisible pixel trackers embedded into emails to log metadata; one could imagine combining this with research on subliminal visual priming affecting linguistic biases, as demonstrated by Anthony Marcel’s 1983 study. Marcel flashed masked words for mere milliseconds, and viewers’ answers were influenced even when they never consciously recognised the words. The human eye can recognise objects, or the gist of a scene, in as little as ~13ms (Potter et al., 2014), roughly one frame of a 90Hz display.
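The tracking-pixel mechanism mentioned above can be sketched minimally: the sender embeds a unique, invisible 1x1 image per recipient, and the mere act of fetching it leaks metadata. All names and the token here are illustrative, not any real tracking service:

```python
import datetime

# The classic payload: a 43-byte transparent 1x1 GIF, served as the "image"
PIXEL_GIF = bytes.fromhex(
    "47494638396101000100800000000000ffffff21f90401000000002c"
    "00000000010001000002024401003b")

def log_open(token: str, ip: str, user_agent: str) -> dict:
    """What a single pixel fetch reveals to the sender, with no user action
    beyond opening the email."""
    return {
        "recipient": token,      # token is unique per email: identifies who opened
        "ip": ip,                # coarse location of the reader
        "client": user_agent,    # device / mail app fingerprint
        "opened_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# e.g. a mail client fetching https://tracker.example/p/abc123.gif triggers:
entry = log_open("abc123", "203.0.113.7", "Mozilla/5.0")
print(entry["recipient"], entry["client"])
```

The point is that the content (a blank pixel) is irrelevant; the request itself is the surveillance channel, which is why many mail clients now proxy or block remote images by default.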


Current headsets like the Apple Vision Pro or Meta Quest 3 have even higher refresh rates of up to 120Hz to present XR more smoothly. Intense strobing at 10-25Hz, or alternating colours, can already overexcite the occipital cortex via the thalamus, causing seizures in 3-5% of epileptics, or around 1 in 4,000 people overall (Wilkins et al., 2022). Even if such concepts rest on loose deduction and fictional ideas like Stephenson’s story, this is only one of the potential new attack surfaces XR could possess.
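The arithmetic behind the single-frame claim is straightforward: a frame at 90Hz or 120Hz lasts less than the ~13ms gist-recognition threshold, while a 60Hz frame does not.

```python
# Frame duration at common display refresh rates, compared against the
# ~13 ms gist-recognition threshold reported by Potter et al. (2014).
RECOGNITION_MS = 13.0

for hz in (60, 90, 120):
    frame_ms = 1000.0 / hz   # milliseconds that one frame stays on screen
    status = "below" if frame_ms < RECOGNITION_MS else "at/above"
    print(f"{hz:>3} Hz -> {frame_ms:.2f} ms/frame ({status} threshold)")
```

At 90Hz a frame lasts ~11.1ms and at 120Hz only ~8.3ms, so a single injected frame on a modern headset sits below the conscious-recognition window, whereas a 60Hz frame (~16.7ms) does not.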



Figure 8 - Example of an epilepsy pre-warning used on a YouTube video [Mark Robotham, 2014]

While pre-warning users about potentially harmful content, like epilepsy-inducing material, can mitigate such risk, unsuspecting individuals can still fall victim to attacks like Da5id’s; people very often click malicious links if they appear trustworthy (Terranova Security, 2024). In the ‘metaverse’, much as a stranger could hand you a poison-laced item in reality, hidden executables could be delivered through seemingly benign objects.


For example, the Israeli ‘Intellexa’ spyware used advertisements to remotely load malware onto devices without any interaction (Naprys, E., 2025). Loading an ad, a metaverse place, or a 3D asset carrying a hidden executable payload could hypothetically infect a user’s device and enable an embodied-style attack. In this particular scenario, however, much as browser extensions such as NoScript prevent unwanted JavaScript from loading, something similar may be built for XR in response. To further mitigate unwanted interaction attacks, a system-wide consent mechanism, like those existing in VRChat against VR groping and personal-space intrusions, could be implemented (Baldry et al., 2024).
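A NoScript-style, deny-by-default permission gate for XR content could be sketched as follows; every class, origin, and capability name here is hypothetical, not any real XR API:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    """Deny-by-default gate: nothing executes or enters the user's personal
    space unless explicitly granted, mirroring NoScript's script blocking
    and VRChat-style personal-space bubbles."""
    allow_scripts: bool = False           # untrusted executable content blocked
    allow_avatar_proximity: bool = False  # personal-space bubble stays on
    trusted_origins: set = field(default_factory=set)

    def permits(self, origin: str, capability: str) -> bool:
        if origin in self.trusted_origins:   # explicit per-origin user grant
            return True
        defaults = {"scripts": self.allow_scripts,
                    "proximity": self.allow_avatar_proximity}
        return defaults.get(capability, False)  # unknown capabilities denied

policy = ConsentPolicy(trusted_origins={"friend.example"})
print(policy.permits("stranger.example", "scripts"))   # False: blocked by default
print(policy.permits("friend.example", "proximity"))   # True: user opted in
```

The design choice worth noting is the fall-through: capabilities the policy has never heard of are denied rather than allowed, so a novel attack surface is closed until the user consciously opens it.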


While highly advanced exploits and bugs will always remain challenging, the parallel advancement of AI/LLM systems for creating malware will likely also produce stronger AI/LLM defences against it, as suggested by Nvidia’s Jensen Huang. It is fairly reasonable to assume such defences will also be applied to XR devices in time, as will be expanded on later.


Case Study 2: Black Mirror


Black Mirror, a now Netflix-owned TV show created by Charlie Brooker, further explores hypothetical scenarios involving technology and society’s interactions through one-off short stories. It’s known for its cynical and often uncomfortable viewing, with unfavourably tragic endings to leave viewers with more questions than answers, similar to the aims of design fiction (albeit aiming for shock-factor over Dunne/Raby’s optimism).


The show’s cultural popularity lies in its realistic storytelling, which presents moral dilemmas viewers can relate to, and its use of fictional artefacts and devices/interfaces which draw parallels with real-world ones, making scenarios feel plausible and tangible.


The Entire History Of You (S1, E3)



Figure 9 - Liam with his eyes ‘activated’ whilst replaying a memory [The Mirror, 2018]

The earliest episode involving XR is The Entire History of You, aired in 2011. It follows Liam, a failing lawyer, in a world where everyone has a ‘Grain’: a small, rice-sized implant behind the ear which stores everything one sees or hears, and can replay footage internally (on the eyes) or externally (cast to a screen).


After a rocky appraisal, Liam attends a dinner party where he suspects his wife, Ffion, is flirting with another man, Jonas. Obsessed with his captured memories, he interrogates her, cross-referencing her behaviours, until he spirals into an alcoholic rage and confronts Jonas the next day. Following a skirmish, Liam forces Jonas to present his memory evidence on the TV; Ffion has indeed been unfaithful. Liam then confronts his wife and forces her to replay her recollection, revealing she had unprotected sex with Jonas and leaving Liam uncertain about his daughter’s paternity. The episode ends with Liam in an empty house, replaying memories of Ffion and his daughter as escapism. He then cuts out his Grain implant with a razor blade.


It’s unclear what happens after; earlier on, Hallam, another party attendee, seems fine after having hers forcibly removed, yet we never learn Liam’s fate.


Throughout the episode, the Grain implants are shown to be a crucial and ubiquitous part of Liam’s world. After his appraisal, he reviews his performance during a taxi ride as easily as one would use a smartphone. To pay, he verifies his ID simply by looking at a display, which wirelessly authenticates him via his Grain, returning his name and details.


This is similar to contactless bank cards or Apple Pay merged with speculative concepts like World ID, a project which [controversially] aims to use iris scanning to give everyone a cryptographically secure, unique biometric identity and “proof of humanity” (helping combat online bots, fraud, etc., but also risking the death of privacy).
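One way such a look-to-authenticate flow could avoid transmitting the biometric itself is a challenge-response over a key derived once at enrolment. This is a heavily simplified sketch under that assumption (real systems like World ID use zero-knowledge proofs, not raw HMACs, and never derive keys this naively):

```python
import hashlib
import hmac
import secrets

def enrol(iris_template: bytes) -> bytes:
    """One-time: derive a stable secret key from the biometric (one-way hash),
    so the raw template never needs to leave the device again."""
    return hashlib.sha256(b"enrolment-salt" + iris_template).digest()

def prove(key: bytes, challenge: bytes) -> bytes:
    """Device answers a fresh challenge using its key, not its biometric."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(stored_key: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(prove(stored_key, challenge), response)

key = enrol(b"liam-iris-scan")        # hypothetical template; key stays on device
challenge = secrets.token_bytes(16)   # terminal sends a fresh random nonce
assert verify(key, challenge, prove(key, challenge))
print("verified without transmitting the biometric")
```

Because each challenge is a fresh nonce, an eavesdropper who records one exchange cannot replay it, and the verifier only ever sees proof of key possession, never the iris data itself.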



Figure 10 - Liam at airport security having his last 48 hours of memory reviewed [IMDB, 2011]

In the next scene, Liam reaches the airport and is checked through a priority lane separating Grain users from non-users. He looks into a sensor, and his ID is similarly shown to security staff instantly. They ask him to replay his last 24-48 hours and briefly explain his activities. This dystopian concept resembles how facial recognition (e.g. UK ePassport gates) allows faster border control, built on the trade-off of privacy for efficiency and ease. Often, this split creates a two-tier society where those who don’t share their details face a harder path with greater inconvenience, and so most choose to give in.


Concerns about such systems have existed for a long time. Among the oldest prophecies is the “mark of the beast” from Revelation 13:11-18, where the antichrist creates a similar two-tier system enforced by a mark on the hand or forehead (read: an implant or chip) controlling who can buy, sell, or participate in society. As mentioned, another character, Hallam, functions without a Grain, but she is similarly semi-ostracised socially, losing the benefits and ease of sharing memories with peers. The best parallel today would be someone without a smartphone, social media account, or email address: doable, but very inconvenient.


In time, if XR devices do become as commonplace as smartphones, the inherently embodied nature of wearing (or potentially later implanting them in the same way) will likely create an even greater divide between users and non-users, forming, in a way, two sub-species of ‘human’ and ‘cyborg’, building on Kurzweil’s previously mentioned predictions.



Figure 11 - Liam is warned by his Grain against driving whilst drunk when it recognises the two activities in sequence [FilmAffinity, 2025]

To build on existing privacy concerns, the episode also illustrates how such a system could utilise always-active recording for automatic recognition of anything deemed ‘inappropriate’. After excessively drinking, Liam faces a warning against driving home, with his implant stating he appears to be “engaging in an unsuitable activity”. The automated voice also warns him of potential liability and insurance voiding.


Despite manually overriding the notification and driving anyway (crashing, and waking up hungover later), his memory of the event, which he replays on waking, could likely be used in court as evidence of his drunk driving and the vehicle’s destruction. Furthermore, while manual override is possible here, were it not permitted, such a system could enable more extensive abuse and full cutoff, similar to a social credit system. Automated flagging could theoretically be linked to, say, debanking for certain actions, behaviours, or expressed opinions. If such a system were then enforced by law, it would de facto kill freedom and privacy in one fell swoop.


A real example of this lies in electric cars, which can extensively log driver data for similar monitoring. For instance, the Chinese company Yuwei’s ‘driver fatigue monitor system’ uses AI analysis of dashcam footage to detect micro-expressions and labelled “risky behaviours”, like smoking, phone use, or gaze deviation, to evaluate performance with an alleged <0.5% false-detection rate. Although primarily designed for public-service drivers of trucks, buses, and commercial fleets, similar ‘drowsy detection’ systems could also be extended to civilian vehicles. For now, these systems are largely integrated into the cars rather than the drivers, and though laws may ensure compliance (Wang, Y., 2023), it would still be easier to circumvent an external system than an implant like the Grain.
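The core logic of such ‘drowsy detection’ systems reduces to thresholding sustained eye closure across video frames. The sketch below uses arbitrary placeholder thresholds, not Yuwei’s actual parameters or model:

```python
def flag_drowsy(openness: list[float], fps: int = 30,
                closed_below: float = 0.2, max_closed_s: float = 1.0) -> bool:
    """Flag a driver if per-frame eye-openness scores (0.0 = shut, 1.0 = open)
    stay below a threshold for too many consecutive frames.
    All thresholds are illustrative placeholders."""
    max_closed_frames = int(fps * max_closed_s)
    run = 0
    for score in openness:                     # one score per video frame
        run = run + 1 if score < closed_below else 0
        if run >= max_closed_frames:
            return True                        # eyes closed too long: alert
    return False

blink = [0.9] * 30 + [0.1] * 5 + [0.9] * 30        # normal blink (~0.17 s)
microsleep = [0.9] * 30 + [0.1] * 40 + [0.9] * 10  # eyes shut for ~1.3 s
print(flag_drowsy(blink), flag_drowsy(microsleep))  # False True
```

Distinguishing a blink from a microsleep is purely a matter of run length here, which also illustrates why false-detection rates hinge entirely on where these thresholds are set.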


Whilst we do have mainstream camera glasses like the Meta Ray-Bans, which are deemed somewhat socially acceptable, earlier attempts like Google Glass (released in 2013/14, soon after this Black Mirror episode) faced pushback over concerns about discreet recording and playback, to the point where Glass wearers were banned from bars in San Francisco (Levy, K., 2014). Still, despite the re-emergence of such camera glasses, a solid demographic remains rightfully opposed to them (Anzolin & Lo Nostro, 2025).


However, such abuses persist and have arguably developed further. One example is from 2024, when a duo of Harvard students used Meta’s Ray-Bans to build a research application which captured the faces of students on campus and sent them to a smartphone for facial recognition, AI-searching and generating a dossier of matched personal information. While fortunately a transparent demo, genuine threat actors may develop and use such tools discreetly, eventually even overlaying processed data onto a lens screen for teleprompting on devices like EvenReality’s G2 glasses (built on the open-source MentraOS).


Implementing effective laws to ban such devices would become challenging should usage become widespread, and risks straddling the line of authoritarianism. Well-intentioned but subversive laws for things like ‘safe driving’ can be used as false flags to coax compliance and shift norms, leading to widespread surveillance and a chilling of autonomy, similar to the aforementioned 2001 US Patriot Act. Crisis engineering often uses pretexts like “it’s a matter of national security” or “think of the children” - e.g., the UK’s 2023 Online Safety Act or the controversial 2025 EU “Chat Control” proposal(s), which can be seen as leading to censorship of anything subjectively deemed ‘adult content’. In the UK’s case, enforced ID verification allows easier tracking and arrest of people involved in ‘hate speech’ opinions and discourse, however grounded in truth or common frustration such may be (Reiners, M., 2025).


Men Against Fire (S3, E5)


Another XR-relevant episode of Black Mirror, exploring an even more extreme scenario, is Men Against Fire, first aired in 2016 as part of season 3.

To build on existing privacy concerns, the episode also illustrates how such a system could utilise always-active recording for automatic recognition of anything deemed ‘inappropriate’. After excessively drinking, Liam faces a warning against driving home, with his implant stating he appears to be “engaging in an unsuitable activity”. The automated voice also warns him of potential liability and insurance voiding.


Despite manually overriding the notification and driving anyway (crashing and waking up later hungover), his memory of the event (which he plays back on waking) could likely be used in court as evidence for his drunk driving and vehicle destruction. Furthermore, while manual override is possible, in the case it wasn’t permitted, such a system could also be used for more extensive abuse and full cutoff, similar to a social credit system. Automated flagging could theoretically later be linked to, say, debanking for certain actions, behaviours, or expressed opinions, etc. If such a system were then enforced by law, it de facto kills freedom and privacy in one fell swoop.


A real example of this is in electric cars, which can excessively log driver data for similar monitoring. For instance, Chinese company Yuwei’s ‘driver fatigue monitor system’ utilises AI analysis of dashcam footage to detect micro-expressions and labelled “risky behaviours” like smoking, phone use, or gaze deviation to evaluate performance with an alleged <0.5% false- detection rate. Although primarily designed for public-service drivers of trucks, buses, and commercial fleets, similar ‘drowsy detection’ systems could also be enforced to civilian vehicles. However, for now, these systems are largely integrated into the cars rather than the drivers, and though laws may ensure compliance (Wang, Y., 2023), it would still be easier to circumvent an external system over an implant like the Grain.


Whilst we do have mainstream camera glasses like the Meta Ray-Bans, which are deemed somewhat socially acceptable, earlier attempts like Google Glass (released in 2013/14, soon after this Black Mirror episode) faced pushback due to concerns over discreet recording and playback, to the point where Glass users were banned from bars in San Francisco (Levy, K., 2014). Still, despite the re-emergence of such camera glasses, a solid demographic remains rightfully opposed to them (Anzolin & Lo Nostro, 2025).


However, most abuses still persist and have arguably developed even further. One example is from 2024, when a duo of Harvard students used Meta’s Ray-Bans to build a research application which captured faces of students on campus and sent them to a smartphone for facial recognition, AI-searching and generating a dossier of matched personal information. While fortunately a transparent demo, true threat actors may develop or use such tools discreetly, even eventually adding the processed data onto a lens screen for teleprompting on devices like EvenReality’s G2 glasses (which are built using the open-source MentraOS).


Implementing effective laws to ban such devices would become challenging should usage become widespread, and risks crossing the line into authoritarianism. Well-intentioned but subversive laws for things like ‘safe driving’ can be used as false flags to coax compliance and shift norms, leading to widespread surveillance and a chilling of autonomy, similar to the aforementioned 2001 US Patriot Act. Crisis engineering often uses pretexts like “it’s a matter of national security” or “think of the children” - e.g., the 2023 Online Safety Act in the UK or the controversial 2025 EU “Chat Control” proposal(s), which can be seen as leading to censorship of anything subjectively deemed ‘adult content’. In the UK’s case, enforced ID verification allows easier tracking to arrest people involved in ‘hate speech’ opinions and discourse, no matter how grounded in truth or common frustration such may be (Reiners, M., 2025).


Men Against Fire (S3, E5)


Another XR-relevant episode of Black Mirror which explores an even more extreme scenario is

Men Against Fire, first aired in 2016 as part of season 3.


Figure 12 - Stripe’s MASS implant point of view showing him intel on his squadron’s target before an operation [Black Mirror Fandom, 2025]

Men Against Fire follows a soldier named Stripe, equipped with a ‘Mass’ eye implant, who embarks on a mission to find a Christian old man sheltering and feeding ‘diseased zombie people’, termed “roaches”. The implant aids him by displaying aim analysis over his gun, providing intelligence data on his target, and showing a 3D model of the building layout before the strike. Stripe kills his first two targets, one by shooting and the other in hand-to-hand combat. However, the latter “roach” flashes a light into his face, and Stripe experiences hallucinations and headaches the following day. Despite raising concerns, he is dismissed.


On his second mission, his implant visuals worsen, and he realises he can smell grass. Distracted, his squad leader is killed, and he loses control of his emotions. He and his squadmate enter the building of the shooter; Stripe finds a woman, but his squadmate kills her. Confused, Stripe then finds another woman and child. When his squadmate tries to kill them too, Stripe stops her, knocking her out to save the civilians, but gets shot in the process. He leaves with the civilians in the army car, fainting mid-drive, so they take him to their dwelling.


As Stripe recovers, he’s confused: the “roaches” are regular people. The woman explains to him that after ‘the war’, everyone’s DNA was entered into a screening database, which evolved into a eugenic cleansing. Media propagated the lie that “roaches” were diseased, so everyone accepted it. The flashlight the “roach” shone into Stripe’s eye was a weapon that transmitted a virus to disable his Mass implant (à la Snow Crash, but wireless), allowing him to ‘see reality’. Before we learn more, his squadmate returns, killing the woman and knocking Stripe out, returning him to base.


Stripe then wakes up in a blank interrogation room; his director (Arquette) enters and repeats the truth: the Mass implant aids in visually dehumanising the enemy as a literal monster and dampening sensory feedback so that soldiers suffer reduced PTSD and moral issues. The war on “roaches” is justified by their being people with undesirable genes, linked to higher rates of cancer, muscular dystrophy, criminal tendencies, and sexual deviancy, or lower IQ.


Figure 13 - Stripe is shown a video of himself agreeing to the Mass implant - yet, given modern AI developments, one could argue it could also have been deepfaked as a further way to manipulate him... [TV Obsessive, 2018]

Stripe, naturally disgusted by this, is then shown a video of himself signing a waiver to agree to the implant, which also erased his memory of the conversation. When Stripe, in denial, resists and becomes violent, Arquette remotely blinds him and plays the unfiltered version of his operation onto his vision, showing him massacring civilians. The playback is then threatened as looped torture - Stripe must choose between returning to soldierhood with his memory wiped, or being incarcerated.


We don’t ever find out what his choice is, but the episode ends with him returning home in military uniform. Whilst he sees a brightly saturated and colourful, traditional home with the woman of his dreams welcoming him, it turns out the house is grimy and graffiti’d, with no one actually there. This ending leaves the viewer with a lot to think about, suggesting that Stripe's real punishment was the false impression of being dismissed, and a fate of instigated delusion.


Figure 14 - Thermal vision to see through walls with data overlays on the EagleEye military helmet [Anduril, 2025]

Even after just ten years, we see similar military XR systems being developed, like Anduril’s EagleEye helmet, which has many similar logistical features and even looks like the Mass’ interface.


Given advances in AI-generated imagery replacement, like InpaintFusion (Mori, S., et al., 2020), which can edit footage in real time (albeit slowly and with low quality, for now), it’s reasonable to assume that this technology will eventually perform well enough to become applicable to worn XR and, eventually, implanted XR. Similar to AI image/video models, one can assume they’ll also allow for any specified input prompt, from innocent “make my vision look like anime” filters to similar replacement of military enemies with dehumanised monsters.
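As a toy illustration of the concept only (not InpaintFusion's actual method), the sketch below "removes" a masked region from a frame by filling it with the average colour of the surrounding pixels; real diminished-reality systems synthesise plausible replacement content instead of a flat fill:

```python
import numpy as np

# Deliberately naive stand-in for real-time inpainting: masked pixels are
# replaced with the mean colour of everything outside the mask. Systems like
# InpaintFusion synthesise plausible content instead of a flat fill.
def naive_inpaint(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    out = frame.copy()
    fill = frame[~mask].mean(axis=0)   # average colour of unmasked pixels
    out[mask] = fill
    return out

frame = np.zeros((4, 4, 3))
frame[:, :2] = 1.0                     # left half of the frame is white
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                  # 2x2 region to "remove" from view
patched = naive_inpaint(frame, mask)   # masked pixels become mid-grey
```

Even this crude version makes the threat model clear: whoever controls the mask controls what the wearer does and does not see.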


From the dark pattern categories theorised in XR so far (Meinhard et al., 2025), such as false urgency, forced registration, information hiding, and emotional/sensory manipulation, Men Against Fire likely qualifies for all four. While occlusion can be beneficial in some instances (e.g., Stijn Spanhove’s XR ad-blocker for Snap Spectacles glasses), malicious use on items or environments deemed harmful could negatively impact users’ reality if they have no autonomy in the matter (again, building off the aforementioned issues of a central tyrannical authority and hidden zero-click exploits).
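The autonomy point suggests one possible mitigation: occlusion as strictly opt-in, with the wearer holding final say. A hypothetical policy sketch (the category names and request fields are invented for illustration):

```python
# Hypothetical policy layer: overlays may request occlusion of real-world
# objects, but only categories the wearer has opted into are ever applied,
# and nothing marked as forced by the platform gets through.
USER_ALLOWED = {"advertising"}   # categories this user agreed to hide

def apply_occlusions(requests: list[dict]) -> list[dict]:
    """Keep only occlusion requests the user explicitly permits."""
    return [
        req for req in requests
        if req["category"] in USER_ALLOWED and not req.get("system_forced", False)
    ]

requests = [
    {"target": "billboard_12", "category": "advertising"},
    {"target": "rival_restaurant", "category": "competitor", "system_forced": True},
]
visible_edits = apply_occlusions(requests)   # only the ad-block survives
```

The design choice is that the allowlist lives with the user, not the platform - the inversion of that ownership is precisely the dark pattern.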



Figure 15 - Mock up of a potential dark pattern in XR design, where a restaurant is blocked with virtual elements to promote a paid experience, ruining the environment’s real ambience [Meinhardt, 2025]

Currently, it remains challenging to control XR software due to the limitations of its input methods; with hand gestures, eye tracking, or even BCIs, deliberate commands become harder to distinguish from incidental movement. Overriding a hijacked system is also much harder when non-tactile inputs like hand tracking could theoretically be disabled in software, should the system be hacked or remotely controlled.
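One safeguard this implies, sketched hypothetically below, is an override path that bypasses software entirely (imagined here as a physical switch wired outside the OS): even if every non-tactile channel is disabled, the user retains a way out.

```python
# Sketch of the failsafe argument above: if every non-tactile input channel
# can be disabled in software, a hardware-level switch (imagined here as a
# GPIO line outside the OS) must remain as the user's override of last resort.
class InputBus:
    def __init__(self):
        self.channels = {"hand_tracking": True, "eye_tracking": True}
        self.hardware_kill = False   # assumed physical switch, not software-settable

    def hijack(self):
        # A compromised system can silently disable every software channel...
        for ch in self.channels:
            self.channels[ch] = False

    def can_user_override(self) -> bool:
        # ...but the hardware kill path remains reachable regardless.
        return self.hardware_kill or any(self.channels.values())

bus = InputBus()
bus.hijack()
before = bus.can_user_override()   # False: all software inputs are gone
bus.hardware_kill = True           # user presses the physical switch
after = bus.can_user_override()    # True: override restored
```

For an implant like the Mass, of course, no such external switch exists - which is exactly the danger.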


Allowing a central authority such intimate access to the eyes and vision remains rather risky, and with such great power comes great responsibility. XR in the form of overlaid data could very easily be used to construct a misguided reality, leading users to harmful actions. For instance, a stalker could manipulate someone’s view so they don’t see themselves walking into a dangerous situation, or a bad actor could artificially frame someone for actions they didn’t realise they were committing because it didn’t look that way to them.


Where XR involves implanted devices or memory technology, there is likely to be a sense of permanence that could make opting out difficult once integrated.


Case Study 3: Cyberpunk 2077


Cyberpunk 2077, a 2020 video game developed by CD Projekt RED and adapted from the earlier tabletop RPG (role-playing game) Cyberpunk 2020, is set in a dystopian future where life is familiar yet different. As the player, you experience the world through the mercenary ‘V’ in Night City, an independent neon megalopolis in California. Night City is a place of corporate overlords, brutal street-gang violence, and extensive cybernetic body-modification technology.


Kiroshi Optics


The Kiroshi Optics, a crucial in-game implant, are replacement eyes which overlay data onto the retina. The Kiroshi, like most other implants in the game, were originally intended for medical purposes and later evolved into a desirable consumer product for ‘getting ahead’ of those without implants - people often replace their organic eyes with Kiroshi ones, which are deemed superior.


Your Kiroshi is linked to your ‘cyberdeck’, a BCI implant which functions as your personal ‘operating system’. This forms a later stage of the exocortex concept: compared to our current disembodied cognitive symbiosis with personal computers, the Kiroshi-plus-cyberdeck combination could be called the post-smartphone smartphone.


The game offers many chances to demonstrate natural and normalised use of this integrated system throughout the story, such as:


  • The game’s main HUD interface (health bar, mini-map, current objective)

  • Smartphone-like equivalents for calls and notification pop-ups

  • Taking point-of-view photos (integrated with the game’s photo mode)

  • Intelligent scanning of people and items for context information

  • Connecting vision to CCTV feeds, turrets, or drones to become peripheral extensions

  • Zooming to see distant objects



Figure 16 - The HUD of the Kiroshi Optics is also the game's HUD [Game UI Database, 2020]

The blurred line between game UI and speculative interface allows players to experience the Kiroshi system in a way that regular passive storytelling through 2D film or writing can’t quite simulate. The entire game is experienced from a first-person perspective, with dialogue scenes unfolding like real moments rather than interrupted cutscenes. The HUD elements intelligently hide depending on context, and you, the player, meld into your role of V through your capacity to influence in-game choices.


In a way, the game itself becomes a prototype for such optical implants, without CD Projekt RED having to build an actually functional XR/BCI implant - it is a very extensive diegetic prototype which can be used through virtual roleplaying rather than merely envisioned.



Figure 17 - The EvenReality G2 glasses provide a functionally similar HUD experience, connected to one’s smartphone via Bluetooth instead of a ‘cyberdeck’ [EvenReality, 2025]

The seamlessness of the Kiroshi’s in-game interactions could be considered a plausible goal for XR within 50 years, by 2077. Devices like the EvenReality G2 or VITURE Luma glasses already use smartphones as connected ‘cyberdeck’ bases for their HUD/screen data via Bluetooth or wired USB-C. Ironically, VITURE has even collaborated with Cyberpunk 2077 on a limited-edition version of their glasses, further blurring the line between fiction and reality.


Standalone versions of such glasses, like Meta’s Orion, remain prototypes due to technical complexity and high production costs, but economies of scale will likely reduce these limitations over the coming decade(s). Less transparent projects like DARPA’s PARC contract also aim to integrate processes similar to those seen in the Kiroshi implants, using AI recognition to guide users along gamified, tutorial-like objectives so they can complete unfamiliar tasks. Amazon is working on similar systems for delivery-driver and warehouse augmentation.


It is likely the Kiroshi will form a major inspiration for later XR devices and user interfaces, despite being fictional. Even video games can be a viable medium for design fiction, and interactive prototypes can demonstrate a speculative artefact as close as possible to a real working version, especially when a user can be immersed in a fictional world in which the prototype is appropriately situated.


Hacking and ICE


However, like any technology, implants like the Kiroshi can be utilised for harmful purposes. By 2077, Kiroshi have an attack surface comparable to that of smartphones, and hacking such devices is a major aspect of in-game combat.


Paired with one’s cyberdeck, the player can use the Kiroshi optics to scan enemies, tag them (highlight with a red outline), and select from a list of black-market-purchased ‘quickhacks’ to execute onto a target. Some common examples of these include:


  • “Ping”, which briefly highlights all connected devices in an area.

  • “Short Circuit”, which overloads a device or electrocutes a person.

  • “Overheat”, which sets something on fire.

  • “Reboot Optics”, which briefly blinds someone’s Kiroshis akin to a flashbang grenade.
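The selection menu these examples describe can be modelled as a simple registry: scanning reveals a target's exposed subsystems, and only hacks whose requirement is exposed and whose cost is affordable appear. All names and numbers below are illustrative, not the game's actual values:

```python
# Toy registry for the quickhack menu described above: scanning a target
# reveals its exposed subsystems, and only hacks whose requirement is exposed
# and whose RAM cost is affordable appear. Names/costs are illustrative only.
QUICKHACKS = {
    "Ping":          {"cost": 1, "requires": "network"},
    "Short Circuit": {"cost": 3, "requires": "power"},
    "Overheat":      {"cost": 4, "requires": "power"},
    "Reboot Optics": {"cost": 2, "requires": "optics"},
}

def available_hacks(target_systems: set, ram: int) -> list:
    """Quickhacks usable against a scanned target with the RAM available."""
    return sorted(
        name for name, spec in QUICKHACKS.items()
        if spec["requires"] in target_systems and spec["cost"] <= ram
    )

options = available_hacks({"optics", "network"}, ram=2)   # ['Ping', 'Reboot Optics']
```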



Figure 18 - One can hack others around them using their Kiroshis via quickhacks on a list menu, and also scan them for additional information [Game UI Database, 2020]

There are many further hacks, but those in these examples are all mere evolutions of existing possibilities. Ping echoes real terminal tools used to discover devices on a network. Short Circuit and Overheat are similar to the Israeli pager attack against Hezbollah in late 2024 (though it must be noted that those devices were intercepted and pre-rigged; the explosions were then executed remotely and simultaneously).


Reboot Optics, which temporarily blinds a target, meanwhile, is a case where XR forms a new attack vector, similar to the interrogation scene in Men Against Fire from Black Mirror. In all cases, showcasing this capacity can teach players about potential dangers of optical implants through narrative and play.


Likewise, enemies can also initiate counter-hacks, adding another layer of battle on top of melee and gunfights. This would be classed as part of ‘5th-generation warfare’, which involves cyberattacks and cognition - frontiers beyond traditional land/sea/air/communication warfare. In Cyberpunk 2077, however, hostile hacks are mitigated through “ICE” (Intrusion Countermeasure Electronics, a term taken from William Gibson’s Neuromancer), essentially a form of intelligent firewall preventing undesired connections by enemies. ICE can go as far as killing attackers in defence, resulting in a painful death as one’s brain is literally fried.
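Reduced to its essentials, ICE is a stateful gatekeeper: allow known peers, block unknown ones, and escalate against repeat offenders. A toy sketch (the threshold is invented, and the game's lethal response becomes a mere log entry):

```python
from collections import Counter

# Toy "ICE": known peers pass, unknown peers are blocked, and repeated
# attempts from the same source escalate to an active countermeasure
# (the game's lethal feedback, reduced here to a log entry).
class ICE:
    def __init__(self, trusted, strike_limit=3):
        self.trusted = trusted
        self.strike_limit = strike_limit
        self.strikes = Counter()
        self.countermeasures = []   # peers we have struck back at

    def inspect(self, peer: str) -> str:
        if peer in self.trusted:
            return "allow"
        self.strikes[peer] += 1
        if self.strikes[peer] >= self.strike_limit:
            self.countermeasures.append(peer)
            return "counterattack"
        return "block"

ice = ICE(trusted={"my_cyberdeck"})
results = [ice.inspect("intruder") for _ in range(3)]   # block, block, counterattack
```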


Yet, one can install ‘Self-ICE’ to line a cyberdeck with protection and actively thwart intruders. One example of an in-game enemy who cannot be hacked is Adam Smasher, who not only stops your attempts but parries with a hack of his own (rebooting your optics).



Figure 19 - Adam Smasher has his own branded “Smasher ICE”, suggesting those technical enough could develop personalised defence measures to prevent and counter general-level attacks [carbonatedshark55, 2024]

AI agents are already being developed for similar cybersecurity purposes, actively scanning systems for threats. However, most implementations remain rudimentary compared to the speculative defences of Cyberpunk 2077’s ICE (again, we are about 50 years behind). Present-day solutions like Darktrace use self-learning AI models to monitor network behaviour and flag anomalies for real-time threat hunting and danger visualisation.


Microsoft also offers AI agents in their Defender suite for autonomous detection and risk isolation. Both processes notify administrators and provide summaries of predicted risks, ideally before an attack occurs or reaches detrimental stages. As AI and XR develop together to form such embodied systems, these two aspects will likely become increasingly intertwined in a similar way, potentially leading to systems or products similar to the speculated 2077 concepts of Kiroshi and ICE.
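The self-learning monitoring described above can be caricatured as baseline-plus-deviation detection; products like Darktrace or Defender are of course vastly more sophisticated than this z-score sketch:

```python
import statistics

# Caricature of self-learning anomaly detection: learn a baseline of normal
# event rates, then flag anything more than three standard deviations away.
class AnomalyDetector:
    def __init__(self, baseline):
        self.mean = statistics.mean(baseline)
        self.stdev = statistics.pstdev(baseline)

    def is_anomalous(self, value: float) -> bool:
        if self.stdev == 0:
            return value != self.mean
        return abs(value - self.mean) / self.stdev > 3

# Baseline: typical outbound connections per minute for one device.
detector = AnomalyDetector([10, 12, 11, 9, 10, 11, 10, 12])
normal = detector.is_anomalous(11)    # within the learned baseline
spike = detector.is_anomalous(400)    # exfiltration-like burst -> flagged
```

An ICE-like system for XR implants would presumably chain something of this shape to an automated response, which is exactly where the autonomy questions raised earlier return.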


In the game, these tools are often used by you to commit hit jobs, theft, or combat/self-defence (since you, V, are a mercenary), and so are typically bought from black-market dealers. Many implants have a military origin, from private military corporations like Arasaka or Militech, costing thousands of €$, so technology like this is unlikely to be accessible to everyone. Contextually, Night City is a sovereign city-state in America, and therefore exists as an extreme case of the free market - hence why one can access trafficked or stolen military-grade implants.


Despite the lack of a direct equivalent in our reality, the Anduril helmet is the closest publicly known parallel to the Kiroshi. However, as time progresses, it is likely that older and safer versions of what is now cutting-edge technology will become available to regular consumers, as has occurred with many previously military-grade technologies like microwave ovens, GPS, or the Internet/ARPANET.


As these technologies become more common, safeguards will be needed to ensure everyone is adequately protected. Yet culture has gradually adapted to previous technologies, as McLuhan suggested - we continually evolve in response to the media of our time. Even though their consequences inevitably affect us, it is natural to assume the same will happen with XR, as new markets provide new solutions to new problems.


Cyberpunk 2077 effectively demonstrates a hypothetical future through its fictional world-building and player interaction. It plausibly depicts a gradual evolution of XR technology and its context over 50 years, and offers an immersive prototype of how XR could work in that future - not just imaginatively, but literally, by becoming part of the game itself.


Conclusion


Whilst not much literature exists specifically regarding the intersection of XR’s potential problems, or using XR fiction as a basis for advocating preferred futures, this essay has attempted to distil solutions, mitigating strategies, and principles to uphold if one desires to retain freedom, autonomy, and a future where the individual remains in control of the technology they use rather than a slave to it. With more extensive discussion and popularity within mainstream discourse, some of these may gain enough momentum to eventually become measures baked into XR, securing some of these ideals.


This dissertation should be seen more as a springboard for further research, theories, and writing about XR-specific solutions, as the field remains new and fast-paced. Concepts like the self-governing identity systems of Urbit, sovereign-hosted WebXR protocols for direct access to content like JanusXR, or the homomorphically encrypted information processing of Octra to combat the data leverage and surveillance of centralised platforms - all of which ensure some degree of sovereignty - could potentially be applied to XR with time. However, there is no denying that they will first require more extensive development, awareness, and accessibility for users who are not technical enough to know or understand them. It is therefore a duty of designers involved in XR, and technology more broadly, to ensure responsible principles and ideas are communicated and translated effectively.


It should be said that no technology in the fiction discussed in this dissertation ever truly has a solution; its use and problems simply become a fact of life, which appears to us an unfortunate and grim future we wouldn’t really want to live in. More research should be done into articulating specific and more extensive proposals regarding direct issues within XR, as this limited dissertation is a more generalised, case-by-case compilation articulating how fiction can present issues around emerging technology.


As dangers and violations using XR grow greater and more commonplace, it is important that future discourse does not trend towards permitting regulatory capture like, say, United Kingdom internet speech or EU encryption laws, which seem to struggle to accurately understand risks to user freedoms, often lobbied towards even greater harms. The ability to sovereignly navigate autonomy using one’s own tools, particularly when these tools do become integrated into the body as worn devices or potentially implants, remains crucial to preventing tyranny unlike anything historically seen. Even if ideals discussed here never reach mass-adoption due to incentives of powerful players stacked against them to cover and distort them, an element of technical literacy and critical understanding of knock-on consequences is encouraged at bare minimum.


Alongside XR, the development of AI may shift the playing board as discussed in the Cyberpunk 2077 case study; offence may grow, but so may defence. Automated coding may become so

easy to do, anyone may be able to code a tool to help themselves. AI’s emerging possibilities also should be studied further in conjunction, as its real implications also remain largely under-discussed due to its contemporaneity, technical complexity as a subject, and ideological biases in its framing (both for, as investors, and against, as sceptics).


Designed fiction for promoting ideas, as have been seen here as case studies, can also traditionally lag in production timelines - AI/LLMs used for creation should not be discredited in their potential surrounding future media depicting XR development, and therefore the area of AI- led design fiction is also another area worth researching further, if it even could be said to exist yet.


Additionally, BCIs are also an even newer emerging field which was briefly mentioned, and as a trinity in conjunction with XR and AI, it may form trajectories even Cyberpunk 2077 could have overlooked as one of the few mainstream depictions of it. More research should also be compiled regarding ways current timelines interact with XR and where they more broadly fit within its field.


Regardless, the takeaway of this dissertation is ultimately that XR’s future is in the hands of designers, as well as engineers, programmers, marketers, and businesses who will steer the direction it will unfold. Whilst the true outcome remains difficult to predict, fictitious depictions of the future can most certainly guide, influence, and educate on issues in advance of creation and production; it is only a hope that we may heed warnings and find the resolve to effectively organise and responsibly work on solutions. The potential negatives of allowing XR to devolve us may prove costly to not only our cultural ways of living, but also to ourselves and others around us.

AI agents are already being developed for similar cybersecurity purposes, actively scanning systems for threats. However, most implementations remain basic compared to the speculative defences of Cyberpunk 2077's ICE (again, we are roughly 50 years behind). Present-day solutions like Darktrace use self-learning AI models to monitor network behaviour, flag anomalies in real time, and visualise emerging threats.


Microsoft also offers AI agents in its Defender suite for autonomous detection and risk isolation. Both approaches notify administrators and summarise predicted risks, ideally before an attack occurs or escalates. As AI and XR develop together into embodied systems, the two will likely become increasingly intertwined, potentially producing systems or products resembling the speculated 2077 concepts of Kiroshi and ICE.
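The detection loop behind such tools can be sketched in miniature. The following toy Python snippet is purely illustrative (it is not Darktrace's or Microsoft's actual model, and the hosts, rates, and threshold are invented): it flags hosts whose current event rate deviates sharply from their own learned baseline, the core idea of behavioural anomaly detection.

```python
# Toy behavioural anomaly detector: learn each host's baseline,
# then score deviations from it. Hypothetical data throughout.
from statistics import mean, stdev

def flag_anomalies(history, current, threshold=3.0):
    """Return hosts whose current event rate is more than
    `threshold` standard deviations above their baseline."""
    alerts = {}
    for host, rates in history.items():
        mu, sigma = mean(rates), stdev(rates)
        score = (current[host] - mu) / sigma if sigma else 0.0
        if score > threshold:
            alerts[host] = round(score, 1)
    return alerts

# Baseline: requests/minute observed per host over recent hours.
history = {
    "10.0.0.5": [12, 14, 11, 13, 12, 15],
    "10.0.0.9": [40, 38, 42, 41, 39, 40],
}
current = {"10.0.0.5": 13, "10.0.0.9": 300}  # 10.0.0.9 spikes

print(flag_anomalies(history, current))  # only 10.0.0.9 is flagged
```

In real products the baseline is a learned model rather than a mean and standard deviation, but the shape of the loop (learn normal behaviour, score deviations, alert above a threshold) is the same.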


In the game, the player (V, a mercenary) often uses these tools for hit jobs, theft, or combat and self-defence, and so they are typically bought from black-market dealers. Many implants have a military origin, produced by private-military corporations like Arasaka or Militech and costing thousands of eurodollars (€$), so technology like this is unlikely to be accessible to everyone. Contextually, Night City is a sovereign city-state in America and exists as an extreme case of the free market, which is why trafficked or stolen military-grade implants can be obtained at all.


Despite the lack of a direct equivalent in our reality, the Anduril helmet is the closest publicly known parallel to Kiroshi. However, as time progresses, it is likely that older, safer versions of today's cutting-edge technology will become available to regular consumers, as has happened with many previously military-grade technologies such as microwave ovens, GPS, and the Internet (ARPANET).


As such technologies become more common, some guarantee will be needed that everyone is adequately protected. Yet culture has gradually adapted to previous technologies, as McLuhan suggested - we continually evolve in response to the media of our time. Even though their consequences inevitably affect us, it is reasonable to assume the same will happen with XR, as new markets provide new solutions to new problems.


Cyberpunk 2077 effectively demonstrates a hypothetical future through its fictional world-building and player interaction. It plausibly depicts a gradual evolution of XR technology and its context 50 years from now, and offers an immersive prototype of how XR could work in that future - not just imaginatively, but literally, as part of the game itself.


Conclusion


Whilst little literature exists specifically on the intersection of XR's potential problems, or on using XR fiction as a basis for advocating preferred futures, this dissertation has attempted to distil solutions, mitigating strategies, and principles to uphold if one desires to retain freedom, autonomy, and a future where the individual remains in control of the technology they use rather than a slave to it. With more extensive discussion and popularity within mainstream discourse, some of these may gain enough momentum to eventually see measures baked into XR that secure these ideals.


This dissertation should be seen more as a springboard for further research, theories, and writing about XR-specific solutions, as the field remains new and fast-paced. Concepts that ensure some degree of sovereignty - the self-governing identity systems of Urbit, sovereign-hosted WebXR protocols for direct access to content like JanusXR, or the homomorphically encrypted information processing of Octra to counter the data leverage and surveillance of centralised platforms - could potentially be applied to XR with time. However, there is no denying that they will first require more extensive development, awareness, and accessibility for users who are not technical enough to know or understand them. It is therefore a duty of designers involved in XR, and in technology more broadly, to ensure responsible principles and ideas are communicated and translated effectively.
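To illustrate what homomorphically encrypted processing means in principle, here is a toy additively homomorphic sketch in Python. It is essentially a keyed one-time pad and is not Octra's scheme (real systems use far stronger lattice-based constructions); it only demonstrates the defining property, that a server can compute on encrypted values without ever seeing the plaintexts.

```python
# Toy additively homomorphic scheme (illustrative only):
# Enc(m) = m + k mod 2^32, so ciphertext addition mirrors
# plaintext addition, and the combined key decrypts the sum.
import secrets

MOD = 2**32

def encrypt(value, key):
    return (value + key) % MOD

def decrypt(ciphertext, key):
    return (ciphertext - key) % MOD

# Client encrypts two private readings under fresh random keys.
k1, k2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
c1, c2 = encrypt(25, k1), encrypt(17, k2)

# Server computes on ciphertexts only; it never learns 25 or 17.
c_sum = (c1 + c2) % MOD

# Client decrypts with the combined key and recovers the true sum.
assert decrypt(c_sum, (k1 + k2) % MOD) == 42
```

Because addition on ciphertexts matches addition on plaintexts, a platform could aggregate user data (totals, averages) without ever holding it in readable form, which is the property the sovereignty argument above relies on.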


It should be said that no technology in the fiction discussed here ever truly receives a solution; its use and problems simply become facts of life, which appears to us an unfortunate and grim future we would not want to live in. More research should be conducted to articulate specific and more extensive proposals for direct issues within XR, as this limited dissertation is a more generalised, case-by-case compilation demonstrating how fiction can surface issues with emerging technology.


As dangers and violations involving XR grow greater and more commonplace, it is important that future discourse does not trend towards permitting regulatory capture - as with, say, United Kingdom internet speech laws or EU encryption proposals, which seem to struggle to accurately understand risks to user freedoms and are often lobbied towards even greater harms. The ability to sovereignly exercise autonomy using one's own tools, particularly as these tools become integrated into the body as worn devices or potentially implants, remains crucial to preventing tyranny unlike anything historically seen. Even if the ideals discussed here never reach mass adoption because the incentives of powerful players are stacked against them, obscuring and distorting them, an element of technical literacy and critical understanding of knock-on consequences is encouraged at a bare minimum.


Alongside XR, the development of AI may shift the playing board, as discussed in the Cyberpunk 2077 case study; offence may grow, but so may defence. Automated coding may become so easy that anyone can build a tool to help themselves. AI's emerging possibilities should also be studied further in conjunction with XR, as its real implications remain largely under-discussed due to its contemporaneity, its technical complexity as a subject, and ideological biases in its framing (both for, from investors, and against, from sceptics).


Design fiction for promoting ideas, as seen in the case studies here, also traditionally lags in production timelines. AI/LLMs used for creation should not be discredited in their potential for future media depicting XR development, and AI-led design fiction is therefore another area worth researching further, if it can even be said to exist yet.


Additionally, BCIs are an even newer emerging field, briefly mentioned here; in conjunction with XR and AI as a trinity, they may form trajectories that even Cyberpunk 2077, one of the few mainstream depictions of the combination, could have overlooked. More research should also be compiled on how current timelines interact with XR and where BCIs more broadly fit within its field.


Regardless, the takeaway of this dissertation is ultimately that XR's future is in the hands of designers, as well as the engineers, programmers, marketers, and businesses who will steer the direction in which it unfolds. While the true outcome remains difficult to predict, fictitious depictions of the future can certainly guide, influence, and educate on issues in advance of creation and production; one can only hope that we heed the warnings and find the resolve to organise effectively and work responsibly on solutions. The potential costs of allowing XR to devolve us may fall not only on our cultural ways of living, but on ourselves and those around us.

References


Images

  1. Anwar, H. (2023) Instagram revamps its user interface with minor tweaks to layout and removal of shop tab from homescreen - https://www.digitalinformationworld.com/2023/01/instagram-revamps-its-user-interface.html

  2. The Guardian (2013) NSA Prism program slides - https://www.theguardian.com/world/interactive/2013/nov/01/prism-slides-nsa-document

  3. Sciretta, P. (2010) VOTD: Real life Minority Report user interface demonstration - https://www.slashfilm.com/509384/votd-real-life-minority-report-user-interface-demonstration/

  4. MoguLive (2023) VRChat beta update partially localises the UI into Japanese and begins eye-tracking support [in Japanese] - https://www.moguravr.com/vrchat-39/

  5. Thereso, P. (2025) Public hearing discusses disinformation and rights on social networks [in Portuguese] - https://agenciabrasil.ebc.com.br/es/node/1627238

  6. Mysid (2006) TV noise.jpg - https://commons.wikimedia.org/wiki/File:TV_noise.jpg

  7. Rama (n.d.) P1150884 Louvre Uruk III tablette écriture précunéiforme AO19936 rwk.jpg - https://en.wikipedia.org/wiki/Sumer#/media/File:P1150884_Louvre_Uruk_III_tablette_écriture_précunéiforme_AO19936_rwk.jpg

  8. Robotham, M. (2016) 136.10Hz - Energy Balance - Epilepsy Warning - https://www.youtube.com/watch?v=04CYcxQQ0tQ

  9. Mirror (2018) The Entire History of You ends Black Mirror's first season in a hell where memories aren't private - https://www.mirror.co.uk/tv/tv-news/blackmirror-thirdepisode-theentirehistoryofyou-12260563

  10. IMDb (n.d.) Still from Black Mirror: The Entire History of You - https://www.imdb.com/title/tt2089050/mediaviewer/rm3090830848/?ref_=ttmi_mi_1_1

  11. FilmAffinity (n.d.) Image from Black Mirror: The Entire History of You (2011) - https://www.filmaffinity.com/us/movieimage.php?imageId=728384824

  12. Black Mirror Wiki (n.d.) MASS - https://black-mirror.fandom.com/wiki/MASS

  13. Crain, C. (2018) Black Mirror: “Men Against Fire” recap and analysis - https://tvobsessive.com/2018/04/30/black-mirror-men-against-fire/

  14. Anduril Industries (2025) EagleEye - https://www.anduril.com/eagleeye

  15. Meinhardt, L. et al. (2025) 'Mind games! Exploring the impact of dark patterns in mixed reality scenarios' - doi:10.1145/3743709

  16. Game UI Database (2020) Cyberpunk 2077 - https://www.gameuidatabase.com/gameData.php?id=439

  17. Even Realities (2025) Smart glasses with display & ambient AI prompts | Even G2 - https://www.evenrealities.com/smart-glasses

  18. Game UI Database (2020) Cyberpunk 2077 - https://www.gameuidatabase.com/gameData.php?id=439

  19. carbonatedshark55 (2024) I didn't realize Adam Smasher had anti-netrunner cyberware called Smasher ICE - https://www.reddit.com/r/cyberpunkgame/comments/1dc23lt/i_didnt_realize_adam_smasher_had_antinetrunner/


Research

  1. 21e8 (2022) 21e8 The Magic Number Company January 2022 pitchdeck - https://drive.google.com/file/d/1eSqtKqRROtGmVcmDm3pPe37djiHMUZrd/view?pli=1

  2. Ahmed, N. (2015) How the CIA made Google - https://medium.com/@NafeezAhmed/how-the-cia-made-google-e836451a959e

  3. 'AI slop' (2026) Wikipedia - https://en.wikipedia.org/wiki/AI_slop

  4. Allewelt, B. (2025) Cyborg theocracy - https://narrascaping.substack.com/p/cyborg-theocracy

  5. Amazon (2025) Amazon’s delivery glasses: the newest innovation designed to enhance the delivery experience - https://www.aboutamazon.com/news/transportation/smart-glasses-amazon-delivery-drivers

  6. Anduril (2025) Anduril’s EagleEye puts mission command and AI directly into the warfighter’s helmet - https://www.anduril.com/news/anduril-s-eagleeye-puts-mission-command-and-ai-directly-into-the-warfighter-s-helmet

  7. Anzolin, E. & Lo Nostro, G. (2025) Ray-Ban Meta glasses take off but face privacy and competition test - https://www.reuters.com/sustainability/boards-policy-regulation/ray-ban-meta-glasses-take-off-face-privacy-competition-test-2025-12-09/

  8. Apple (2025) Vision Pro - tech specs - https://www.apple.com/apple-vision-pro/specs/

  9. Baldry, M. et al. (2024) 'From embodied abuse to mass disruption: generative, inter-reality threats in social, mixed-reality platforms' - doi:10.1145/3696015

  10. Ball, M. (2024) Interviewing Epic Games Founder/CEO Tim Sweeney and Author/Entrepreneur Neal Stephenson - https://www.matthewball.co/all/sweeneystephenson

  11. BBC (2021) Apparently, it's the next big thing. What is the metaverse? - https://www.bbc.co.uk/news/technology-58749529

  12. BBC (2024) Ex-Israeli agents reveal how pager attacks were carried out - https://www.bbc.co.uk/news/articles/cwy3l02wxqdo

  13. BibleGateway (2025) Holy Bible (NIV) - https://www.biblegateway.com/passage/?search=Revelation%2013&version=NIV

  14. Bleecker, J. (2009) Design fiction: a short essay on design, science, fact and fiction. Near Future Laboratory - https://systemsorienteddesign.net/wp-content/uploads/2011/01/DesignFiction_WebEdition.pdf

  15. Brooker, C. (2011) ‘The entire history of you’, Black Mirror, series 1, episode 3

  16. Brooker, C. (2016) ‘Men against fire’, Black Mirror, series 3, episode 5

  17. Butler, S. (1872) 'The book of the machines' - https://zyg.edith.reisen/k/artifact/book_of_machines

  18. CD Projekt Red (2020) Cyberpunk 2077

  19. Center for Humane Technology (2019) Tristan Harris Congress testimony: understanding the use of persuasive technology - https://www.youtube.com/watch?v=ZRrguMdzXBw

  20. CERN (2025) The birth of the Web - https://home.cern/science/computing/birth-web

  21. Chomsky, N. (1957) Syntactic structures

  22. Darktrace (2025) Cyber AI Analyst - https://www.darktrace.com/cyber-ai-analyst

  23. Dawkins, R. (1976) The selfish gene

  24. 'Dead Internet theory' (2026) Wikipedia - https://en.wikipedia.org/wiki/Dead_Internet_theory

  25. 'Driver drowsiness detection' (2025) Wikipedia - https://en.wikipedia.org/wiki/Driver_drowsiness_detection

  26. Du Cluzel, F. (2020) Cognitive warfare - https://innovationhub-act.org/wp-content/uploads/2023/12/20210113_CW-Final-v2-.pdf

  27. Dunne, A. and Raby, F. (2013) Speculative everything: design, fiction, and social dreaming.

  28. Enard, W. et al. (2002) 'Molecular evolution of FOXP2, a gene involved in speech and language' - https://www.researchgate.net/publication/11196534_Molecular_Evolution_of_FOXP2_a_Gene_Involved_in_Speech_and_Language

  29. 'ePassport gates' (2025) Wikipedia - https://en.wikipedia.org/wiki/EPassport_gates

  30. Even Realities (2025) Even G2 - https://www.evenrealities.com/smart-glasses

  31. Féval, C. (2024) Enshittification is a feature, not a bug - https://fev.al/posts/enshittification/

  32. 'Fifth-generation warfare' (2025) Wikipedia - https://en.wikipedia.org/wiki/Fifth-generation_warfare

  33. Fight Chat Control (2026) Overview - https://fightchatcontrol.eu/#overview

  34. Foster, N. (2013) The future mundane - https://www.core77.com/posts/25678/the-future-mundane-25678

  35. GeeksforGeeks (2025) Difference between Web 1.0, Web 2.0, and Web 3.0 - https://www.geeksforgeeks.org/blogs/web-1-0-web-2-0-and-web-3-0-with-their-difference/

  36. Gibson, W. (1984) Neuromancer

  37. 'Global cultural flows' (2025) Wikipedia - https://en.wikipedia.org/wiki/Global_cultural_flows#Mediascape

  38. Goldenberg, A. and Finkelstein, J. (2025) Cyber swarming, memetic warfare and viral insurgency - https://networkcontagion.us/wp-content/uploads/NCRI-White-Paper-Memetic-Warfare.pdf

  39. Google (n.d.) ‘Organise the world’s information’ - https://about.google/company-info/

  40. Hill, G. (2024) Facebook, a computing pioneer, a secret government program, and a strange coincidence - https://whyy.org/segments/facebook-a-computing-pioneer-a-secret-government-program-and-a-strange-coincidence/

  41. How-To Geek (2023) How to use the ping command to test your network - https://www.howtogeek.com/355664/how-to-use-ping-to-test-your-network/

  42. Illich, I. (1973) Tools for conviviality

  43. Imaishi, H. (dir.) (2022) Cyberpunk: Edgerunners - https://www.netflix.com/title/81054853

  44. Institute of Network Cultures (2021) Critical meme reader: global mutations of the viral image - https://networkcultures.org/blog/publication/critical-meme-reader-global-mutations-of-the-viral-image/

  45. IPBA Connect (2025) How Apple’s IP strategy creates powerful lock-in effects in a digital ecosystem - https://profwurzer.com/how-apples-ip-strategy-creates-powerful-lock-in-effects-in-a-digital-ecosystem/

  46. Jaynes, J. (1976) The origin of consciousness in the breakdown of the bicameral mind

  47. Johnny Mnemonic (1995) Directed by R. Longo

  48. Kirby, D.A. (2010) 'The future is now: diegetic prototypes and the role of popular films in generating real-world technological development' - https://www.researchgate.net/publication/249721702_The_Future_Is_Now_Diegetic_Prototypes_and_the_Role_of_Popular_Films_in_Generating_Real-World_Technological_Development

  49. Kurzweil, R. (2006) The singularity is near: when humans transcend biology

  50. Levy, K. (2014) A San Francisco bar banned Google Glass because it doesn't want patrons being secretly filmed - https://finance.yahoo.com/news/san-francisco-bar-decided-ban-204213419.html

  51. Lindley, J. and Coulton, P. (2015) 'Back to the future: 10 years of design fiction' - doi:10.1145/2783446.2783592

  52. Marcel, A.J. (1983) 'Conscious and unconscious perception: experiments on visual masking and word recognition'

  53. Marr, B. (2021) The fascinating history and evolution of extended reality (XR) – covering AR, VR and MR - https://www.forbes.com/sites/bernardmarr/2021/05/17/the-fascinating-history-and-evolution-of-extended-reality-xr--covering-ar-vr-and-mr/

  54. McLuhan, M. (1964) Understanding media: the extensions of man

  55. McLuhan, M. (1968) War and peace in the global village

  56. McLuhan, M. (1970) Culture is our business

  57. Meinhardt, L. et al. (2025) 'Mind games! Exploring the impact of dark patterns in mixed reality scenarios' - doi:10.1145/3743709

  58. Meta (2025) Meta Quest 3 - display and optics technology - https://www.meta.com/gb/quest/quest-3/

  59. Meta (2025) Orion - the future of wearables - https://www.meta.com/en-gb/emerging-tech/orion/

  60. Microsoft (2025) Microsoft Security Copilot - https://www.microsoft.com/en-gb/security/business/ai-machine-learning/microsoft-security-copilot

  61. Milgram, P. and Kishino, F. (1994) 'A taxonomy of mixed reality visual displays' - https://www.researchgate.net/publication/231514051_A_Taxonomy_of_Mixed_Reality_Visual_Displays

  62. Minority Report (2002) Directed by S. Spielberg

  63. Mori, S. et al. (2020) 'InpaintFusion: incremental RGB-D inpainting for 3D scenes' - https://ieeexplore.ieee.org/document/9184389

  64. Naprys, E. (2025) View an ad and you’re cooked: Intellexa planted spyware with zero clicks - https://cybernews.com/security/intellexa-planted-spyware-with-zero-click-ads/

  65. NoScript (2025) What is it? - https://noscript.net

  66. Octra Labs (2024) Octra Network litepaper - https://octra.org/litepaper.pdf

  67. Ore, J. (2018) Surfing the Net is old school. Soon, we may inhabit it - https://magazine.utoronto.ca/research-ideas/technology/surfing-the-net-is-old-school-soon-we-may-inhabit-it-janusvr-james-mccrae-jonathan-ore/

  68. Orlowski, J. (dir.) (2020) The social dilemma

  69. Pondsmith, M. (1990) Cyberpunk 2020

  70. Postman, N. (1993) Technopoly: the surrender of culture to technology

  71. Potter, M.C. et al. (2014) 'Detecting meaning in RSVP at 13 ms per picture'

  72. Poulos, J. (2021) Human, forever: the digital politics of spiritual war

  73. Poulos, J. (2021) Testimony to the US Senate Committee on Commerce, Science, and Transportation, Subcommittee on Communications, Media, and Broadband: “Disrupting dangerous algorithms: addressing the harms of persuasive technology” - https://www.commerce.senate.gov/services/files/B38CCF21-4A40-4C56-BD40-79ED720B9F01

  74. PowerfulJRE (2025) Joe Rogan Experience #2422 - Jensen Huang - https://youtube.com/watch?v=3htpKYix4x8

  75. Proton (2024) What is pixel tracking? How to tell when emails are tracking you - https://proton.me/blog/pixel-tracking

  76. Reiners, M.C.R. (2025) Knock knock: what to do when police arrive over online speech - https://reiners.org.uk/knock-knock-what-to-do-when-police-arrive-over-online-speech/

  77. Resonite (2025) The future of virtual reality whitepaper - https://resonite.com/Whitepaper.pdf

  78. Richter, F. (2025) AWS stays ahead as cloud market accelerates - https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/

  79. RightToCompute (2026) #RightToCompute - https://righttocompute.ai/

  80. Seyson, S. and Wesley, W. (2025) 'Exploring the evolution of dark patterns and manipulative design on Instagram' - doi:10.1145/3706599.3719771

  81. Snowden, E. (2019) Permanent record

  82. Song, V. (2024) College students used Meta’s smart glasses to dox people in real time - https://www.theverge.com/2024/10/2/24260262/ray-ban-meta-smart-glasses-doxxing-privacy

  83. Srinivasan, B. (2022) The network state: how to start a new country - https://thenetworkstate.com/

  84. Sterling, B. (2005) Shaping things

  85. Sterling, B. (2009) 'Design fiction' - doi:10.1145/1516016.1516021

  86. Stephenson, N. (1992) Snow crash

  87. Stross, C. (2005) Accelerando

  88. @stspanho (2025) ‘I've been building an XR app for a real-world ad blocker using Snap Spectacles...’ - https://x.com/stspanho/status/1935728608514838540

  89. Terranova Security (2024) 9 examples of social engineering attacks - https://www.terranovasecurity.com/blog/examples-of-social-engineering-attacks

  90. The Guardian (2013) NSA Prism program slides - https://www.theguardian.com/world/interactive/2013/nov/01/prism-slides-nsa-document

  91. The Independent (2017) Google cofounder Sergey Brin says these 2 books changed his life - https://www.independent.co.uk/news/google-cofounder-sergey-brin-2-books-changed-life-advise-helpful-reading-a7686246.html

  92. 'Total Information Awareness' (2025) Wikipedia - https://en.wikipedia.org/wiki/Total_Information_Awareness

  93. United Kingdom Parliament (2016) Investigatory Powers Act 2016 - https://www.legislation.gov.uk/ukpga/2016/25/contents

  94. United Kingdom Parliament (2023) Online Safety Act 2023 - https://www.legislation.gov.uk/ukpga/2023/50/contents

  95. United States Congress (2001) Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT Act) Act of 2001, Public Law 107-56, 107th Congress - https://www.govinfo.gov/content/pkg/PLAW-107publ56/html/PLAW-107publ56.htm

  96. Urbit (2025) Urbit explained - https://urbit.org/overview/urbit-explained

  97. U.S. Department of War (2025) 12 military innovations that are now everyday parts of society - https://www.war.gov/News/Feature-Stories/Story/Article/4337784/12-military-innovations-that-are-now-everyday-parts-of-society/

  98. 'Utah Data Center' (2025) Wikipedia - https://en.wikipedia.org/wiki/Utah_Data_Center

  99. VITURE (2025) VITURE x Cyberpunk 2077 Luma Cyber XR Glasses - https://www.viture.com/cyberpunk2077?color=Jet+Black&size=Regular+(IPD+58-70mm)

  100. Vox (2023) Snow Crash author Neal Stephenson predicted the metaverse. What does he see next? - https://www.vox.com/technology/2023/3/6/23627351/neal-stephenson-snow-crash-metaverse-goggles-movies-games-tv-podcast-peter-kafka-media-column

  101. VRChat Inc. (2017) VRChat

  102. W3Techs (2025) Usage statistics and market share of Cloudflare - https://w3techs.com/technologies/details/cn-cloudflare

  103. Wang, Y. (2023) Regulations - drivers for mandating driver monitoring systems - https://www.idtechex.com/en/research-article/regulations-drivers-for-mandating-driver-monitoring-systems/30322

  104. Wilcox, M. (2020) 21e8 Index explainer - https://www.youtube.com/watch?v=6HYdTtIoyts

  105. Wilkins, A. et al. (2022) 'Visually sensitive seizures: an updated review by the Epilepsy Foundation'

  106. World (2025) World ID - https://world.org/world-id

  107. Xavier, H.S. (2024) The Web unpacked: a quantitative analysis of global Web usage - https://arxiv.org/html/2404.17095v2

  108. Xerox (2021) DARPA awards PARC contract to accelerate learning of complex skillsets through artificial intelligence and augmented reality - https://www.news.xerox.com/news/darpa-awards-parc-contract-to-accelerate-learning-of-complex-skillsets-through-artificial-intelligence-and-augmented-reality

  109. Yuwei (2025) Driver fatigue monitoring system - https://en.yuweitek.com/driver-fatigue-monitor-system.html

  110. Zarraelli, P. (2025) Peter Thiel’s Misunderstood Vision - https://sfl.media/peter-thiel-isnt-anti-democracy-hes-post-democracy/

  111. ZDNET (2016) NSA is so overwhelmed with data, it's no longer effective, says whistleblower - https://www.zdnet.com/article/nsa-whistleblower-overwhelmed-with-data-ineffective/

  112. Zuboff, S. (2019) The age of surveillance capitalism: the fight for a human future at the new frontier of power