Summary

We wanted to understand how technology (in this case, VR) might be used to explore factors that drive disruptive digital behaviours, the social impacts of these behaviours, and how to ensure safe environments so that all New Zealanders are able to participate in, contribute to and benefit from the digital world.

Related blog post by Karyn Brice on the VR tool work: VR: The challenge of creating a story arc in immersive experiences

Cross-pollination of ideas

Following New Zealand’s 2019 terror attacks, a team of 20 to 25 police was relocated for six months to the Lab when Police HQ became the focus of the terror response. The police team was developing an AI digital twin, “Bella”, to manage high-volume, low-level interactions with the public, such as reporting lost property, so staff could be freed to address more serious crime.

As with others who came in just to use the facility, there was passive idea-sharing – which in this case coincided with the Lab’s Emerging Tech team having just started a project with DIA’s Digital Rights and Ethics team to understand how governments might use technologies such as virtual reality (VR), in this case to enable policy-makers to get a greater sense of empathy for the people and situations their policies are addressing.

What happened

The rise in online communication has seen a shift in socially acceptable behaviour as a result of the anonymity and the ‘safety’ of a screen between people. Politeness is often replaced with confrontation, and at the extreme end there are “snowflakes”, “flamewars”, “doxing” and industrialised outrage.

The project focussed on the cause-and-effect chains of online hate speech. It developed a VR tool, Online Social Norms VR, an experience that gives a sense of how the public service may begin to use near-future technologies as a research strategy to inform policy development for complex social issues.

In this case, two scenarios were created – enabling a person to sit behind a set of screens and ‘become’ someone sending online hate messages. They could then step into a second scenario and become the recipient of the hateful messages they’d just sent: first the perpetrator, then the victim.
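To make the two-scenario flow concrete, here is a minimal conceptual sketch in TypeScript. It is not the published prototype’s code (the open-source repository on GitHub is the authoritative source); the class and method names below are invented purely to illustrate the perpetrator-then-victim sequence described above.

```typescript
// Conceptual sketch only – not the actual Online Social Norms VR prototype.
// Models the two scenarios: the participant first authors messages as the
// "perpetrator", then has those same messages replayed to them as the "recipient".

type Role = "perpetrator" | "recipient";

class TwoScenarioSession {
  private role: Role = "perpetrator";
  private messagesSent: string[] = []; // messages authored in scenario one

  // Scenario one: the participant sits 'behind the screens' and sends messages.
  sendMessage(text: string): void {
    if (this.role !== "perpetrator") {
      throw new Error("Messages can only be sent in the perpetrator scenario");
    }
    this.messagesSent.push(text);
  }

  // Transition to scenario two: the participant becomes the recipient,
  // and the experience replays their own messages back to them.
  switchToRecipient(): string[] {
    this.role = "recipient";
    return [...this.messagesSent];
  }
}

// Example walk-through of the flow.
const session = new TwoScenarioSession();
session.sendMessage("You don't belong here.");
session.sendMessage("Nobody wants to hear from you.");
for (const msg of session.switchToRecipient()) {
  console.log(`Received: ${msg}`);
}
```

The design point the sketch captures is that the second scenario is built entirely from the participant’s own input, which is what creates the shift in perspective from perpetrator to victim.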

The VR tool has been used to give a sense of how the public service may begin to use near-future technologies in exploring aspects of a strategic policy issue – in this case, allowing analysts to see and understand the drivers and impacts of online hate speech. We also show how near-future tech such as VR can support future policy development that is better informed by evidence-led, contextual experience.

Alongside the tool, the team created an ethics plan as a guide to making sure the ethical risks in such a project are considered and justifiable, and that measures are put in place to manage them.

International interest

This project was driven by the Digital Rights and Ethics team, part of the Government Chief Digital Officer’s office within the Department of Internal Affairs. It was presented and shared internationally as part of New Zealand’s contribution to the Digital Nations group, a collaborative network of the world’s leading digital governments. The current prototype is open source and can be accessed via GitHub.

Offering public servants a look into the future

In late 2019, the Emerging Tech team also created a visual, the “20 Year Emerging Technology Landscape”, that it shared across government agencies to indicate new technologies the public service may need to consider.