Social media is the platform most often identified by New Zealand’s Pacific, Māori, Asian, and Muslim communities as a source of offensive content, according to a report by the Broadcasting Standards Authority (BSA). The report, titled “Freedom of Expression and Harms Impacting Diverse Communities,” surveyed 493 people and found that approximately one-third of participants from Māori, Pacific, and Muslim backgrounds had encountered offensive, discriminatory, or controversial material in the past six months.
Social media received the most criticism for harmful content, followed by free-to-air television and online news sites. The BSA highlighted that half of the diverse audiences surveyed had actively avoided broadcasts due to perceived racist comments, anti-Māori sentiments, biased coverage of the Palestine/Israel conflict, or references labeling certain individuals as criminals or terrorists.
BSA chief executive Stacey Wood noted that while news media generally uphold broadcasting standards, there is a broader societal issue related to social cohesion and the need for kindness. The report indicated serious community-level impacts resulting from the public expression of offensive views, which can normalize negative behavior, affect community aspirations, and reinforce harmful stereotypes.
Denise Kingi-‘Ulu’ave, chief executive of the Pacific mental health organization Le Va, emphasized the toll that ongoing discrimination takes on self-esteem and mental health. The survey found that most respondents believe freedom of expression should be balanced against respect for the views of others, with 56 percent of Māori, 60 percent of Pacific, 45 percent of Asian, and 41 percent of Muslim participants favoring tighter restrictions to prevent harm.
The report suggests that mainstream media can lend legitimacy to harmful themes that circulate on social media. The anonymity afforded by platforms such as talkback radio and social media encourages more extreme viewpoints and erodes the boundaries that regulation is meant to hold. Wood stressed the need for regulatory reform to address the challenges posed by unregulated online environments.
Kingi-‘Ulu’ave described the report as accurately reflecting the experiences of Pacific communities and urged politicians not to propagate beliefs that incite discrimination or hate speech. She cited specific incidents in which derogatory comments led to threats and trauma within the Ministry for Pacific Peoples in Auckland.
She also emphasized the importance of media literacy within Pacific communities, to help them navigate media content and engage critically with trustworthy sources. New Zealand Muslim community leader Anjum Nausheen Rahman highlighted the dangers of unregulated online media, where misinformation spreads unchecked and undermines community engagement.
Academic Malini Hayma pointed to the media’s role in fostering more inclusive representation of diverse community groups, calling for storytelling that accurately reflects the experiences of all communities in New Zealand.
Despite the prevalence of offensive viewpoints, many respondents struggled to pinpoint specific sources. The report cited instances of negative portrayals, particularly of Māori and Pacific peoples, as well as perceived bias in reporting on the Palestine/Israel conflict.
The findings drew criticism from Jonathan Ayling, chief executive of the Free Speech Union, who warned that the emphasis on addressing harmful speech could lead to censorship of unpopular views. He asserted that existing laws adequately deal with speech that incites violence, and that stifling expression harms human dignity.
The report also noted that only a small fraction of respondents took formal action against offensive broadcast content, citing a cultural reluctance to complain. Most preferred to discuss their experiences with family and friends or to raise concerns through other channels, such as the Human Rights Commission.
These findings come as governments around the world, including those of Australia and France, move to impose stricter regulations on social media platforms to shield young people from harmful content. The EU has introduced a legal framework to hold platforms accountable, while new laws and lawsuits are emerging in the U.S., where significant regulations are expected to take effect by 2026.