The Global Alliance for Responsible Media (GARM), an industry effort that unites marketers, media agencies, media platforms, and industry associations to safeguard the potential of digital media and combat harmful content online, has set out a brand safety floor and suitability framework for digital media.
These steps are essential to creating a safer digital media environment that enriches society through content, communications, and commerce, recognising that harmful content and its creators threaten the potential of digital media and disrupt the connections everyone seeks.
GARM’s first step in safeguarding the positive potential of digital media was to provide platforms, agencies, and marketers with a framework for defining safe and harmful content online.
Its position is that the challenge of harmful online content cannot be addressed if one is unable to describe it using consistent and understandable language. To this end, GARM has developed and will adopt common definitions to ensure that the advertising industry is categorizing harmful content in the same way across the board.
In consultation with experts from GARM’s NGO Consultative Group, it has identified eleven key categories in which to establish standards, an essential foundation for stopping harmful content from being monetised through advertising. Individual GARM members, whether marketers, agencies, or media platforms, will adopt these shared principles in their operations.
It believes that, together, these definitions will form the cornerstone for the industry to find the balance between supporting responsible speech, bolstering public safety, and providing for responsible marketing practices.
With a framework of consistent categories in place, GARM hopes to improve transparency in the availability, monetisation, and inclusion of content within advertising campaigns. This, it said, is essential to helping platforms, agencies, and advertisers make informed decisions about advertising.
The GARM brand safety floor and suitability framework cover the following content categories: adult and explicit sexual content; arms and ammunition; crime and harmful acts to individuals and society, and human rights violations; death, injury or military conflict; online piracy; and hate speech and acts of aggression.
The other categories are obscenity and profanity, including language, gestures, and explicitly gory, graphic or repulsive content intended to shock and disgust; illegal drugs, tobacco, e-cigarettes, vaping, and alcohol; spam or harmful content; and terrorism.
The document, which sets out content in these categories that is not appropriate for any advertising support, as well as sensitive content that is appropriate for advertising support with enhanced advertiser controls, can be downloaded from here.