As I wrote last week, viewing Google, YouTube and Facebook as if they were publishers is a category error. They aren’t publishers; they’re platforms which permit a vast number of people and organisations to publish their content.

I have some sympathy with them too. Policing millions of users would be expensive for the platforms and, ultimately, it would be almost impossible to satisfy everyone. The reason is simple: some posted content is objectionable to advertisers yet breaches no laws and may well be legitimate free speech. How are Google and Facebook to deal with this? And, further complicating matters, how should they handle the fact that different advertisers hold different views on how ‘edgy’ content may be, and how tolerant they are of content that departs from mainstream acceptance?

A simple example: I doubt anyone would want to curtail individuals or groups raising concerns about environmental standards and about companies’ efforts to thwart, subvert or violate them. However, those same companies would not want their advertising to appear alongside that content.

The good news is that verification solutions already exist that can block such content for an advertiser. Platforms like Google, Facebook and YouTube need not invest the time and expense required to build this functionality themselves. As an example, at Innovid we are tightly integrated with multiple verification partners who can identify objectionable content (as well as spammed locations, bots, spiders and triggering keywords) to prevent a video from being served into an unsafe environment. Applying such technology typically results in about 8% of a campaign not being served, protecting the advertiser and either saving them costs or freeing budget to reinvest in more media.
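To make the mechanism concrete, here is a minimal sketch of pre-serve brand-safety filtering. In practice a verification partner’s API makes these decisions; the keyword blocklist, bot signatures and impression fields below are purely hypothetical illustrations, not Innovid’s or any partner’s actual integration.

```python
# Hypothetical blocklists; a real integration would query a
# third-party verification partner rather than local sets.
UNSAFE_KEYWORDS = {"violence", "hate"}
KNOWN_BOT_AGENTS = {"BadBot/1.0"}

def is_safe_to_serve(page_keywords, user_agent):
    """Return True if the ad may be served into this environment."""
    if user_agent in KNOWN_BOT_AGENTS:
        return False  # invalid (non-human) traffic
    if UNSAFE_KEYWORDS & set(page_keywords):
        return False  # objectionable page context
    return True

def filter_impressions(impressions):
    """Split a campaign's impression requests into served and blocked."""
    served, blocked = [], []
    for imp in impressions:
        target = served if is_safe_to_serve(imp["keywords"], imp["ua"]) else blocked
        target.append(imp)
    return served, blocked
```

The blocked share of impressions is what the ~8% figure above refers to: requests the advertiser never pays for, or budget that can be redirected to safe inventory.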

The main challenge today is that Google, YouTube and Facebook don’t fully allow these third-party solutions to be applied to campaigns running on their platforms.

The platforms are presented with a crucial decision. Advertisers are demanding brand safety and control of where their content is displayed. Do the platforms take on the task and investment of building their own verification and policing all their content? Or do they open their “walled gardens” and expose more of their data to third parties?

In our view, the platforms could meet advertisers’ needs more quickly, and without each having to invest in its own technology, if they opened themselves to third-party verification. Some advertisers will also want to standardise on a single verification platform and manage their preferences once, rather than doing so for each publisher: once for YouTube, again for Facebook, and so on. This seems a more customer-oriented approach than each platform building its own verification tools and forever facing questions of conflict of interest between ad sales and brand safety.
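The “manage preferences once” idea can be sketched as a single advertiser-level profile translated into per-publisher settings, instead of maintaining a separate configuration inside each walled garden. All of the names, fields and publishers below are hypothetical illustrations of the pattern, not a real API.

```python
# One shared brand-safety profile for the advertiser (hypothetical fields).
ADVERTISER_PROFILE = {
    "blocked_categories": ["adult", "violence"],
    "blocked_keywords": ["scandal"],
}

# Publishers the campaign runs on (illustrative list).
PUBLISHERS = ["YouTube", "Facebook", "OpenWeb"]

def settings_for(publisher, profile):
    """Translate the single shared profile into a per-publisher payload."""
    return {"publisher": publisher, **profile}

# Generated once from one source of truth, rather than configured
# by hand inside each platform's own tools.
campaign_settings = [settings_for(p, ADVERTISER_PROFILE) for p in PUBLISHERS]
```

The design point is that the profile lives with the advertiser (or its verification partner), so a change to the blocklist propagates to every publisher at once.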

We are at a momentous time: the industry, collectively, needs to restore advertiser trust and enable advertisers to recommence their investments in digital. Opening up to third parties will expose more of the platforms’ data, but the performance of these platforms is strong, and showing this more clearly could accelerate the growth of digital overall.