Facebook admitted, somewhat nonchalantly, on Thursday that its super-soaraway AI algorithms failed to automatically detect the live-streamed video of last week's Christchurch mass murders.
The antisocial giant has repeatedly touted its fancy artificial intelligence and machine learning techniques as the way forward for tackling the spread of harmful content on its platform. Image-recognition software can't catch everything, however, not even with Silicon Valley's finest and highest-paid engineers working on the problem, so Facebook continues to rely on, surprise surprise, humans to pick up the slack in moderation.
There’s a team of about 15,000 content moderators who review, and allow or delete, piles and piles of psychologically damaging images and videos submitted to Facebook on an hourly if not minute-by-minute basis. The job can be extremely mentally distressing, so the ultimate goal is to eventually hand that work over to algorithms. But there’s just not enough intelligence in today’s AI technology to match cube farms of relatively poorly paid contractors.
Facebook blamed the failure of its AI software to spot the video, both as it was broadcast and soon after, as copies were shared across its platform, on a lack of training data. Today's neural networks need to inspect thousands or millions of labeled examples to learn the patterns that let them identify things like pornographic or violent content.
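To see why labeled examples matter, here is a minimal, hypothetical sketch of the kind of supervised learning involved: a single perceptron nudged toward a decision boundary by each labeled example it sees. This is a toy on synthetic two-dimensional data, not Facebook's system; production moderation models are deep networks trained on millions of labeled images, but the principle, no examples means no learned pattern, is the same.

```python
import random

random.seed(0)

# Hypothetical stand-in for labeled training data: two clusters of 2-D
# "feature vectors", one per class (e.g. acceptable vs. violating content).
def make_examples(n, label, cx, cy):
    return [((cx + random.uniform(-1, 1), cy + random.uniform(-1, 1)), label)
            for _ in range(n)]

data = make_examples(50, 0, -3.0, -3.0) + make_examples(50, 1, 3.0, 3.0)
random.shuffle(data)

# A single perceptron: weights, bias, and a hard threshold.
w = [0.0, 0.0]
b = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Each labeled example nudges the weights toward the correct answer;
# without enough (or any) examples, the model never learns the boundary.
for _ in range(20):
    for x, y in data:
        err = y - predict(x)
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

On this cleanly separable toy data the perceptron converges quickly; the hard part in moderation is that real examples of novel atrocities are, mercifully, rare, which is exactly the training-data gap Facebook described.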
If I were a cynic, I would conclude Facebook probably wanted to keep the video up, just to get the clicks and ad revenue.
Wait… I am a cynic, and my guess is that Facebook milked this for all it's worth.