Algorithms might never solve Facebook's big data dilemma

Facebook should put humans, not machines, in charge of content on the platform, says Overture editorial lead Chris Mohney.

Dan Patterson of TechRepublic asks Overture editorial lead Chris Mohney how Facebook should deal with the dilemma of content on the platform. The following is an edited transcript of the interview.

Dan Patterson: Before we talk about future technologies, and again, we're just talking about opinions here, and when it comes to internet technology, plenty of people have them.

But in your opinion, how should Facebook deal with this massive dilemma of content on the platform? On the one hand, this is a platform, and it's a massive platform, scaled in a way that would be nearly impossible for any human to edit. On the other hand, as you said a moment ago, there is a certain responsibility for platforms to edit the content they host. So how do you bridge that gap of scale plus human need?

Chris Mohney: What Facebook, and companies like Facebook, have to resign themselves to is that there is no perfect solution, and there won't be one anytime soon. I'm not convinced that there will never be a way, algorithmically, or with all the advanced science we're dealing with now in machine learning and related fields, to look at vast amounts of content, even human-generated UGC, and police these things better, or channel them better.

But we're not anywhere close to that now, and we have a very immediate need to deal with this. The constant backpedaling away from just hiring some people to help in the short term, and let's call the short term five years, or ten, is self-defeating. It's also insisting on a philosophical point that may be provable someday, but there are very real consequences of this unwillingness to engage with the problem, consequences that have been happening recently and will continue for the foreseeable future.

And I'd be fine with testing out more advanced technology to work on these things while dealing with the problem at hand, which can absolutely be at least ameliorated by some human attention, as it has been in smaller endeavors. Yes, you're not going to deal with the vast output of a Facebook or a Twitter to everyone's satisfaction. But you could handle it so much more capably than it's being handled now if you simply admitted it's something you have to hire some people to deal with.

Dan Patterson: Do you hire people? How do you teach editorial instincts to an engineering culture?

Chris Mohney: Well, let me put it this way: one of my other jobs, which I don't think we touched on, was working at Tumblr in their experimental editorial operation. It was sort of a marketing thing, but it also had aspects that dealt with the culture of the company and how it interacted with the broader culture of the nation... or the world, rather. One of the many things that impressed me about Tumblr was that, for quite some time, they had a very serious dedication to a trust and safety division, which certainly had a wide variety of technological tools to help them try to keep the platform safe for people.

But they acknowledged sort of what I was getting at just now, which is that until we can perfect the tools, people have to do it. And so they had people there who were the sweetest people you can imagine, very dedicated, true believers in Tumblr and in tech and all the possibilities thereof. But most of their job was simply to look at the most horribly offensive things you can imagine: incredibly violent, sexual, pornographic, combinations of all these things, just the worst possible things you can imagine.

Because if they didn't look at it, someone else was going to find it. Those people basically fell on that grenade every single day. It's not the kind of job that I could ever do myself, but I admire their idealism and dedication, and the fact that the company was willing to devote those resources to that kind of job. I'm not saying that every company needs something quite like that, though probably most of them do. But even on the fringes of things that are simply deceptive, or fake, or crude, or violations of this or that logistical issue, there's just no reason... Especially for these incredibly wealthy platforms and companies, there's no reason not to simply have some people help as an interim solution. Because until they have the technology to do this, they're just hurting everyone, in my opinion.

And it just... constantly, easily avoidable mistakes and offenses and problems and scandals keep happening over and over again. Case in point: this ridiculous thing Mark Zuckerberg said about Holocaust denial. That was such an avoidable error to make in public, much less in the policy it represents.
