Problematic for Whom?

Facebook claims that human curation is problematic. But problematic for whom?

According to Facebook, it developed two different options for how the 2016 clickbait update would work. One was a classifier adapted from the 2015 hoax detector, which relied on user reports; the other was a machine learning classifier built specifically to detect clickbait algorithmically.

Facebook says it found that the purpose-built machine learning clickbait detector performed better, with fewer false positives and false negatives, so that's the version Facebook released. It's possible that the unreleased version is what Gizmodo is referring to as the shelved update. Facebook tells me the update wasn't shelved because it disproportionately demoted right-wing stories, but political leaning could still be a concern.
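The comparison Facebook describes, preferring whichever classifier produces fewer false positives and false negatives, can be sketched roughly as below. Everything here is invented for illustration (the toy labels, the predictions, the `error_counts` helper); this is not Facebook's code, just a minimal picture of how two classifiers might be scored against the same ground truth.

```python
# Hypothetical sketch: comparing two classifiers by their false
# positive and false negative counts on the same labeled examples.

def error_counts(predictions, labels):
    """Count false positives and false negatives against ground-truth labels."""
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    return fp, fn

# Toy ground truth: 1 = clickbait, 0 = not clickbait (invented data)
labels = [1, 1, 0, 0, 1, 0, 0, 1]

# Invented outputs of the two hypothetical approaches on the same items
report_based = [1, 0, 1, 0, 1, 0, 1, 1]  # user-report-driven classifier
ml_based     = [1, 1, 0, 0, 1, 0, 0, 0]  # purpose-built ML classifier

print(error_counts(report_based, labels))  # (2, 1)
print(error_counts(ml_based, labels))      # (0, 1)
```

On this invented data the purpose-built classifier makes strictly fewer errors, which is the kind of result Facebook says drove its choice.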

The choice to rely on a machine learning algorithm rather than centering the fix on user reports aligns with Facebook's recent push to reduce the potential for human bias in its curation, which has itself been problematic. (Source)