Machine Learning Can’t Flag False News, New Studies Show

Current machine learning models aren’t yet up to the task of distinguishing false news reports, two new papers by MIT researchers show. From a report: After different researchers showed that computers can convincingly generate made-up news stories without much human oversight, some experts hoped that the same machine-learning-based systems could be trained to detect such stories. But MIT doctoral student Tal Schuster’s studies show that, while machines are great at detecting machine-generated text, they can’t identify whether stories are true or false. Many automated fact-checking systems are trained using a database of labeled statements called Fact Extraction and Verification (FEVER). In one study, Schuster and his team showed that fact-checking systems trained with machine learning struggled to handle negative statements (“Greg never said his car wasn’t blue”) even when they knew the corresponding positive statement (“Greg says his car is blue”) was true. The problem, say the researchers, is that the database is filled with human bias. The people who created FEVER tended to write their false entries as negative statements and their true statements as positive statements — so the computers learned to rate sentences containing negations as false. That means the systems were solving a much easier problem than detecting fake news.
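To see how that kind of annotation bias plays out, here is a minimal sketch (not from the MIT papers, and using made-up claims) in which a simple bag-of-words classifier is trained on data where refuted claims happen to contain negation words. The classifier then latches onto the negation cue itself and mislabels a true but negatively phrased claim:

    # Minimal sketch, assuming scikit-learn and synthetic training data.
    # It illustrates the spurious-cue problem described above, not the
    # researchers' actual models or the real FEVER dataset.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical claims: "supported" ones are worded positively,
    # "refuted" ones contain negations, mirroring the bias in the story.
    claims = [
        ("Greg says his car is blue", "SUPPORTED"),
        ("The bridge opened in 1937", "SUPPORTED"),
        ("The album was released by the band", "SUPPORTED"),
        ("Greg never said his car was blue", "REFUTED"),
        ("The bridge did not open in 1937", "REFUTED"),
        ("The album was not released by the band", "REFUTED"),
    ]
    texts, labels = zip(*claims)

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # A claim consistent with "Greg says his car is blue", but phrased
    # with negations -- the negation words dominate the prediction.
    print(model.predict(["Greg never said his car wasn't blue"]))
    # likely ['REFUTED'], even though the claim is compatible with the truth

Because words like “never” and “not” only ever appear in refuted training examples here, the model treats negation itself as evidence of falsehood — the same shortcut the researchers say systems trained on FEVER were exploiting.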


Source:
https://tech.slashdot.org/story/19/10/17/1928231/machine-learning-cant-flag-false-news-new-studies-show?utm_source=rss1.0mainlinkanon&utm_medium=feed