After the recent farce of a congressional hearing about “anti-conservative bias” on social media, Facebook wants the world to know it’s still taking this alleged issue seriously.
Facebook shared Wednesday that it will undergo a thorough political bias review, led by former Republican senator Jon Kyl and his lobbying firm, Covington & Burling. According to the firm's website, Kyl is an expert in helping corporations navigate domestic and international policy. The Heritage Foundation, a conservative think tank, will also take part in the independent review.
Additionally, civil rights experts will conduct an audit of civil rights abuses enabled by Facebook's platform, an issue that stems from reports of racially biased ad targeting for housing and jobs on Facebook. Laura Murphy, former director of the ACLU's legislative office, will lead the inquiry alongside Relman, Dane & Colfax, a law firm experienced in housing, lending, employment, and public accommodation discrimination.
“We look forward to working with Facebook on this very important audit project,” a Relman, Dane & Colfax representative told Mashable.
Facebook undertook both audits voluntarily.
“Getting outside feedback will help us improve over time — ensuring that we can more effectively serve the people on Facebook,” Joel Kaplan, Facebook's vice president of global policy, said in an emailed statement to Mashable.
The announcement of the audits came hours before the second day of Facebook’s developer conference, F8. Nestled within a larger discussion of artificial intelligence, Facebook research scientist Isabel Kloumann focused specifically on the importance of ethics and diversity when crafting AI tools.
“AI isn’t exactly our child, but it is our responsibility,” Kloumann said during her keynote presentation. “So let’s all work together to teach it.”
Kloumann announced an algorithmic auditing system called “Fairness Flow.” The system reportedly analyzes algorithms (currently, those used in job advertising) for bias, to help ensure equal opportunity in Facebook's automation.
“Our efforts to eliminate algorithmic bias and irresponsible AI deployment include careful consideration of the code we generate as well as the people we hire to write, manage, and approve those AI systems,” Facebook wrote in an F8 blog.
Facebook is working to scale Fairness Flow beyond job application ads so that the algorithm auditor can be applied across products.
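Facebook has not published how Fairness Flow works under the hood, but the general idea of auditing an ad-delivery algorithm for disparate impact can be sketched with a standard fairness check: compare each demographic group's selection rate against the best-served group's, flagging any group that falls below the "four-fifths" threshold commonly used in US employment-discrimination analysis. The function and data below are hypothetical illustrations, not Facebook's actual system.

```python
from collections import defaultdict

def audit_selection_rates(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    best-served group's rate (the 'four-fifths rule'). `decisions` is an
    iterable of (group, was_shown) pairs from a hypothetical delivery log."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, was_shown in decisions:
        total[group] += 1
        shown[group] += int(was_shown)
    rates = {g: shown[g] / total[g] for g in total}  # per-group delivery rate
    best = max(rates.values())
    # True means the group is flagged for potential disparate impact
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical log: group A sees the job ad 80% of the time, group B 40%
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 40 + [("B", False)] * 60)
print(audit_selection_rates(log))  # → {'A': False, 'B': True}
```

A real auditor would need far more than this (statistical significance tests, many protected attributes, and access to the model's internals), which is why the transparency questions raised later in this story matter.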
This initiative and the civil rights audit come after several reports that Facebook allowed advertisers to target housing and job ads in ways that excluded minority groups; what one social media platform calls a “targeted ad platform,” a civil rights attorney calls “housing discrimination.”
The algorithmic auditing system in conjunction with the independent, outside review could represent meaningful change to a real problem.
That’s in stark contrast with the (questionable) conservative bias allegations, which Mashable has argued represent a fundamental misunderstanding of how the Facebook algorithm operates. ThinkProgress points out that this bias review will be run by a conservative think tank and a Republican lobbying arm, with no liberal representation. Mashable has reached out to the lobbying firm to ask how and why it was selected, and what experience it has in determining bias. We will update this story if and when we hear back.
The bias allegations stem from Facebook flagging some conservative content as inappropriate or offensive, an issue congresspeople hammered Mark Zuckerberg on during his congressional testimony. Zuckerberg and Facebook have said the flagging was done in error. The Judiciary Committee did not return Mashable’s request for comment on whether the review will satisfy its concerns about the matter.
Though Facebook’s automated processes are clearly under review for racial bias, at least internally via Fairness Flow, Facebook said it was too soon to say whether the anti-conservative bias review will include an examination of its automated flagging systems.
With both issues, Facebook appears willing to put itself under the microscope to improve equitability on its platform. But the efficacy of the audits will depend on how much Facebook is actually willing to share about its algorithms, machine learning, and automation. Without that transparency, we’re left to trust Facebook without evidence. And that trust might not be something Facebook has earned.