Twitter’s META team is made up of some of the tech industry’s most prominent critics, with two more joining them soon: Sarah Roberts and Kristian Lum.
Machine learning engineer Ari Font was concerned about the future of Twitter’s algorithms. It was mid-2020, and the head of the company’s algorithm ethics and accountability research team had just left Twitter. For Font, the future of ethics research at the company was unclear.

At the time, Font was in charge of Twitter’s machine learning platform teams – part of Twitter Cortex, the company’s central machine learning organization – but she believed ethics research could transform the way Twitter relies on machine learning. She had always held that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical applications of AI.

So she volunteered to help rebuild Twitter’s META team (META stands for Machine Learning, Ethics, Transparency and Accountability) and embarked on what she called a “roadshow” to persuade Jack Dorsey and his team that machine learning ethics didn’t belong only in research.
Within a few months, after a series of conversations with Dorsey and other top executives, Font had secured a more powerful, operationalized place for the once-small team, along with budget for increased headcount and a new director. She eventually persuaded Dorsey and Twitter’s board to make Responsible ML one of Twitter’s top priorities for 2021, which allowed META’s work to flow into Twitter’s products.

“I wanted to make sure that very important research was impacting the product and being scaled. This was a very strategic next step for META to take us to the next level,” Font said. “We had strategic discussions with Twitter staff, including Jack, and ultimately with the board of directors. It was a very intense and quick process.”

A year later, Twitter’s commitment to Font’s team has won over the most skeptical people in tech – the ethics research community itself. Rumman Chowdhury, known and respected by fellow researchers for her commitment to algorithmic auditing, announced she was leaving her new startup to become Twitter’s META manager.
Kristian Lum, a professor at the University of Pennsylvania known for her work creating machine learning models that could reshape criminal justice, will join Twitter at the end of June as its new director of research. Finally, Sarah Roberts, famed for her criticism of tech companies and co-director of UCLA’s Center for Critical Internet Inquiry, will become a consultant to the META team this summer to determine what Twitter users really want from algorithmic transparency.
If this team looks different, it’s because all of its leaders are women, and four of them have PhDs. (Twitter has been on a massive hiring spree, and not just for META, and the result has proven that there is, in fact, no shortage of top talent with a wide range of backgrounds in technology.)

These hires are a major coup for a social media platform desperate to escape the waves of vitriol and criticism surrounding Google’s and Facebook’s work on algorithms, machine learning and artificial intelligence. While Google was pushing out prominent AI ethics researchers Timnit Gebru and Margaret Mitchell, and Facebook was trying, unsuccessfully, to persuade politicians and researchers that it had no power to control the way its algorithms amplified misinformation, Twitter was giving Font and Jutta Williams – the product manager charged with helping operationalize META’s work – the resources and leeway to hire a team of people who could actually act on Twitter’s promise to listen to its researchers.

Font’s “roadshow” took place before the very public firings of Gebru and Mitchell – Chowdhury said she would join Twitter the same week Google ousted Mitchell – but the explosion of attention on algorithms in 2020 nonetheless helped persuade Dorsey and his board that ethical algorithms are worth spending money on.

Over the past year, the amplification of former President Donald Trump’s social media posts by Facebook’s engagement algorithms sparked widespread outrage on the left; Facebook’s decision to very temporarily adjust those algorithms in response drew even more intense rebuke from the right. The spread of false information about the coronavirus followed a similar trajectory, while the national conversation about criminal justice and race-based policing awakened the general public to the biases inherent in algorithms. All of this new awareness reached a breaking point with Google’s treatment of Gebru.
Her forced exit made the world pay attention to ethical AI.