Artificial intelligence, more than most other fields of research, is dominated by corporations. Private research labs at Google, Microsoft, and Facebook churn out more top-flight research than most universities. The companies bankroll the work of academic researchers. Increasingly, they’re financially sponsoring individual PhD students and cutting deals with universities to share custody of prized faculty.
But over the course of one topsy-turvy week, Google gave us a clear case study in the perils of turning so much AI research over to Big Tech firms.
No company is more dominant in AI than Alphabet, parent company of Google and its AI-focused sister lab DeepMind. Combined, the companies’ labs accounted for twice as much of the research published at AI conferences like NeurIPS as any other company or university. Their wide-ranging research has led them to defeat the world’s best chess players, both human and machine, translate more than 100 languages, and probe some of the most minute mysteries of our biology. And in the first week of December, the conglomerate illustrated how the river of corporate money flowing into AI in all those fields can come at the cost of lower standards for transparency and accountability.
It started on Nov. 30, with a press release announcing a breakthrough in DeepMind’s protein-folding prediction model AlphaFold. The model is designed to predict what shape proteins will take based on their chemical composition—a torturous problem that has bedeviled researchers for decades and would, if solved, have major implications for drug discovery.
DeepMind’s press release trumpeted “a solution to a 50-year-old grand challenge in biology” based on its breakthrough performance on a biennial competition dubbed the Critical Assessment of Protein Structure Prediction (CASP). But the company drew criticism from academics because it made its claim without publishing its results in a peer-reviewed paper.
“Frankly the hype serves no one,” tweeted Michael Thompson, a structural biology researcher at the University of California, Merced. “[U]ntil DeepMind shares their code, nobody in the field cares and it’s just them patting themselves on the back.”
DeepMind vowed that a formal paper would be forthcoming, but left the timeline fuzzy. “A lot of times we’ll see stuff get into the press that hasn’t been peer reviewed,” said Sarah Myers West, a postdoctoral researcher at NYU’s ethics-focused AI Now Institute. “Then when it goes through the peer-review process, you might have issues or inaccuracies that get raised and then get corrected.”
A few days later, on Dec. 3, renowned AI ethicist Timnit Gebru—who co-authored foundational research that demonstrated biases baked into commercial facial recognition algorithms—announced that Google had forced her out of her position as co-leader of the company’s Ethical Artificial Intelligence team. The company cited an internal email Gebru sent lamenting the company’s lack of progress on diversity, equity, and inclusion initiatives, and her decision to submit a paper for publication without company authorization.
Gebru’s paper raised concerns about the development of large language models like the ones that Google’s search business depends on. According to a review of the unpublished paper from the MIT Technology Review, Gebru and her co-authors pointed to the carbon emissions generated in developing the massive models, their broad potential for bias, and the danger that they could be used for nefarious purposes.
Gebru’s firing sits uneasily with the reputation Google had cultivated as a patron of high-quality AI ethics research. Part of that glow came from employing critical scholars like Gebru and apparently giving them free rein to publish as they pleased. (The academic literature shows, however, that private companies often pay researchers salaries high enough that they will accept constraints on their ability to publish.) But the company’s ability to veto unflattering paper submissions, and to fire researchers who step out of line, raises questions about accountability in a field awash with corporate cash.
The trouble is, there aren’t many alternative venues in AI that can fund accountability-focused research. Because of the massive computational costs associated with building out AI models, much of the major work in the field gets concentrated in the hands of a small number of companies and universities that can afford it.
“If in order to make sense of the harms we are overly reliant on the same companies that are producing the harms, that’s not a good place to find ourselves in,” West said. “If it’s up to them, they get to make choices about how to set the agenda. … They’re still exerting power and influence in the field in a way that may not be in the interest of the broader public.”
"danger" - Google News
December 12, 2020 at 04:00PM
https://ift.tt/347Hr4Z
The dangers of letting Google lead AI research - Quartz
"danger" - Google News
https://ift.tt/3bVUlF0
https://ift.tt/3f9EULr
No comments:
Post a Comment