The Ivory Tower Can’t Keep Ignoring Tech
By CATHY O’NEIL NOV. 14, 2017
These days, big data, artificial intelligence and the tech platforms that put them to
work have huge influence and power. Algorithms choose
the information we see when we go online, the jobs we get, the colleges to
which we’re admitted and the credit cards and insurance we are issued. It goes without saying that when computers are making
decisions, a lot can go wrong. Our lawmakers desperately need this
explained to them in an unbiased way so they can appropriately regulate, and
tech companies need to be held accountable for their influence over all
elements of our lives.
But academics have been asleep at the wheel, leaving the responsibility for this
education to well-paid lobbyists and employees who’ve abandoned the academy.
That means our main source of information on the downside of bad technology —
often after something’s gone disastrously awry, such as when we learned that
fake news dominated our social media feeds before last year’s presidential election,
threatening our democracy — is the media. But this coverage often misses
everyday issues and tends to be far too credulous when it does exist.
Much of what should concern us is more nuanced and small scale — and much less
understood — than what we see in the headlines. Moreover, we shouldn’t have to
depend on journalism to do the tedious, serious work of understanding the
problems with algorithms any more than we depend on it to pursue the latest
questions in sociology or environmental science. We need academia to step up to
fill in the gaps in our collective understanding about the new role of
technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental
health disorders, sentencing algorithms that fail twice as often for black
defendants as for white defendants, statistically flawed public teacher
assessments or oppressive scheduling algorithms.
And we need research to ensure that the same mistakes aren’t made again and again.
It’s absolutely within the abilities of academic research to study such
examples and to push against the most obvious statistical, ethical or
constitutional failures and dedicate serious intellectual energy to finding
solutions. And whereas professional technologists working at private companies
are not in a position to critique their own work, academics theoretically enjoy
much more freedom of inquiry.
To be fair, there are real obstacles. Academics largely don’t have access to the mostly private, sensitive personal data that tech companies collect; indeed, even when they study data-driven subjects, they typically work with data and methods meant to predict abstract outcomes like disease or economic trends rather than individual human behavior, so they can be naïve about the effects such modeling choices have.
The academics who do get close to the big companies in technique are quickly plucked out of academia to work for them, with much higher salaries to boot. That means professors working in computer science and robotics
departments — or law schools — often find themselves in situations in which
positing any skeptical message about technology could present a professional
conflict of interest.
The many data science institutes around the country, which have created lucrative
master’s programs to train data scientists, are more focused on trying to get a
piece of the big data pie — in the form of collaborations and jobs for their
graduates — than they are on asking how the pie should be made. We won’t find
any help there. Indeed, while West Coast schools like Stanford and the
University of California, Berkeley, are renowned for creating factories that
churn out the future engineers and data scientists of Silicon Valley, there are
very few coveted permanent, tenure-track jobs in the country devoted to
algorithmic accountability.
A final hurdle: There is essentially no distinct field of academic study that
takes seriously the responsibility of understanding and critiquing the role of
technology — and specifically, the algorithms that are responsible for so many
decisions — in our lives. That’s not surprising. Which academic department is
going to give up a valuable tenure line to devote to this, given how much
academic departments fight over resources already?
There’s one solution for the short term. We urgently need an academic institute focused
on algorithmic accountability. First, it should provide a comprehensive ethical
training for future engineers and data scientists at the undergraduate and
graduate levels, with case studies taken from real-world algorithms that are
separating winners from losers.
Lecturers from humanities, social sciences and philosophy departments should weigh in.
Second, this academic institute should offer a series of workshops, conferences
and clinics focused on the intersection of different industries with the world
of A.I. and algorithms. These should include experts in
the content areas, lawyers, policymakers, ethicists, journalists and data
scientists, and they should be tasked with poking holes in our current
regulatory framework — and imagining a more relevant one. Third, the
institute should convene a committee charged with reimagining the standards and
ethics of human experimentation in the age of big data, in ways that can be
adopted by the tech industry.
There’s a lot at stake when it comes to the growing role of algorithms in our lives.
The good news is that a lot could be explained and clarified by professional
and uncompromised thinkers who are protected within the walls of academia with
freedom of academic inquiry and expression. If only they would scrutinize the
big tech firms rather than stand by waiting to be hired.

Cathy O’Neil is a data scientist and author of the book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.”