A
Fourth Industrial Revolution is arising that will pose tough ethical
questions with few simple, black-and-white answers. Smaller, more
powerful and cheaper sensors; cognitive computing advancements in
artificial intelligence, robotics, predictive analytics and machine
learning; nano-, neuro- and biotechnology; the Internet of Things; 3D
printing; and much more are already demanding real answers, and fast.
And this will only get harder and more complex when we embed these new technologies into our bodies and brains to enhance our physical and cognitive functioning.
Take
the choice society will soon have to make about autonomous cars as an
example. If a crash cannot be avoided, should a car be programmed to
minimize bystander casualties even if it harms the car’s occupants, or
should the car protect its occupants under any circumstances?
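To make that trade-off concrete, here is a minimal, purely illustrative sketch of how the two programming choices could be written as different objective functions. It is not drawn from any real autonomous-vehicle system; the CrashOption scenario model, the casualty estimates and the two policy functions are all hypothetical.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical model of an unavoidable-crash scenario: each option the car
# could take carries an expected number of occupant and bystander casualties.
@dataclass
class CrashOption:
    name: str
    occupant_casualties: float
    bystander_casualties: float

def minimize_total_casualties(options: List[CrashOption]) -> CrashOption:
    """Utilitarian policy: pick the option with the fewest casualties overall,
    even if that means sacrificing the car's occupants."""
    return min(options, key=lambda o: o.occupant_casualties + o.bystander_casualties)

def protect_occupants_first(options: List[CrashOption]) -> CrashOption:
    """Self-protective policy: pick the option safest for the occupants,
    breaking ties by bystander harm."""
    return min(options, key=lambda o: (o.occupant_casualties, o.bystander_casualties))

if __name__ == "__main__":
    scenario = [
        CrashOption("swerve into barrier", occupant_casualties=1, bystander_casualties=0),
        CrashOption("stay in lane", occupant_casualties=0, bystander_casualties=2),
    ]
    print(minimize_total_casualties(scenario).name)  # -> swerve into barrier
    print(protect_occupants_first(scenario).name)    # -> stay in lane
```

Both policies are trivially easy to write down; the hard part is deciding which one society actually wants, and, as the research below suggests, the public's answer is not consistent.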
Research
demonstrates that the public is conflicted. Consumers would prefer to
minimize the overall number of casualties in a crash, yet they are
unwilling to purchase a self-driving car if it is not self-protective.
Of course, the ideal outcome is for companies to develop algorithms that
avoid such no-win crashes entirely, but that will not always be possible.
What is clear, however, is that such ethical quandaries must be
resolved before any consumer hands over the keys to an opaque, black-box
algorithm.
New technologies are unlikely to achieve widespread adoption if consumers are not confident in the ethics that underlie them.
The challenge is that identifying realistic solutions requires the
input and expertise of a whole variety of stakeholders with differing
interests: leaders of technology companies who are trying to innovate
while turning a profit; regulators in varying jurisdictions who must
form policies to protect the public; ethicists who evaluate the
unintended risks and benefits; public health
researchers who are looking out for the public’s health; and many
others.
With so many different stakeholders involved, how do we ensure a governance model that will make technology work for society?
What is needed is strong, anticipatory guidance from those who work at the intersection of the technology, health and ethics worlds to determine how we develop and deploy technologies
that deliver the greatest societal benefits. This requires not a
go-it-alone, country-level approach (as espoused by
President-elect Donald Trump), but an inter-sectoral and
inter-governmental one. The World Economic Forum's recently announced Center for the Fourth Industrial Revolution may be one venue to start these conversations.
Ultimately, evaluating the net effect of new technologies on individuals and society is needed to identify appropriate rules and boundaries. Mark Zuckerberg might consider
convening a public discussion and debate among leaders from different sectors and
nations to establish Facebook's real role in delivering information. No
matter how we view artificial intelligence technologies, we know they carry consequences: some good, some bad, but none neutral.