Why we need an Australian vision for AI ethics
This post was originally published on LinkedIn on 7 April 2019.
Just as businesses and governments around the world are calling for frameworks to help guide the development of artificial intelligence, Google's ahead-of-the-curve foray into AI ethics has imploded, with the collapse of its controversial ethics board after just a week. As Vox reported today, the board was convened to guide the "responsible development of AI" at Google, and to help allay concerns about Google's AI program, both within the company itself and in the wider community.

There are, as yet, no ready answers to questions such as: how AI can enable authoritarian states, and what, if anything, ought to be done about it; the impacts of AI algorithms that produce disparate outcomes, with unknown distributions of benefit and loss; whether it is ethical to work on military applications of AI; and more. The board was intended to help answer these questions, but it was beset by controversy from the start, amid allegations that it was stacked with members sympathetic to the political priorities of the Trump administration. Thousands of Google employees signed a petition calling for the removal of board member Kay Coles James, President of the conservative think tank the Heritage Foundation, following media reports of her controversial comments about trans people and the Heritage Foundation's ultra-conservative positions on immigration and climate change. The appointment of Dyan Gibbens, CEO of drone company Trumbull Unmanned, also reignited existing tensions among staff over the use of the company's AI for military applications.
The Google melee is a very good illustration of what ethics is, and what ethics frameworks offer as a basis for guiding action. In a post-Christian world, appeals to "ethics" are often made as though questions of principle can be figured out, uncontroversially, by the application of a bit of careful thought. Get a bunch of smart, experienced people into a room, ask them to talk their way through a problem, and the "right" thing to do will emerge naturally, through a logical process. That being the case, "ethics" - and the codes of practice, ethical guidelines and the like that it produces - is thought not to be tainted with the kind of value judgements that gave "morality" a bad name. Ethics guidelines = right thing to do. No problem.
But of course that's not really the case. David Hume wrote that "reason is, and ought only to be, the slave of the passions" - by which he meant that we can't think our way to the "right" answer unless we have an idea of what "right" might look like. Or perhaps feel like, as Hume suggested. An ordinary human instinct for "right" is both necessary and imperfect for working out what to do - and conclusions are liable to differ. This is why, after thousands of years of trying, we are yet to come up with a universally agreed description of what a good life entails. Goods conflict, rights produce wrongs, and the balance to be preferred on any occasion is often deeply personal, ideological and political. Should AI never be used for killing? Are we entitled to defend our way of life? Will wealth creation produce more happiness than reducing environmental risk? Is happiness the most important aim anyway? If not happiness, then what? The answers depend on who you ask - as staff at Google, and for that matter the Trump administration, knew.
CSIRO’s Data61 has now released a discussion paper entitled Artificial Intelligence: Australia’s Ethics Framework, to stimulate a national conversation on how we develop and use AI in this country. This is the first attempt at a broad community consultation on AI ethics in Australia, at a time when organisations around the world are clamouring for guidance. When global organisations import values along with their services, we ought to know what we think about that, and how we should respond.
Communities must be satisfied that regulations will work for them, producing and reinforcing the values and standards that define them. Just as Australians have a unique identity in the world, so too do we have a unique set of values: an entrepreneurial spirit, belief in the idea of a "fair go", and protection for the vulnerable. The balance we assign to risk reduction, against promoting new opportunities for a better life as we define it, will have a uniquely Australian flavour. It's time for Government and the community to get on board with developing a national ethical framework for AI as a matter of urgency. The tech revolution has begun, and we ought to shape it in the image of our own communities.