
The visions that launched AI

This week on The Next Wave I’m going to re-publish versions of the pieces on Artificial Intelligence that I have shared elsewhere over the past few months. Looking back, they seem reasonably coherent.





(Marvin Minsky, Claude Shannon and others at the Dartmouth College Summer Project, 1956. Photograph taken by Gloria Minsky)



In an essay in a recent edition of the Australian Griffith Review, the technology researcher Genevieve Bell traced the history of AI back to 1956, and the group of white male researchers who imagined it.





Machines of loving grace





It’s a rich story, from the Macy Conferences on cybernetics (curated partly by Margaret Mead and Gregory Bateson), to the Dartmouth College Summer Project on Artificial Intelligence, where this photo was taken, to the Cybernetic Serendipity exhibition at London’s Institute of Contemporary Arts. And from there to some of the moments of liberatory potential that seemed to appear, if fleetingly—caught best, perhaps, in poet Richard Brautigan’s line about a future in which we would be “watched over by machines of loving grace.”





Well, that didn’t happen, as Bell points out:





The cybernetic meadows and forests of Brautigan’s imagination have not been realised, and the machines that watch over us now seem to lack loving grace. The AI that was promised in 1956 has not emerged, and technological revolutions have not led us to transcendence or a whole-Earth point of view. According to a 2018 news feature by Nicola Jones for Nature, the world’s data centres consumed in excess of 200 terawatt hours of electricity each year – this is more than the consumption of some whole countries and represents 1 per cent of global electricity demand.





Different kinds of questions





Silicon Valley has been a myth-maker. It has told certain stories about the future while simply erasing others. In the current ecological crisis, we need to tell different stories about the future:





One that focuses not just on the technologies, but on the systems in which these technologies will reside… Ultimately, we would need to think a little differently, ask different kinds of questions, bring as many diverse and divergent kinds of people along on the journey and look holistically and critically at the many propositions that computing in particular – and advanced technologies in general – present.





Bell proposes the Brewarrina fish traps as a metaphor for these stories—an indigenous construction that is thousands of years old. It combines technical, ecological and cultural knowledge “in a system that was designed to endure”.





Shallow learning





From there it’s worth heading over to the LRB blog to read Paul Taylor’s piece on how AI has fared during the pandemic, and beyond. Even when it uses so-called ‘deep’ machine learning, the results are pretty shallow:





In 2019, an algorithm used to allocate healthcare resources in the US was shown to be less likely to recommend preventative measures if the patient is black, because the algorithm is optimised to save costs and less money is spent treating black patients. Around the same time, Timnit Gebru, a leader of Google’s ‘ethical AI’ team and one of the few black women in a prominent role in the industry, showed that commercially available face recognition algorithms are less effective when used by women, black people and, especially, black women, because they are underrepresented in the data the algorithms are trained on.





Existing power structures





AI, in other words, replicates existing power and social structures, and seems to have few mechanisms either to correct for this or to deal with the ethics of it.





Timnit Gebru, of course, isn’t working at Google anymore. She thinks she was fired; Google says she resigned: the company has equivocated on the circumstances of her departure. But no-one disputes that it was down to a research paper that pointed out some of these home truths about AI rather than putting a happy smiley face on it. MIT Technology Review asked her about her departure; the interview is jaw-dropping.





Own goal





The stench from Google on this issue has got worse since then. The co-head of ethics research, Margaret Mitchell, was also fired, apparently for trying to defend the reputation of colleagues and her research group. (That’s not the reason Google gave for firing her, of course.)





You can’t help but think this was a bit of an own goal by Google. If they’d just let Gebru and Mitchell publish the paper, it would have quickly vanished into the thickets of the AI research community. As it is, they’ve given the whole issue the prominence it deserves.






(H/t for the Genevieve Bell article to Sahar Hadidimoud).
