In both cases there seems to be an inverse correlation between the intensity of people’s convictions about AI and their actual knowledge of the technology. The experts seem calmly sanguine, while the boosters seem blissfully unaware that the artificial “intelligence” they extol is actually a relatively mundane combination of machine learning (ML) plus big data.

ML uses statistical techniques to give computers the ability to “learn” – ie use data to progressively improve performance on a specific task, without being explicitly programmed. A machine-learning system is a bundle of algorithms that take in torrents of data at one end and spit out inferences, correlations, recommendations and possibly even decisions at the other end. And the technology is already ubiquitous: virtually every interaction we have with Google, Amazon, Facebook, Netflix, Spotify et al is mediated by machine-learning systems. It’s even got to the point where one prominent AI guru, Andrew Ng, likens ML to electricity.

To many corporate executives, a machine that can learn more about their customers than they ever knew seems magical. Think, for example, of the moment Walmart discovered that among the things their US customers stocked up on before a hurricane warning – apart from the usual stuff – were beer and strawberry Pop-Tarts! Inevitably, corporate enthusiasm for the magical technology soon spread beyond supermarket stock-controllers to public authorities. Machine learning rapidly found its way into traffic forecasting, “predictive” policing (in which ML highlights areas where crime is “more likely”), decisions about prisoner parole, and so on. Among the rationales for this feeding frenzy are increased efficiency, better policing, more “objective” decision-making and, of course, providing more responsive public services.

This “mission creep” has not gone unnoticed. Critics have pointed out that the old computing adage “garbage in, garbage out” also applies to ML. If the data from which a machine “learns” is biased, then the outputs will reflect those biases. And this could become generalised: we may have created a technology that – however good it is at recommending films you might like – may actually morph into a powerful amplifier of social, economic and cultural inequalities.

In all of this sociopolitical criticism of ML, however, what has gone unchallenged is the idea that the technology itself is technically sound – in other words that any problematic outcomes it produces are, ultimately, down to flaws in the input data. But now it turns out that this comforting assumption may also be questionable. At the most recent Nips (Neural Information Processing Systems) conference – the huge annual gathering of ML experts – Ali Rahimi, one of the field’s acknowledged stars, lobbed an intellectual grenade into the audience. In a remarkable lecture he likened ML to medieval alchemy. Both fields worked to a certain extent – alchemists discovered metallurgy and glass-making; ML researchers have built machines that can beat human Go champions and identify objects from pictures. But just as alchemy lacked a scientific basis, so, argued Rahimi, does ML.
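The “garbage in, garbage out” point can be made concrete with a minimal sketch. Everything below is fabricated for illustration – the hiring scenario, the numbers and the “model” (the simplest possible learner: predict the most common historical outcome for each input) are assumptions, not anyone’s real system. The point it demonstrates is that no rule about groups is ever explicitly programmed, yet the bias in the records comes straight out the other end.

```python
# Toy illustration of "garbage in, garbage out" in machine learning.
# Hypothetical data; the "learner" just tallies historical outcomes.
from collections import Counter, defaultdict

# Fabricated historical hiring records: ((group, qualified), hired).
# The records are biased: qualified "B" candidates were rarely hired.
history = (
    [(("A", True), True)] * 90 + [(("A", True), False)] * 10 +
    [(("B", True), True)] * 20 + [(("B", True), False)] * 80 +
    [(("A", False), False)] * 50 +
    [(("B", False), False)] * 50
)

def train(records):
    """'Learn' by tallying outcomes per feature combination --
    no explicit rule about groups is ever written down."""
    tallies = defaultdict(Counter)
    for features, outcome in records:
        tallies[features][outcome] += 1
    # The model: the most frequent historical outcome for each input.
    return {f: c.most_common(1)[0][0] for f, c in tallies.items()}

model = train(history)

# Two equally qualified candidates, two different predictions --
# the model faithfully reproduces the bias in its training data.
print(model[("A", True)])  # True  -- hired
print(model[("B", True)])  # False -- rejected
```

Swapping the frequency table for a neural network changes the sophistication of the learner, not the logic of the problem: whatever regularities the data contains, biased or not, are what get learned.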