In the late 18th century, an automaton chess master known as the ‘Mechanical Turk’ toured Europe and the US. Designed in 1770 by the inventor Wolfgang von Kempelen, the machine appeared able to defeat any human player.
It later turned out the Turk was in fact a mechanical illusion. A puppet dressed in oriental garb, it concealed under its fez and robes a human chess master. The American poet Edgar Allan Poe was so convinced of the Turk’s fraudulence that he wrote an essay to draw attention to the hoax.
A predetermined mechanism beating a human mind at chess was impossible, Poe claimed, for “no one move in chess necessarily follows upon any one other. From no particular disposition of the men at one period of a game can we predicate their disposition at a different period.”
Today, artificial intelligence allows computers to make just such predictions, so it might be fair to assume that naive illusions of this kind are behind us. After all, computers now exist that can beat any human at chess.
But a similar illusion characterises the artificial intelligence industry. On Amazon Mechanical Turk, an online platform that Amazon has operated since 2005, human activity is made to take on the appearance of mechanical activity. The premise is simple: the site hosts contractors, often large tech companies, which outsource short data tasks to a crowd of workers.
The workers fulfil the tasks that machine learning algorithms are not yet able to complete. Because the work is supposed to appear as if artificial intelligence is doing it, the former Amazon CEO, Jeff Bezos, referred to the platform as “artificial artificial intelligence”. The contractors tend to interact only with the platform, which hosts the tasks and sources the workers. Having little to no direct contact with the workers, contractors experience the process as if it were entirely fulfilled by computers.
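From the contractor’s side, commissioning this human labour can look like calling any other cloud service. The following is a minimal sketch, assuming the boto3 SDK and Amazon’s requester sandbox; the reward, durations and question file are placeholders for illustration, not the platform’s recommended settings.

```python
# A sketch of posting a task (a 'HIT') to Amazon Mechanical Turk via the
# boto3 SDK, pointed at the requester sandbox so no real money changes hands.
# AWS credentials are assumed to be configured; question_form.xml is a
# hypothetical file describing the task form shown to workers.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# To the contractor, this is just an API call; somewhere, a human does the work.
response = mturk.create_hit(
    Title="Tag the objects in a street photo",
    Description="Label pedestrians, traffic lights and cars in one image.",
    Reward="0.05",                    # payment in US dollars per assignment
    MaxAssignments=3,                 # how many workers complete the task
    LifetimeInSeconds=3600,           # how long the task stays available
    AssignmentDurationInSeconds=300,  # time a worker has to finish it
    Question=open("question_form.xml").read(),
)
print(response["HIT"]["HITId"])
```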
Machine learning, the most common approach to building AI systems, relies on large data sets to train models that are then used to make predictions. During training, algorithms analyse the data to extract patterns; the resulting model applies those patterns to new inputs, generating predictions about data it has never seen.
The richer the data these technologies are exposed to, the more comprehensive their training and the more sophisticated their capacities become, enhancing their performance in tasks as varied as image categorisation, text classification and speech recognition. In many areas, such developments have endowed machines with capacities that frequently match or surpass those of humans. AI diagnosticians are already at least as proficient as doctors at identifying certain types of cancer.
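In code, this train-then-predict loop can be remarkably compact. Below is a minimal sketch using scikit-learn; the phrases and labels are an invented toy data set, standing in for the vast human-labelled corpora that real systems require.

```python
# A toy illustration of supervised machine learning with scikit-learn.
# The data set is invented for illustration; real training sets contain
# thousands or millions of human-labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Human-labelled training data: each snippet of text comes with a category.
texts = [
    "red light at the junction",
    "a pedestrian waits to cross",
    "a parked car blocks the lane",
    "the traffic lights turn green",
]
labels = ["signal", "pedestrian", "vehicle", "signal"]

# Train a simple text classifier on the labelled examples...
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# ...then ask it to predict a label for text it has never seen.
print(model.predict(["a pedestrian stands by the lights"]))
```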
But to find patterns and make predictions, the algorithm needs the input data to be labelled or categorised. An algorithm for an autonomous car, for instance, must be exposed to detailed, annotated images of urban areas before it can safely navigate a vehicle around a city centre. Artificial intelligence is not yet capable of annotating these images itself, so instead humans have to label them. For a task that supports the training of autonomous vehicles, this might involve labelling an image of a junction with the tags ‘pedestrian’, ‘traffic lights’ and ‘car’.
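A single completed task of this kind might amount to no more than a small structured record. The sketch below is hypothetical; the schema and pixel coordinates are invented for illustration, not any real platform’s submission format.

```python
# A hypothetical annotation a microworker might submit for one street-scene
# image; field names and coordinates are invented for illustration.
annotation = {
    "image": "junction_0042.jpg",
    "tags": ["pedestrian", "traffic lights", "car"],
    "boxes": [  # bounding boxes as (x, y, width, height) in pixels
        {"label": "pedestrian", "bbox": (312, 140, 48, 110)},
        {"label": "traffic lights", "bbox": (520, 30, 25, 70)},
        {"label": "car", "bbox": (80, 200, 220, 130)},
    ],
}
```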
This kind of work, often known as ‘microwork’ due to the brevity of the tasks, is becoming increasingly popular. Growing numbers of sites such as Clickworker, Appen and Playment now host large crowds of workers who undertake these short data tasks. The tasks run from around 30 seconds to 30 minutes and often pay as little as a few cents: one study found that the average wage of a worker on Mechanical Turk is less than $2 an hour, with only 4 per cent of workers earning over $7.25 per hour, the US federal minimum wage.
The tasks can be very repetitive and are often so opaque that it is impossible to relate them to a larger project. A 2020 academic study found that contractors often offer very little detailed information about their tasks and the purposes they serve, meaning workers have little idea of precisely what they are working on. This is of particular concern when workers might be supporting a technology such as facial recognition software, which has serious ethical implications.
The work is also highly insecure. Workers are usually categorised as ‘independent contractors’, so they do not enjoy the rights and benefits afforded to full-time employees of the companies that contract them. With no guarantee of steady work, most work for several contractors over the course of a single day and must continually search for new tasks. A significant portion of the day is given over to finding work, rather than actually doing work that pays.
The majority of this work is currently done in countries in the Global South such as India, Kenya and Venezuela. But some studies suggest that this kind of digital work is also on the rise in countries such as the UK.
Work Without the Worker: Labour in the Age of Platform Capitalism by Phil Jones is out now (£10.99, Verso Books).