Quote:
Originally Posted by lisarea
How Crowdworkers Became the Ghosts in the Digital Machine | The Nation
I did something like this setup with the old Google Answers. I figured I already did freelance work a lot, but I couldn't take really big jobs because of my regular job. So I figured I could just do little research projects instead. It was better than this, but it was still ridiculous in a lot of the same ways. I only did a few because most of them were so low paying or ridiculous that it wasn't worth it, but there were people there who'd bust their asses for some lousy $2 question and then still get their answer rejected, and there were people who were doing it because they really needed the money.
That was some kind of bullshit, but these things are a worse bullshit.
So I'm actually working on a paper that uses Mechanical Turk workers to judge metaphors in message board posts... We posted a sample batch, and it turned out we weren't paying minimum wage (more like $6/hr), but we didn't know how long the task would take the average Turker before we posted it. It was only a few dozen instances, though, so we'll adjust the payment based on what we learned. I know in one of the classes here, the professor tells the students in the crowdsourcing assignment to set the payment to match at least minimum wage.
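To make that adjustment concrete, here's a quick sketch of the arithmetic involved (the task times and prices are invented for illustration; only the $7.25/hr federal minimum is a real figure):

```python
# Hypothetical sketch: back out a fair per-task price from a pilot batch.
# The numbers below are made up; only the arithmetic is the point.

def fair_task_price(observed_minutes_per_task, target_hourly_rate=7.25):
    """Price a task so the average worker earns at least the target rate."""
    return target_hourly_rate * observed_minutes_per_task / 60.0

# Pilot batch: tasks averaged 3 minutes each at 30 cents per task.
pilot_rate = 0.30 / (3 / 60)   # effective rate: $6.00/hr, below minimum
new_price = fair_task_price(3) # ~ $0.36 per task to reach $7.25/hr
print(pilot_rate, new_price)
```

The point is just that you can't set the price until a pilot tells you the average completion time, which is exactly the trap we fell into.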
Crowdsourcing is exciting because it makes feasible many tasks that would be too time-consuming for a handful of people, where it would be an impractical pain to hire more permanent workers for one task only to have no need for them a couple of weeks later. It also seems that some of these tasks would be mind-numbingly tedious for someone to do full-time, but are quite bearable or even interesting in much shorter bursts.
I will also be doing some similar annotation personally as a comparison point, and it's really not bad at all... until you've been doing it for 30 minutes or more. Ten minutes is not a problem at all.
It's also great for collecting, say, linguistic data that in the past you might have had to go out into the field for hours to get, or for which you'd have to pay people primarily for the trouble of coming to you rather than for the task itself (of course, a lot of linguistic data is now freely available when publicly posted anyway, but some types of data are still elusive). When people are working from home on their computers, the savings don't come just from paying people less; the cost to both them and you is genuinely lower, since travel time and the effort of finding them are reduced.
And naturally the cost of monopolizing someone's time ought to be higher than asking only for a minute or two here and there when they want to.
So there are definitely reasons to want to use crowdsourcing that aren't just based around exploiting a cheap, global labor pool.
But I myself certainly wouldn't work for $3/hour or even $6/hour. And Turkers are clearly performing a core business activity for the crowdsourcing companies, so treating them as contractors to whom the crowdsourcers have no responsibility can't be right.
The problem I see is that some of the solutions to these problems may remove the benefits of crowdsourcing or render them unusable for requesters, or wouldn't work very well, etc.
So, for example... Overtime rules would simply mean that they'd cut you off once you've worked 40 hours. There's no way that wouldn't happen; the labor pool is too large for them not to do it. Under what conditions would it even be worth it for them to pay overtime? A glut of requests from requesters who aren't expecting them to be fulfilled immediately anyway? This would push request prices up a bit, though, since the crowd would be a bit smaller at any given time. I suppose they could allow requesters to specify that they're willing to accept overtime work (with Amazon taking a higher fee for any such work).
At the same time, some features of crowdsourcing put it at a disadvantage for employers. These currently aren't much of a problem, since the balance is so skewed toward employers anyway, but the fact is that Turkers are unsupervised. Crowdsourcers could guarantee a minimum wage and still allow requesters to pay piecemeal, but then requesters would instead pressure Turkers heavily to work more quickly and cut off the ones who don't. And if requesters instead had to pay by the hour, how would I ensure that workers aren't simply opening tasks and letting them run to collect extra pay?

The shitty place I used to work tried to use Turkers for email harvesting. They, of course, offered a shitty rate: only 5¢ a HIT for a task that I knew personally could take a couple of minutes or more, even if it was usually much faster. So inevitably, we got a bunch of shitty, useless data and wasted hundreds of dollars, because we didn't follow up quickly enough to reject work that was clearly just someone pasting "none@none.com" over and over, and we ended up having to do it ourselves anyway. But even if I were offering a decent rate, how would I ensure I wasn't getting similar results? If we had offered 25¢ or 50¢ per HIT, given our low accountability, pasting garbage would've been even more attractive. So a reputation metric that punishes such behavior harshly seems necessary.
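For what it's worth, even a crude automated check would have caught our "none@none.com" problem before payout. Here's a minimal sketch (not anything Amazon actually provides; the worker IDs, answers, and threshold are all invented) of flagging workers whose submissions are mostly junk or mostly one repeated string:

```python
# Hypothetical junk-submission check for piecework answers.
# Flags workers whose answers are dominated by known junk values
# or by a single repeated string.
from collections import Counter

JUNK = {"none@none.com", "n/a", ""}  # invented junk list

def flag_suspect_workers(submissions, max_repeat_frac=0.5):
    """submissions: list of (worker_id, answer) pairs.
    Returns the set of worker IDs whose answers look like garbage."""
    by_worker = {}
    for worker, answer in submissions:
        by_worker.setdefault(worker, []).append(answer.strip().lower())
    flagged = set()
    for worker, answers in by_worker.items():
        junk_frac = sum(a in JUNK for a in answers) / len(answers)
        # Fraction of answers taken up by the single most common string.
        top_frac = Counter(answers).most_common(1)[0][1] / len(answers)
        if junk_frac > max_repeat_frac or top_frac > max_repeat_frac:
            flagged.add(worker)
    return flagged

subs = [("w1", "none@none.com")] * 8 + \
       [("w2", "alice@example.com"), ("w2", "bob@example.com")]
print(flag_suspect_workers(subs))  # {'w1'}
```

This is obviously gameable, but it's the kind of thing a reputation metric would have to be built on: cheap statistical checks that let you reject (and downrank) the blatant cases without reviewing every HIT by hand.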
But anyway, to give an example of how one could handle some of these problems without trying to force it to fit into normal employment scenarios...
For larger requesters, the best approach seems to me to be statistical: at the end of the month, calculate the average hourly rate they paid, and have Amazon (or whoever) charge them an additional fee, disbursed to the workers who did their tasks, that is disproportionate to the amount they underpaid. Similarly, auditing every rejection would require a whole other Mechanical Turk in itself, but with large requesters you can simply take a random sample of their rejections to determine whether they're trying to screw over workers, and assess them another fee (this one would probably need to be even more disproportionate). This would allow flexibility in task pricing, maintain incentives to perform tasks quickly and correctly, and simultaneously curb exploitative behavior.
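To show what I mean by "disproportionate," here's a sketch of the month-end reconciliation with a made-up superlinear penalty curve (the exponent, the dollar figures, and the function itself are all inventions for illustration; $7.25/hr is the only real number):

```python
# Hypothetical month-end underpayment fee, superlinear in the shortfall:
# paying $3/hr should cost a requester more than twice what paying
# halfway to minimum wage would.

MIN_WAGE = 7.25  # federal minimum, $/hr

def underpayment_fee(total_paid, total_worker_hours, exponent=1.5):
    """Fee (disbursed to workers) owed by a requester whose average
    effective rate fell below minimum wage. exponent > 1 makes the
    penalty grow faster than the shortfall itself."""
    actual_rate = total_paid / total_worker_hours
    shortfall = max(0.0, MIN_WAGE - actual_rate)
    return total_worker_hours * shortfall ** exponent

# A requester who paid $600 for 200 worker-hours ($3/hr effective)
# owes a fee; one who averaged $7.50/hr owes nothing.
print(underpayment_fee(600, 200))
print(underpayment_fee(1500, 200))  # 0.0
```

The same shape would work for the rejection audit: sample the rejections, estimate the fraction that were bogus, and feed that fraction through an even steeper curve.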
I'm not sure how best to handle smaller requesters (like me and my research partners), but at the very least, larger requesters would clearly be more attractive to do tasks for in such a situation, requiring smaller requesters to increase compensation to compete.
But I don't know whether that kind of scheme could fit into existing employment laws... It seems to me that if we think that crowdsourcing is a worthwhile type of work, the regulations need to be written for it specifically.
The worry is that this means they won't get written unless they're favorable to the crowdsourcing companies and their clients. I suspect that if there's a major court case against them, it will either kill off the field, or Amazon and the others will lobby until the decision is reversed legislatively.