by Marjolaine Tremblay

@marjolainetrem4

Dolphins are pretty smart. They can do that cute clapping thing with their fins, they’re self-aware (if you think passing the mirror self-recognition test is an appropriate indication of self-awareness), and they’ve even demonstrated tool usage while searching for food. Oh... yeah… and they also fuck for fun. All things considered, they’re probably in the top ten when it comes to animal intelligence—rather impressive given that there are thousands of families in the animal kingdom.


But to say that dolphins are pretty smart is obviously a relative claim. Imagine how bizarre it would sound if any of the metrics I just listed were cited as a testament to the intelligence of a human. “Robert? Oh, yeah, Robert’s a smart guy. He can clap his hands, he can operate a hammer, and he knows more or less what he looks like.... Oh, yeah, and he’s really into doing it.” Nobody, upon hearing these proclivities, would be floored by Robert’s intellect.


And while we have a tendency to appreciate the human brain as undyingly superlative, our artificial inventions challenge the inimitability of human cognition. As our technological capacities increase, and as the problems of daily life are understood with greater clarity, offloading labour to silicon brains becomes too good a deal to pass up; the fleshy ones we’re endowed with have a terrible tendency to fuck up.


So we develop algorithms that make our lives easier by performing tasks more efficiently than we can. 


But just because these algorithms are more efficient than humans (by many orders of magnitude in most cases) does not mean that they are perfect. The most intractable problems of the human experience, i.e. those that arise from our selfish and tribalistic instincts, resist algorithmic modeling, and criticizing these algorithms on exactly those grounds has become something of a cottage industry for progressive journos.


Yesterday, Vice published a piece by Todd Feathers titled “Flawed Algorithms Are Grading Millions of Students’ Essays,” wherein automated essay-scoring engines are analysed and their shortcomings lamented as the latest vector of systemic oppression. Despite being well researched and competently written, Feathers’ piece largely ignores the fact that the biases we seek to tease out of algorithms are far more prevalent in ourselves.


Here’s the crux of his piece:


“Training essay-scoring engines on data sets of human-scored answers can ingrain existing bias in the algorithms. But the engines also focus heavily on metrics like sentence length, vocabulary, spelling, and subject-verb agreement—the parts of writing that English language learners and other groups are more likely to do differently. The systems are also unable to judge more nuanced aspects of writing, like creativity.”


Essentially, the argument is that automated grading algorithms are flawed because they are designed to gauge how technically proficient a student is rather than how subjectively creative or culturally attuned they are, and because training them on human-scored essays bakes that preference in ever deeper over time.
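To make that mechanism concrete, here’s a minimal sketch of what a surface-feature scorer of the kind the quote describes might look like. Everything in it is an illustrative assumption on my part (the features, the weights, the function names); no actual vendor’s engine is this simple.

```python
import re

def surface_features(essay: str) -> dict:
    """Toy versions of the shallow metrics the Vice piece lists:
    sentence length, vocabulary, raw length."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocabulary_size": len(set(words)),
        "word_count": len(words),
    }

# Hypothetical weights. In a real engine these would be fit to a corpus of
# human-scored essays -- which is exactly where the human graders' existing
# preferences get baked into the model.
WEIGHTS = {"avg_sentence_length": 0.15, "vocabulary_size": 0.02, "word_count": 0.01}

def score(essay: str) -> float:
    """Weighted sum of surface features: no semantics, no creativity."""
    feats = surface_features(essay)
    return sum(w * feats[name] for name, w in WEIGHTS.items())

print(round(score("Dolphins are pretty smart. They use tools."), 2))
```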


Uhhh… yeah. That’s how algorithms work, and it’s also how standardized testing is designed to work… Teachers are incredibly flawed and have their own subconscious biases, which are a lot harder to operationally account for than those of a coded algorithm. With algorithms, it’s possible to tease those biases out. With humans, it’s virtually impossible.
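And to gesture at what that teasing-out actually looks like: once the grader is a function rather than a person, auditing it takes a few lines of code. The scorer and the essays below are stand-ins of my own invention; the point is just that the probe is mechanical and repeatable.

```python
from statistics import mean
from typing import Callable, Iterable

def score_gap(scorer: Callable[[str], float],
              group_a: Iterable[str],
              group_b: Iterable[str]) -> float:
    """Mean machine-score difference between two groups of essays that
    human raters judged equivalent. A persistent gap is measurable bias;
    you can't run this experiment a thousand times on a tired teacher."""
    return mean(scorer(e) for e in group_a) - mean(scorer(e) for e in group_b)

# Stand-in scorer that rewards nothing but length -- an exaggerated version
# of the surface-metric problem described above.
length_scorer = lambda essay: float(len(essay.split()))

gap = score_gap(length_scorer,
                ["The cat sat on the mat today.", "Dogs bark loudly at night."],
                ["Rain fell.", "The sun rose."])
print(f"score gap between groups: {gap:+.2f}")
```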


Now, it may be the case that the existing technology is so weak at assessing semantics that it is currently unacceptable as a lone marking tool. The jury still seems to be out on that one. What we do know is that algorithms are rapidly making humans the dolphins of the marking industry. And to say that algorithms are flawed because they cannot quantify the sort of editorial flair and literary pizzazz that enhances the writing of different students is merely to observe that different people like different things, a reality that’s unlikely to change anytime soon.