Relinquishing Our Humanity

04 Mar 2021 - Christina Eichelkraut

The inadvertent consequence of well-intentioned technology

[Image: a woman in a tank top using a VR headset, surrounded by computer monitors. Photo by cottonbro studio]

Technology journalism tends to be either blindingly optimistic or blatantly dystopian.

Yet regardless of how in-depth the analysis, or how deeply into technical minutiae an article or podcast delves, the most pressing question is rarely addressed:

What happens when we remove humans from inherently human interactions and replace them with technology?

Sadly, the answer to this question is already well known.

The impact of turning human interactions into automated processes, whether augmented by technology or replaced by it outright, is rarely positive. When weapons systems become automated, warfare and policing become dangerously unpredictable. When grocery stores care more about profits than people (or at least customer service), shoppers end up waiting endlessly at a stymied self-checkout machine for a human to make it work again.

Placing technology between two humans seems to make it easier for people to harm one another, as the famous Milgram shock experiments demonstrated. It is no accident that military training rewires soldiers’ brains to accept killing by way of technology, whether sophisticated virtual combat simulations or cabbages filled with ketchup to mimic blood spatter. People who would never dream of calling another person a rude epithet do so on social media without hesitation.

Even the best examples of technology journalism eschew kicking over this rock of existential implications.

Last September, National Geographic published an exceptionally well-done article titled “Face-mask recognition has arrived—for better or worse” by Wudan Yan. The article highlights a company developing software that identifies whether or not a person is wearing a mask.
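
For readers curious about the mechanics, here is a minimal sketch of how such a pipeline typically works – a face detector feeding a binary mask/no-mask classifier. This is not LeewayHertz’s actual implementation; the model file, input size and decision threshold below are hypothetical stand-ins.

```python
# Hypothetical sketch of mask-recognition inference, not LeewayHertz's code.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# OpenCV's bundled Haar cascade proposes candidate face regions.
face_finder = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
# "mask_detector.h5" is a placeholder for some pretrained binary classifier.
classifier = load_model("mask_detector.h5")

def label_faces(frame):
    """Return a (bounding box, 'mask' or 'no mask') pair per detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_finder.detectMultiScale(gray, 1.1, 5):
        # Crop, resize and normalize the face for the classifier.
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224)) / 255.0
        p_mask = float(classifier.predict(face[np.newaxis])[0][0])
        results.append(((x, y, w, h), "mask" if p_mask >= 0.5 else "no mask"))
    return results
```

Note what is absent from that loop: consent, context, and any notion of why a face might be uncovered. Everything the rest of this piece worries about happens downstream of that “no mask” string.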

The intentions behind the technology are the epitome of altruism. Akash Takyar wants to use his software company, LeewayHertz, to stop the public shaming of people who do not wear masks in public.

Takyar’s initiative is an excellent example of a person willing to leverage his expertise and resources to benefit society. He should be congratulated for putting time, effort and money into addressing one of the most pressing challenges of the current pandemic.

Still, the technology raises a host of questions and problematic issues, which Yan dutifully runs through in the article.

Yan examines how civil liberties encroachments, even those purportedly “good for you” at the time, have historically proven nearly impossible to reverse. She points out that private companies would have intensely personal information at their disposal, taken without express consent. This, in turn, raises more urgent questions about how those companies will – or won’t – be able to protect that data from breaches.

(Taking that a step further, another question begging to be asked – one not covered in Yan’s article – is how else that data might be monetized, or to whom it might be sold in the future.)

On the more sinister side of the spectrum, there is the potential for unjust prosecution (after all, some people cannot afford a new box of masks or filters every few weeks). Yan also delves into the known built-in biases of current machine learning models, which have repeatedly proven problematic for people of color, people of varying heights and even women.
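
To make that concern concrete, here is a toy sketch of the kind of per-group error audit researchers run on models like these. The groups, predictions and ground-truth labels are invented purely for illustration.

```python
# Toy illustration of auditing a classifier for per-group bias.
# All records below are invented for illustration only.
from collections import defaultdict

# Each record: (demographic group, model said "no mask", truly "no mask")
records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, predicted, actual in records:
    if not actual:                 # the person was actually wearing a mask
        negatives[group] += 1
        if predicted:              # ...but the model flagged them anyway
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate {rate:.0%}")
```

When the false-positive rates diverge sharply between groups, the model’s mistakes – and any enforcement built on them – fall disproportionately on one population.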

In a laudable example of complete coverage, Yan interviewed experts who acknowledge that public policy is built on data. Knowing how many people comply with mask mandates in a given area or environment is undeniably useful in crafting regulations or policies that could effectively combat the pandemic. The article is an example of journalism at its best, both in terms of content and technical execution. Yet for all its exemplary analysis, one question remains unasked: what happens to society when software replaces humans in an interaction that requires humanity?

The missing piece we just can’t automate

It’s not that technology is inherently bad, of course. But solutions to social problems are usually found by tapping into our humanity itself.

People like Christian Picciolini didn’t leave the hate-filled world of white supremacy because they were canceled, shunned and dogpiled by social media keyboard warriors; they left because of empathy – something no technology can replicate or, in the case of mask recognition technology, substitute for.

Nor can we legislate people into morality. Attempting to, and using technology to enforce said legislation, only makes things worse. And technology won’t necessarily address the core issues that lead to societal ills.

Our toxic online environment has exacerbated the growing absence of empathy, compassion and reason in society. Increasingly those parts of our humanity are replaced with knee-jerk outbursts of rage disguised as indignation or moral self-righteousness, all shouted in the furious cadence of clacking keys.

The rise of “cancel culture” exemplifies this phenomenon. Many employers know firing someone – taking food from their table, putting the roof over their family’s head in jeopardy, robbing them of the dignity of work (which psychologists have long acknowledged is the real value of earning a paycheck) – is no light matter.

Yet today many people who are outraged at corporate callousness when it comes to pink slips due to automation don’t hesitate to jump on board a hashtag campaign to have a complete stranger fired for voicing views that oppose their own.

Never mind the offender’s right to free speech (no, hate speech is not a legal construct; that does not mean it should simply be tolerated, either, but that is a different article). Or the person’s right to equal protection under employment law.

Never mind that a career that took decades to build is gone in a matter of hours, with no opportunity for the offender to learn or grow from the experience – or even to simply live their life in ignorance. The person has been deemed morally or ethically wrong by those privileged enough to enjoy technology and unfettered internet access, the only jury with jurisdiction and authority anymore. There is no room for human mistakes, misguided actions or even plain ignorance online. Ruining a career and livelihood is somehow just, to be lauded even.

Given that people say things from behind the shelter of a screen they would not otherwise say, this makes sense to some degree. After all, words on a screen typed into a comment box will never reflect back to the typist the worry and fear in the eyes of a person with unpaid bills, or who has accumulated years of specialized education and experience, all rendered irrelevant in a matter of hours, sometimes for youthful mistakes decades in the past.

Yet with every online firing caused by digital mob rule, the value of all our livelihoods is diminished.

No, it’s not just social media

Social media is often pointed to as the culprit for this phenomenon, but it is useful to cast the net a bit wider and remember that social media is just one of many technologies we place between ourselves and other humans. There are restaurant kiosks, phone menus and automated suggestions for what music or movie to stream next.

Convenience is great, but it’s naïve to think it comes without a cost. That cost may be our very humanity.

Now we are allowing – no, asking – technology to absolve us of our own compassion, empathy and reason, long-term consequences be damned. If an ML glitch sends someone to jail for not wearing a mask (without anyone asking why that might be), then we as individuals can pretend we’re not culpable for enabling that Orwellian environment to exist. After all, it’s for the greater good. And we didn’t engineer the program, purchase it, or decide to deploy it. We’re just in the public spaces where the technology operates.

Technology certainly has a place in society. From microwaves to prosthetics and smartphones to adaptive technology, it undeniably propels humanity forward. The best technologies augment humanity, oftentimes by enabling it. A visually impaired person reads by way of an audiobook. A power wheelchair user enjoys mobility and independence. Physicians with automated offices are able to spend more time with patients – an increase in human interaction that benefits both parties.

Technology that removes humans entirely, such as drones or restaurant table payment kiosks, doesn’t generally benefit society as a whole. It exacerbates problems like polarization, economic stratification and the spread of misinformation. Or, in the case of mask recognition, it automates one of the uglier tendencies of human nature, ultimately replacing the hard work of empathy and compassion with automated, soulless persecution. That is not Takyar’s intent. And it shouldn’t be ours, either.