pastagang blog

Kill the “User”: Musings of a disillusioned technologist

Unrestrained, raw technological possibilities are scary. That is why it is the duty of the corporation to package, sanitise, file down, and finish with a coat of beige (or the current Pantone colour of the year, if we’re being extra fruity), lest “users” hold them in the wrong way.

Of course, in the interests of maximising profits/access to capital and, on a personal level, advancing careers, the “user” cohort is interpreted to be the largest segment of the total addressable market that stakeholders can barefacedly get away with.

This of course has a number of interesting implications: the first is that the requirements now demand a sort of universal accessibility. “User” “Experience” designers typically attempt to achieve this by heavily optimising for the “happy paths”.

This is where the first tension and battleground comes in: defining what exactly the happy paths actually are. “User”-centric design would argue that the happy path is one that allows the “user” to achieve their intended goals under normal circumstances. Unfortunately this view is heavily disincentivised due to the ruthless logic of extractivism. Instead, the happy path is increasingly determined to be the goal the business wants to achieve by manipulating and exploiting the “user’s” available resources, data, and ultimately time.

This is where the second implication comes in: the push towards universality means that there is a tendency to adopt a sort of minimalistic approach, lest “users” from a diverse bevy of cultures, languages, and contexts become overwhelmed by the volume of information required to interact with the heavily curated “experience”.

This reveals a second battleground: this minimalistic approach by necessity constrains the interaction surface, despite the pixel density and computing resources available to “users” increasing year after year. This leads to interfaces that radically change over time as corporate fiefdoms gain or lose prominence and executives helicopter in with the latest strain of brain worms.

Which brings me to the third implication: we live in an attention economy. Businesses compete for eyeballs and endless growth. We are bombarded with adverts, calls to action, and increasingly desperate attempts to monopolise our time and energy. This is a units game: cattle, not pets.

“Users” are a commodity, a hot one perhaps, but like any other commodity they can be bought and sold. In such an environment, goes the line of reasoning in the mind of the average executive, does it not make sense to heavily prioritise onboarding alongside “user” acquisition, so that “users” won’t immediately give up, or get distracted, or, gasp, go to a competitor?!

This line of reasoning continues, smirkingly, disturbingly: as in any abusive relationship, the “user” is locked in, or failing that, drained of any conceivable value. These shambolic souls are forced to work around sharp edge after painful sharp edge, cutting themselves on easily fixable issues continually postponed to the next quarter in favour of the latest shiny fad.

That this is a house of cards and that user grievance spread by word of mouth can ultimately bite the corporate personhood in the proverbial buttocks is of little matter to the average executive, given typically precarious leadership tenures and of course the wad of fuck-you money hidden under the mattress, no doubt some of it due to self-dealing and a little bit of securities fraud on the side, as a treat.

This short-term thinking, where even the value of loyalty is calculated to the cent, means that the ability to develop mastery over one’s tools is derided as a design goal, and that statistical models, oftentimes poorly conceived ones, drive the evolution of the “user” “experience”.

This thinking is also self-perpetuating: “users” are manipulated, forced to confront a constantly shifting, destabilising landscape, and treated poorly once they’ve passed the onboarding phase. They eventually “churn”, the preferred industry term for an agonising, oftentimes downright violent assault on our digital sanity, followed by a forlorn, sometimes futile, search for a less abusive alternative. This opens up room for “disruption” in the form of smaller competitors, which are eventually engulfed by the corporate behemoths, run out of suckers to front their operations, or, increasingly, are legislated out of existence.

I am of course not the first person to remark on this process; a term for it has been gaining some traction: enshittification. However, I am not writing this in order to retread the same points, but to ponder what comes next.

Because it is clear that our epistemological lenses are cracked and that “user” “experience” design has failed us.

“User” is abstraction, homogenisation, commodification, interchangeability, and ultimately manipulation. It is a word laden with unpleasant connotations of addiction, and of course it conjures the spectre of colonial exploitation via gunboat diplomacy and the sale of opiates to the masses.

Meanwhile “Experience” implies something sterile and dead, a curated natural history museum filled with captured specimens and the nauseating smell of formaldehyde, aimed at “delight” (titillation), instead of a deeper, more respectful relationship with the people that are living through the numbered days of the torment nexus.

As an angry aside, “Accessibility” is a co-opted buzzword that prioritises metrics over disability justice.

How can we work within these kinds of frameworks when they are so broken?

I don’t claim to have any definitive answers, but I have a few ideas. They all entail killing the “user”.

Personal Computing

Taste is an ugly word. It carries a value judgement, deployed whenever someone dares to transgress one’s delicate sensibilities.

We have been in the thrall of taste for too long, a view perhaps most promulgated by the ghoulish spectre of the Jobsian cult. In their worldview, the “user” is an antagonist of sorts who must be paternalistically prevented from embarrassing themselves by committing the cardinal sin of bad taste.

What would modern computing look like if people were free to own and customise their software environment in ways that would disgust, befuddle and challenge the original designers?

In other words can we kill the model of the “user” as the passive consumer, the lost lamb that must be gently directed down the chute, and instead let them into our treehouse as fully fledged collaborators?

This is not easy, especially given that the idea of personal computing is one that grows increasingly foreign to upcoming cohorts of tech workers, many of whom have never had the freedom to just be absolutely cringe and shape the digital firmament to their liking.

MySpace pages, media player skins, the eye-searingly gaudy Hot Dog Stand theme. These were all vectors for self-expression. But to offer these freedoms, you have to trust the person operating your system, honour their agency, and respect them as a serious intellectual partner instead of a gibbering idiot to be manipulated for your own ends.

Of course corporations would be hard pressed to cede any power to their “users”, lest they develop ideas above their station.

Nevertheless, personal computing needs to make a firm return to normative design practices.

The “user” is dead. Long live Personal Computing!

Dignity Design

Taylorism, also known as scientific management, has a lot to answer for. Its aim was to curtail the power afforded to the labour force. Statistics was wielded as a cudgel to punish and manipulate workers, and expertise was coercively eliminated, replaced by the model of the labourer as a cog in the machine. Given this, it was no surprise that the idea was wildly popular with the managerial class.

“Users” today are surveilled and scrutinised in much the same way as workers under a Taylorist regime (or at least its late-capitalist manifestations). I think this is no accident.

Because most executives hate serving their “users”, almost as much as they hate their employees. And even if they don’t, they exhibit a profound lack of interest in how their software actually makes “users” feel.

Delivering value to “users” feels icky, and if company leadership could find a way to hermetically sustain growth without the human element, they undoubtedly would choose that path.

On the other hand, metrics tell a neat little story. They can be optimised for, and endlessly quoted, and don’t raise troubling questions like:

⁃ Does the “user” feel respected by the software?
⁃ How does this software affect the mental health of the “user”?
⁃ How does the software fit into the rest of the “user’s” lifestyle?
⁃ Does this software help the “user” perform a task or entertain them without coercion?

That is not to say that qualitative “user” “experience” research is entirely absent from the product development story, but the questions being asked are co-opted by self-serving business interests.

Can we instead promote a design and research practice that centres the feelings and well-being of the people who use your software artefacts?

The “user” is dead, long live Dignity Design!

Folk Software

“Legacy” is a label applied to software that typically elicits disgusted noises from devs who have been tasked with onerously bending at least nominally working software into a new, tortured shape, to execute the whimsical will of the business before the end of the quarter.

Calling something “legacy” code is just as much a mind-killer as fear. It aims to lay the blame at the feet of a previous cohort of software workers, while simultaneously motivating for permission to get rid of the “cruft” and rewrite the code to deliver a “leaner”, better-architected experience.

This request surprisingly often gets stakeholder buy-in despite its increased risk, loss of organisational memory, and potential for the reintroduction of defects.

And the worst of it is, it is all understandable and logical if you consider the individual actors that are interacting within the system.

Technologists live in a destabilising regime. Layoffs are a constant spectre for the tech worker. The most popular software development tools, languages, and methodologies are constantly evolving, dying, growing, being reinvented, or reborn. Things just stop working one day, and it eventually becomes just too much of a hassle to continue to find out why.

Simultaneously there is a constant scramble to stay “current”, to reach for the new, lest you as a professional be relegated and discarded as the old.

Because the tech industry feeds on the verdant and naive energy of the young; spinning mnemonics and buzzwords and dangling lofty dreams of financial prosperity and security in front of their faces in a sort of Faustian bargain: that as long as you remain compliant, and don’t speak up, and don’t cost too much, and are straight enough and pale enough and able enough, and young enough, and of course man enough, you too may be able to join the club, one day.

And then there is the constant pressure to deliver, there is just no time, never enough time! This is about survival. Don’t you know that it’s a jungle out there?

Documentation of course becomes an afterthought.

In such an environment, long-term thinking doesn’t matter: all this code will be gone in twelve months anyway as it gets rewritten for the umpteenth time, and documentation likely isn’t factored into your performance evaluation (a process which has increasingly returned to the bad old days of stack ranking, turning colleagues into rivals in the name of personal and professional survival).

So, the rewrite. Because the working things are opaque and mysterious, and to be honest feel a little lived-in and smelly right now. And it’s easier to just invent things from first principles, because that’s what true rigour demands, right?

But of course it doesn’t have to be that way. And we’ve seen it. From the frankly astonishing run of COBOL in the banking sector, to the longevity of open source projects such as the SQLite library, the Linux kernel, and even graphical applications like Blender.

While it’s hard to draw concrete lessons from such a diverse array of long lived software projects both free and commercial, there are some striking similarities to the idea of folk music.

Folk music is enmeshed in a particular culture. It is knowledge transmitted across generations, but which evolves to meet the challenges of the times and changing sensibilities. The tunes are typically simple and emotional, the lyrics similarly so. The instruments used are frequently the traditional, or perhaps more accurately, the accessible.

The original authors are typically anonymous, or at least lost to time. Different performers may interpret the songs in very different ways, and in doing so act as both custodian and curator of ancestral knowledge.

This is “low” art, created by and for the people: labourers, slaves, foot soldiers, and of course mothers rocking their children to sleep.

In folk software, the dictator is dead, no matter how benevolent, or failing that, their role at least is greatly diminished in importance. The community (or employees) have a loud voice that reverberates with the clacks of many people typing. Folk software is enmeshed in a culture, and is transformed by it.

The software has a distinct, compelling purpose. It is conservatively built from the traditional technology of the day, though with an eye towards the future and a willingness to adapt once a paradigm or tool has sufficiently proved itself.

It avoids the pitfalls of capital ‘D’ Design, taking a largely evolutionary and caretaker approach to feature development and maintenance.

Folk software evolves to meet the needs of the current moment, and has well-trodden paths for inducting contributors into the code base.

Finally, it is a “low” art, created by the people and oftentimes for the people. It’s made by single mothers trying to put their kids through school and mercenary clock-watchers (in the case of commercial software), and by the hopelessly nerdy, the neurodivergent, the bored 17-year-old, and a disproportionately high number of trans fem lesbian cat girls with anime profile pictures (in the case of open source).

It is a model that asks you to gently embrace tradition, while allowing for gradual change. In many ways it is an antidote to the existential drift that is mandated by the grift of hyper-growth.

The “user” is dead, long live Folk Software!

Small Software

Folk software focuses on the transformative power and utility of cultural transmission and sharing and how “many hands make light work”.

In contrast, small software asks a much more intimate question: is there worth in solving a tiny problem, or making a small game, an interactive poem, or a visual novel? Something you make just for you, or for your home, your friends and family, your local community, your D&D group, your sports club?

Because in a world delineated by the overreaching ambitions of those building everything apps, and the walled gardens, and the tasteless, soul-devoid wastelands populated by AI ghouls and hungry content juggernauts, there is something refreshing about the idea of people making small, meaningful things for one another.

The “user” is dead, long live Small Software!

Cyborgs

This last section is one I have been mulling over the longest. Yet it has also been the most difficult to write and conceptualise. I think that is both because it touches on core parts of my own identity, and because it is a strange, particular lens through which to view the world.

The core of this model is the argument that we already live in a transhumanist society. We are now technic creatures, and our early childhood development is increasingly shaped and mediated by technology. I have sent monetary aid to other transgender people on the other side of the world at the press of a button. I know within minutes when a major disaster happens in a country I’ve never even been to. I can determine exactly where on this beautiful, dying, blue-green orb I am, down to the metre.

That my perceptions and my ability to react to such stimuli have grown by many orders of magnitude within my lifetime is truly remarkable, and it seems a very minor quibble that this is achieved by interfacing with my visual cortex and auditory systems using sleek rectangular boxes made of glass and computational sand, instead of the brain-computer interfaces beloved by Science Fiction.

An argument against this framing could be made that we have always been tool users and that these technical artefacts of society are just a new kind of tool.

And while there is a tension to be explored in that argument, I would like to invoke the imagery of the spectrum, and insist that it is degree that matters. When we feel lost without our phones, when we feel itchy when there is no signal, when our heartbeat is monitored around the clock, and every twitch logged, how can it be argued that we are not something new and different? (And that is even omitting assistive technology and disability from the conversation)

This difference is not necessarily superior, mind you (I hope that anthropocentrism will one day be firmly banished to the shadow zone).

But if we are no longer in the realm of the purely human, how can we describe ourselves, and what does this lens reveal about how we think about and treat technology, and just as importantly, what does it say about how technology treats and relates to us?!

I am a cyborg.

This admission feels strange to say, yet so pedestrian. Perhaps it’s because I have been thinking about it for a while, or perhaps it’s because my gender transition has uprooted my identity in ways that have allowed me to discard some of the other accumulated detritus and the hazy, dissociated memories of childhood.

I am a proud cyborg.

This declaration is slightly more difficult. Culture decries the new (and perhaps this scepticism is a reasonable response given our recent history). Remonstrations about screen time and square eyes and the resultant forced socialisation with “other” boys who don’t play gently and are oh so loud and obnoxious, still echo through my mind.

Yet I am still proud. Because my biological form was remade using technology. And even before that, technology shaped me, cocooned me, until the day I was ready to understand it, or at least some of it.

It is also why I am so damn angry right now.

Because I see the destruction. And the sheer effort it will take to rebuild our consensus reality. And I cry at the assault on our very bodily autonomy at the most fundamental of levels.

But that is just an aside. Because again, I ask: what are we now? And this is an urgent question. Because we don’t have much time, and the technology that has made us has been warped. Or maybe it was always warped, and we were naive. Or maybe it was always warped, but if we stood at just the right angle, we could see a representation of ourselves that had vestiges of rightness. I don’t know.

Again what are we now?

I think the oligarchs would have us all be “users”: fated to be unthinking consumers of unhealthy slop, doled out grudgingly by a memetic echo machine that can do nothing but regurgitate half-remembered truths from a less broken reality.

I decline. I am no “user”!

What are we then?!

We are sovereign. Cyborg people who have inner lives shaped and mediated by technology. But our bodies are ours and our minds are our own, and as sovereign beings with dominion over both, we must demand that this simple, incontrovertible, yet revolutionary fact be recognised.

Death to the “user”, death to algorithmic tyranny, death to the creeping hands that seek to control the manifestation of our embodiment, whilst manufacturing unreality in order to destabilise our minds.

And most importantly death to the “user” so that the cyborg might live.