Designing for trust: why UX ethics matter

“Good design” is supposed to make things better. But what happens when “better” is defined by business metrics rather than by people?
We’ve built a world where digital products are too often judged by clicks and conversions rather than by the dignity and wellbeing of the people who use them — and the line between persuasion and manipulation has never been blurrier. If we, as designers, ever wonder why so much of the web feels engineered to frustrate, exploit, or simply wear users down, this is the answer.
So the important question here is not what we can design, but what we should (and why we often don’t).
After all my years in design, first in architecture and then in UX, I am still amazed by how differently people define “good design” — and how those definitions shift depending on one’s vantage point. To most people, good design is simply about things looking great. To users, it’s also about how well something works. To managers, good design is whatever delivers results and meets business goals. And to designers… well, that is a bit of a complicated story.
Designers themselves, one might think, would have the broadest and most nuanced understanding. After all, we are trained to balance aesthetics, usability, and business needs. Yet even within our own community, there is a persistent blind spot: the ethical dimension of design. Too often, design ethics is reduced to questions of professional loyalty — protecting client secrets, honoring NDAs, or avoiding plagiarism — while the deeper ethical questions, those that concern how our work shapes users’ autonomy, wellbeing, and trust, are seldom given the seriousness they deserve.
Sometimes this happens because, as humans, we tend to shy away from difficult conversations and ethical disputes. Sometimes it’s due to a misplaced sense of “professional loyalty” that discourages us from questioning our bosses’ or clients’ priorities. Sometimes it’s because we don’t think it matters: there are more than enough designers on the market, and if we make noise, we’ll simply be replaced by someone who doesn’t give a damn, with the only result being that we lose our jobs. And sometimes, quite simply, it’s because we were never taught to think about these issues in the first place.
“Design is not just what it looks like and feels like. Design is how it works.” — Steve Jobs
Not really. Design is not just about how great things look, but neither is it just about how smoothly they function. Of course, it matters when products help us work more efficiently, travel more comfortably, or even make better coffee, and it is nice when they also look great — but that is only what shows on the surface.
In the deeper layers, design is also about how products impact users, shape their behavior, direct their choices, and encode values, often invisibly. Unfortunately, ethical questions all too often get lost beneath the surface of usability, desirability, and business metrics. When we start to measure success in clicks, time spent, and money earned, the ethical dimension becomes easy to overlook or rationalize away. This is why, when we talk about “how it works,” it is so important to also ask: on whom, and to what end?
Originally, UX was about seeing users as individuals with their own needs, vulnerabilities, and rights. Designers were expected to solve real problems, think in systems, and make sure that products served the broader good, not just business or technological progress; UX was meant to bridge the gap between user needs and business goals. Unfortunately, as digital products evolved into gazillion-dollar, growth-obsessed ecosystems, that balance shifted.
“The greatest danger of manipulation is that it can become invisible, normalized, and woven into the fabric of everyday life.” — Shoshana Zuboff
In a world where digital products grow ever more sophisticated and business models ever more aggressive, the original human-centered nature of UX is less and less obvious. Commercial imperatives take precedence over almost everything else, and UX designers are pressed to prioritize “business impact” over user wellbeing. The tools of persuasion, used for years to gently guide users toward value, are now misused and reframed as instruments of manipulation. Dark patterns that trick users into doing things they never meant to do now generate billions in unintended subscriptions and purchases.
The very skills that were supposed to make technology humane are now, more and more often, being used to take advantage of people.
And what makes this ethical erosion even more troubling is that it is no longer incidental; it is systemic. In many organizations, product roadmaps rarely reference ethical design principles, while KPIs for user engagement and monetization are prioritized on a regular basis. We have developed a professional environment where designers are highly skilled at optimizing user behavior for business goals, but rarely equipped (or empowered) to recognize and address the moral consequences of those optimizations. When success is measured by how effectively interfaces extract attention, data, and dollars, even well-intentioned designers can find themselves complicit in user manipulation. The “triple constraint” of product development — speed, scope, cost — rarely includes ethics as a fourth pillar, and so the cycle continues.
The consequences of this metric-driven obsession are no longer abstract. When Amazon’s 2023 Prime cancellation flow reportedly required users to navigate 17 screens — a digital obstacle course the FTC later characterized as designed to frustrate escape — it wasn’t an anomaly but a blueprint for how far companies will go to retain users, regardless of the ethical cost. Amazon’s internal code name for the flow, “Iliad,” was telling: a reference to a long, grinding epic, and a clear signal that the friction was by design. The process played on loss aversion, distraction, and cognitive overload, using every psychological lever to keep users from leaving, and stood in stark contrast to Amazon’s one-click checkout, so celebrated for its frictionless efficiency.
Europe’s Digital Services Act now outlaws certain manipulative design choices as “dark patterns,” backed by rather heavy fines. This move highlights an unsettling twist: the same psychological insights that once made UX a respected discipline (Fogg’s behavior model, Hick’s Law, cognitive load theory) are now often used as tools for manipulation. The DSA’s ban, together with the EU’s recent legal actions against some major platforms, is an obvious sign that manipulative design has become a serious societal problem. The message behind all this is rather clear: platforms should be accountable not only for what their users do, but also for how their design choices actually shape those users’ actions.
“Ethics is knowing the difference between what you have a right to do and what is right to do.” — Potter Stewart
Unfortunately, the line between what we can get away with and what actually is right is not always clear. In a world more interested in quick wins than in long-term outcomes, it is easy to justify manipulative patterns by pointing to positive business metrics. But we still have to keep asking ourselves: when we design for metrics, are we genuinely helping users, or just squeezing value out of them?
The consequences of neglecting responsibility for users’ wellbeing are visible everywhere — and they are obvious symptoms of a broader problem. When companies deliberately complicate the process of cancelling a subscription, when interfaces are engineered to keep people engaged far beyond their intentions, when users must enter billing information just to start a free trial — these are all design choices that may deliver business results in the short term, but lead to a gradual erosion of trust. They are not isolated lapses; they are signs of a broader pattern in which business goals are consistently placed ahead of user interests, normalizing practices that ultimately undermine the very relationships companies depend on.
The psychological mechanics behind these patterns are well understood: reciprocity, scarcity, social proof, loss aversion. What began as benign nudges, like thank-you messages for user actions, has transformed into “confirmshaming” pop-ups that play on users’ social compliance instincts. Casino-inspired mechanics like variable reward schedules — once confined to slot machines — now dictate when dating apps display potential matches or when e-commerce sites flash “limited stock” alerts. The impact on people is becoming harder to ignore: numerous studies have found that problematic or excessive social media use is strongly correlated with higher rates of anxiety, depression, and other psychological distress. We have learned to mint money from compulsion, and too often we either fail to choose otherwise or, worse, choose it deliberately.
“Technology challenges us to assert our human values, which means that first of all, we have to figure out what they are.” — Sherry Turkle
This is not simply the fault of individual designers; the problem is systemic. Product roadmaps are filled with KPIs focused on attention, extraction, and conversion, while ethical considerations are rarely even mentioned. Most organizations have no process for evaluating the moral impact of design decisions, and few designers are given the authority to push back when lines are crossed. And even when designers do sense that something is wrong, they often lack the support, or even the language, to make their case.
One of the most overlooked roots of this problem is education. Most UX bootcamps and degree programs focus on usability, research, and aesthetics. Ethics, if it appears at all, is treated as a side note — a single lecture or a vague admonition to “do no harm.” The messy, real-world dilemmas (navigating business pressure, resisting manipulative design, advocating for user dignity) are seldom discussed in depth.
The consequences of this educational gap are very real. New designers start their first jobs without the tools to recognize when their work crosses a line. Without the vocabulary and confidence to push back, they may quickly find themselves pressured to implement dark patterns, or to optimize for engagement at the expense of user wellbeing. The result is a profession that too often confuses compliance with ethics, and business loyalty with moral responsibility.
Meanwhile, the tools at our disposal are growing more powerful — and more dangerous. Artificial intelligence can now personalize nudges, test hundreds of variants, and optimize for engagement with ruthless efficiency. The same technology could be used to detect and flag manipulative patterns, to enforce transparency, or to measure the ethical impact of our work — but unless organizations choose to set those boundaries, the default will always be to optimize for what’s easy to measure: engagement, clicks, revenue.
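To make that possibility concrete, here is a minimal, purely hypothetical sketch of what automated dark-pattern flagging could look like: a tiny Python linter that scans interface copy for confirmshaming language. The phrase list, the Finding type, and the lint_copy function are illustrative assumptions of mine, not an existing tool or API.

```python
# Hypothetical heuristic linter for interface copy. Everything here
# (phrase list, rule names, function names) is illustrative, not a real tool.
import re
from dataclasses import dataclass

# Guilt-tripping decline labels commonly described as "confirmshaming".
CONFIRMSHAMING_PATTERNS = [
    r"no thanks,? i (don'?t|do not) (want|need|like)",
    r"i('| a)?m not interested in (saving|improving|protecting)",
    r"i prefer to pay full price",
]

@dataclass
class Finding:
    rule: str   # which heuristic fired
    text: str   # the offending UI string

def lint_copy(ui_strings: list[str]) -> list[Finding]:
    """Flag UI strings that match known manipulative phrasings."""
    findings = []
    for s in ui_strings:
        for pattern in CONFIRMSHAMING_PATTERNS:
            if re.search(pattern, s.lower()):
                findings.append(Finding(rule="confirmshaming", text=s))
    return findings

if __name__ == "__main__":
    sample = [
        "Subscribe and save 20%",                 # neutral copy, passes
        "No thanks, I don't want to save money",  # guilt-trip decline, flagged
    ]
    for finding in lint_copy(sample):
        print(f"[{finding.rule}] {finding.text}")
```

Even a toy rule set like this makes the point: the same pattern-matching that powers engagement optimization can just as easily power accountability; the hard part is deciding to run it.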
“The real question is not whether machines think but whether men do.” — B.F. Skinner
The arrival of AI in design is a double-edged sword. On one hand, it allows for unprecedented personalization and efficiency. On the other, it can scale manipulation to levels never possible before. AI can not only identify moments of vulnerability and tailor messages to exploit them, but also do so invisibly, and at scale. The European AI Act’s prohibition on “subliminal manipulative techniques” is a recognition of just how urgent and complex the questions concerning the use of AI have become.
The thing is, regulation alone cannot solve the problem. The real work must happen within the profession itself.
“Without courage, we cannot practice any other virtue with consistency. We can’t be kind, true, merciful, generous, or honest.” — Maya Angelou
So, what would it actually take to make ethics as real and as natural a part of our daily decision-making as any business KPI?
Maybe the place to start is the way we think about design — and, by extension, how we teach it. Not as a set of tools, but as a way of thinking, with ethics as an inseparable part of it.
The business KPIs will always be there, but they can’t be the only signals we follow. We should care just as much about how informed, respected, and in control people feel moving through a flow, as we do about how quickly they do it.
We need to empower designers to speak up, and give them institutional backing when they do.
And finally, we need to recognize that the real impact of our work is not just what users do, but who they become.
“Not everything that counts can be counted, and not everything that can be counted counts.” — William Bruce Cameron
Of course, not all problems can be solved with an algorithm, a checklist, or a new procedure. Design is not neutral; it shapes habits, beliefs, and social norms. It can reinforce power imbalances or foster inclusion, erode trust or build it. As technology becomes more pervasive and persuasive, the stakes only rise. If we want to build a future where people trust the products they use — and the people who make them — we need to treat ethics not as an afterthought, but as a central measure of our success. The challenge is not technical, but moral. It is about having the courage to ask, at every stage: Who benefits? Who’s at risk? And what kind of world are we designing?
The line between persuasion and manipulation in UX is rarely clear, and the pressure to deliver business value often pushes designers into ethical grey zones — sometimes knowingly, sometimes simply because nobody is asking the right questions. As long as metrics are rewarded over meaning, and as long as ethical questions are treated as optional rather than essential, these patterns will keep repeating themselves.
Luckily, nothing is inevitable here. We do have the ability to challenge business-as-usual, to push back when asked to cross a line, and to insist that ethical considerations are built into both our process and our definition of success. This is not about grand gestures or heroics; it’s about making ethics what it should be: a normal and expected part of the job, just like usability or accessibility.
If we want our field to be respected, and if we want to respect ourselves as professionals, we need to start treating ethical choices as seriously as business ones. And if we expect things to improve, we cannot wait for change to come from elsewhere. It begins with each of us, in the moment we choose not to look away from the next ethical dilemma coming our way.
