If Tech Fails to Design for the Most Vulnerable, It Fails Us All

Building around the so-called typical user is a dangerous mistake.

What do Russian protesters have in common with Twitter users freaked out about Elon Musk reading their DMs and people worried about the criminalization of abortion? All of them would be better protected if the companies developing these technologies adopted a more robust set of design practices.

Let’s back up. Last month, Russian police coerced protesters into unlocking their phones to search for evidence of dissent, leading to arrests and fines. What’s worse, Telegram, one of the main messaging apps used in Russia, is vulnerable to these searches. Even just having the Telegram app on a personal device might imply that its owner doesn’t support the Kremlin’s war. But the builders of Telegram have failed to design the app with personal safety in high-risk environments in mind, and not just in the Russian context. Telegram can thus be weaponized against its users.

Likewise, amid the back-and-forth over Elon Musk’s plan to buy Twitter, many people who use the platform have expressed concern over his bid to foreground algorithmic content moderation and to make other design changes on the whim of his $44 billion fancy. Taking recommendations from someone with no framework for assessing the risks and harms to highly marginalized people leads to proclamations like “authenticating all humans.” This appears to be a push to remove online anonymity, something I’ve written about very personally. It is ill thought through, harmful to those most at risk, and backed by no actual methodology or evidence. Beyond his vague calls for change, Musk’s previous actions, combined with the existing harms of Twitter’s current structures, make it clear that we’re heading toward further impacts on marginalized groups, such as Black and POC Twitter users and trans folks.

Meanwhile, the lack of safety infrastructure is hitting home hard in the US since the leak of the Supreme Court’s draft opinion in Dobbs v. Jackson, which shows that the protections provided under Roe v. Wade are under mortal threat. With the projected criminalization of those seeking or providing abortion services, it has become more and more apparent that the tools and technologies most used for accessing vital health care data are insecure and dangerous.

The same steps could protect users in all these contexts. If the builders of these tools had designed their apps by focusing on safety in high-risk environments—for persons who are often seen as the more “extreme” or “edge” cases and therefore ignored—the weaponization that users fear would not be possible, or at the very least they would have tools to manage their risk.

The reality is that making better, safer, less harmful tech requires design based on the lived realities of those who are most marginalized. These “edge cases” are frequently dismissed as outside the scope of a typical user’s likely experiences, yet they are powerful indicators for understanding the flaws in our technologies. This is why I refer to these cases (the people, groups, and communities who are the most impacted and least supported) as “decentered.” The decentered are the most marginalized and often the most criminalized. By understanding and establishing who is most impacted by distinct social, political, and legal frameworks, we can understand who would most likely be a victim of the weaponization of certain technologies. And, as an added benefit, technology that recenters these extremes will always be generalizable to the broader user base.

From 2016 to early this year, I led a research project at the human rights organization Article 19, in conjunction with local organizations in Iran, Lebanon, and Egypt and with support from international experts. We explored the lived experiences of queer persons who faced police persecution as a result of using specific personal technologies. Take the experience of a queer Syrian refugee in Lebanon who is stopped at a police or army checkpoint and asked for papers. Their phone is arbitrarily searched. The officer sees the icon for Grindr, a queer dating app, and determines that the person is queer. Other areas of the refugee’s phone are then checked, revealing what is deemed “queer content.” The refugee is taken in for further interrogation and subjected to verbal and physical abuse. They now face sentencing under Article 534 of the Lebanese Penal Code, with potential imprisonment, fines, and/or revocation of their immigration status in Lebanon. This is one case among many.

But what if that logo were hidden, so that an app indicating an individual’s sexuality was not readily visible to a searching officer, while still letting the individual keep the app and its connection to other queer people? Based on the research, and in collaboration with the Guardian Project, Grindr worked to implement a stealth mode in its product.

The company also implemented our other recommendations with similar success. Changes such as the Discreet App Icon let users make the app appear as a common utility, such as a calendar or calculator, so that in an initial police search they can bypass the risk of being outed by the content or visuals of the apps they own. While this feature was created solely based on the outcomes of extreme cases, such as that of the queer Syrian refugee, it proved popular with users globally. Indeed, it became so popular that it went from being fully available only in “high-risk” countries to being available internationally for free in 2020, along with the popular PIN feature that was also introduced under this project. This was the first time a dating app took such radical security measures for its users; many of Grindr’s competitors followed suit.
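For the technically curious, a discreet icon is not exotic engineering. On Android, one common way to build it is to declare an alternate launcher alias with a generic icon and label in the app’s manifest, then enable one alias and disable the other. The sketch below illustrates that general technique only; the class and component names are hypothetical placeholders, not Grindr’s actual code.

```kotlin
// A minimal sketch of one way to build a "discreet icon" on Android.
// The manifest declares the real launcher <activity> plus an <activity-alias>
// whose android:icon and android:label mimic a generic utility (for example,
// a calculator). Only one of the two is enabled at a time, so the launcher
// shows either the real icon or the disguised one.
// Component names below are hypothetical placeholders.

import android.content.ComponentName
import android.content.Context
import android.content.pm.PackageManager

object DiscreetIconSwitcher {

    // Must match the <activity> / <activity-alias> entries in AndroidManifest.xml.
    private const val REAL_ALIAS = "com.example.app.MainActivityDefault"
    private const val DISGUISED_ALIAS = "com.example.app.MainActivityCalculator"

    fun setDisguised(context: Context, disguised: Boolean) {
        val pm = context.packageManager
        val toEnable = if (disguised) DISGUISED_ALIAS else REAL_ALIAS
        val toDisable = if (disguised) REAL_ALIAS else DISGUISED_ALIAS

        // Enable the alias the launcher should show...
        pm.setComponentEnabledSetting(
            ComponentName(context, toEnable),
            PackageManager.COMPONENT_ENABLED_STATE_ENABLED,
            PackageManager.DONT_KILL_APP
        )
        // ...and disable the other, so only a single icon is visible.
        pm.setComponentEnabledSetting(
            ComponentName(context, toDisable),
            PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
            PackageManager.DONT_KILL_APP
        )
    }
}
```

The hard part, in other words, was never the code. It was deciding that the person at the checkpoint was worth designing for.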

After the success of the Discreet App Icon, and based on the project’s recommendations, Grindr introduced a variety of new features in late 2019 and 2020, such as unsending messages, ephemeral photos, screenshot blocking, and contextualized safety guides, and focused on harm reduction for all users. These changes are now used by people worldwide, from New York to São Paulo, Brazil. Earlier this year, WhatsApp adopted changes for its 2 billion users that centered the harm faced by the queer community in the Middle East and North Africa. These changes are based on my recent report documenting the use of digital evidence against LGBTQ people in Tunisia, Lebanon, and Egypt. Both companies report that these changes led to a positive shift in user engagement, both with the features themselves and with the apps as a whole.
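Some of these protections are similarly lightweight to implement. Screenshot blocking, for instance, maps to a single window flag on Android. The sketch below is a generic illustration of that mechanism, not any of these companies’ actual code, and the activity name is a hypothetical placeholder.

```kotlin
// A generic sketch of screenshot blocking on Android.
// Setting FLAG_SECURE on an Activity's window prevents screenshots and screen
// recording of that screen and blanks it in the recent-apps switcher.

import android.os.Bundle
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity

class PrivateChatActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Mark the window as secure so its contents cannot be captured.
        window.setFlags(
            WindowManager.LayoutParams.FLAG_SECURE,
            WindowManager.LayoutParams.FLAG_SECURE
        )
        // setContentView(...) and the rest of the screen setup would follow here.
    }
}
```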

These are all examples of what I call designing from the margins: building new communication technologies by directly centering highly marginalized and criminalized communities. (My report, Design From The Margins: Centering the Most Marginalized and Impacted in Design Processes, From Ideation to Production, has just been published.) We need to radically shift how we build our technologies if we have any chance of mitigating further harms.

Going back to Telegram, Elon Musk’s Twitter plans, and the impending threats to abortion access and safety: If these apps’ security protocols were informed by the experiences of decentered people in highly policed and risky contexts, Telegram could have offered discreet app icons, so that the app’s mere presence on a phone would not read as an admission of guilt to the Russian police, or it could have implemented a more robust disappearing-messages structure to protect against device checks. It also could have built safer channels to reduce the threats faced by channel administrators in high-risk contexts (as Telegram could have learned from incidents in Iran).

Likewise, if Twitter understood the risks faced by communities living under surveillance and built products for them, it would have moved to encrypt DMs end to end, which would prevent Musk (or anyone else within Twitter) from having easy access to them, a concern heightened by Musk’s worrisome partnerships. But as of today, that technology has not been rolled out, and it’s impossible to truly delete DMs.
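To make the point concrete, here is a minimal sketch of the idea behind end-to-end encryption, using only standard Java cryptography APIs (JDK 11+): each person holds a key pair, the shared key is derived on their own devices, and the platform only ever relays public keys and ciphertext it cannot read. This is an illustration of the concept, not a description of Twitter’s or anyone else’s design; real messengers, such as those built on the Signal protocol, add identity verification, forward secrecy via ratcheting, and authenticated key distribution.

```kotlin
// Conceptual sketch of end-to-end encryption: the server never holds a key
// that can open the message. Not a production protocol.

import java.security.KeyPair
import java.security.KeyPairGenerator
import java.security.MessageDigest
import java.security.PublicKey
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyAgreement
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec

fun newKeyPair(): KeyPair = KeyPairGenerator.getInstance("X25519").generateKeyPair()

// Derive a shared AES key from my private key and the peer's public key.
fun sharedKey(mine: KeyPair, theirs: PublicKey): SecretKeySpec {
    val ka = KeyAgreement.getInstance("X25519")
    ka.init(mine.private)
    ka.doPhase(theirs, true)
    // Hash the raw shared secret into a 256-bit AES key (real protocols use HKDF).
    val digest = MessageDigest.getInstance("SHA-256").digest(ka.generateSecret())
    return SecretKeySpec(digest, "AES")
}

fun encrypt(key: SecretKeySpec, plaintext: String): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(plaintext.toByteArray(Charsets.UTF_8))
}

fun decrypt(key: SecretKeySpec, iv: ByteArray, ciphertext: ByteArray): String {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return String(cipher.doFinal(ciphertext), Charsets.UTF_8)
}

fun main() {
    val alice = newKeyPair()
    val bob = newKeyPair()
    // Only public keys and ciphertext would ever cross the server.
    val (iv, ct) = encrypt(sharedKey(alice, bob.public), "see you at the march")
    println(decrypt(sharedKey(bob, alice.public), iv, ct))
}
```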

As for Musk’s vague nods at quashing anonymity: If Twitter had learned from groups and communities who have to remain anonymous to exist, it would have pushed for more nuanced, practical, and less data-invasive strategies to protect against fake profiles while also protecting vulnerable people, especially those living under dictatorships, from privacy breaches. Take the lessons from Egypt: We know Egyptian police use fake online profiles to entrap and arrest queer people and use what is found on their phones as evidence in court. Yet most queer people in a context where their identity is criminalized need to remain anonymous while connecting online. Their ask is for companies to better detect entrapment accounts without creating new risks for the community’s safety. It is incumbent on companies and experts to find this middle ground without sidelining those most impacted. Progress on this can be made only when companies center the experiences that highlight gaps in their designs from all angles.

As for Dobbs v. Jackson: In 2021, the Digital Defence Fund outlined the steps people seeking abortion support can take to avoid putting themselves at risk by leaving a trove of digital footprints. Experts have rightly pointed out that the major threat isn’t solely data collection via period-tracking apps, but also fake clinics and police device searches. And those who will be most impacted are, of course, communities of color, LGBTQ people, sex workers, and highly disenfranchised communities. If these tools had been built with their cases in mind, with some foresight about the risks and how they could be weaponized, they could not only have avoided the mass exodus of users we’re now seeing with period-tracking apps but also remained go-to resources in a time of dire need. These shifts and changes could protect those who need it most in such a turbulent time, and also make the tools pioneers among their competitors.

No small fixes can address the larger and deeply rooted structural issues that shape how these technologies are deployed or weaponized. But by centering, consulting, and understanding the legal, social, and political issues that impact decentered and marginalized groups, some protections can be provided. Reimagining the importance of cases that deviate from the so-called typical technology user, and placing them at the center of the design process, is not a revolutionary concept; it’s a core practice in many industries. In terms of safety and security, it should be seen as a corporate responsibility in developing tech. As one of the Tunisian lawyers from the Digital Crime Scenes report frames it: “We are in your application, you must protect us.” And when you do, you’re protecting all of us.

It is no longer advisable to design technologies without designing directly for those who are most harmed. In fact, technology is better, safer, more innovative, more robust, and more protective of privacy when those most marginalized are centered from the start of the development process. So the question isn’t whether it can be done, but why it isn’t being done.