How Does Design Impact Our Experience of Technology?
Pete Furlong, Lead Policy Researcher
What does “design” mean in technology, especially for social media and AI?
From login screens to notification alerts, font size to follower counts, our entire experience of a tech platform is influenced by its design. A platform’s design is not a given; rather, it represents a series of choices made by a team of developers and builders throughout a design process.
Not all design choices are visible to the user. A platform’s algorithm — hidden from the user — decides what kind of content gets promoted. TikTok, for example, has a powerful algorithm as an integral part of its design. Hidden design choices in AI products can include the selection of training data, and how the AI model is fine-tuned.
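In simplified terms, a recommendation algorithm of this kind scores each candidate post and surfaces the highest scorers. The sketch below is a hypothetical illustration of that hidden step; the field names and weights are assumptions, not any real platform's code:

```python
# Hypothetical sketch of an engagement-based ranking algorithm.
# Field names and weights are illustrative, not taken from any real platform.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    likes: int
    shares: int
    watch_seconds: float

def engagement_score(post: Post) -> float:
    # Weights chosen by the platform steer what gets promoted:
    # here, shares and watch time count for more than likes.
    return post.likes * 1.0 + post.shares * 5.0 + post.watch_seconds * 0.5

def rank_feed(posts: list[Post]) -> list[Post]:
    # The user never sees this step, only its result.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("a", likes=100, shares=2, watch_seconds=10),
    Post("b", likes=10, shares=40, watch_seconds=60),
]
print([p.id for p in rank_feed(posts)])  # "b" outranks "a"
```

Nudging any one of those weights changes what millions of users see, which is why the weighting itself is a consequential design choice.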
A tech platform’s design often reflects the company’s goals — whether conveying a brand aesthetic, or keeping a user on the platform for as long as possible. Larger goals driving tech design might include financial gain and growing the company’s market share of users.
While user safety can be addressed at the design stage, tech platforms have historically addressed user safety only in response to harms and controversies.
What kinds of design choices do technologists make when they build social media platforms?
Social media design is vast, from viral content being placed at the top of a user’s feed, to the steps a user needs to take to delete their account. Notification colors, the layout of a user’s page, and even the character count for captions are all design choices. Parental controls — or lack thereof — also qualify as design choices.
Many design choices on social media aim to keep the user on site for as long as possible, and maximize engagement.
Social media design choices have had a ripple effect across culture and society. Examples include: display of follower counts; infinite scroll; “read” receipts on messages; algorithmic “for you” feeds (instead of chronological feeds); showing “like” counts; and more.
What kinds of design choices go into building AI? Do they have any overlap with social media?
Anthropomorphic (or human-like) text responses are one of the most notable design choices in today’s generative AI products.
AI companion chatbots are often designed to look like popular messaging apps, such as WhatsApp. They feature typing bubbles to give the impression of an AI companion typing out a reply, and some platforms also offer human-like voice calls. These design choices aim to simulate the experience of talking to a real person.
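The typing bubble is a small but telling example: the reply is usually generated almost instantly, and the delay is added on purpose. A minimal sketch of that trick, with illustrative timing values that are assumptions rather than any product's real settings:

```python
# Hypothetical sketch: an artificial "typing" delay that makes a chatbot's
# instant reply feel like a person typing. All timing values are illustrative.
import time

WORDS_PER_MINUTE = 40  # assumed human-like typing speed

def humanlike_delay(reply: str, max_seconds: float = 4.0) -> float:
    # Delay proportional to reply length, capped so long replies
    # don't stall the conversation.
    words = len(reply.split())
    return min(words / (WORDS_PER_MINUTE / 60), max_seconds)

def send_reply(reply: str) -> None:
    # Show a typing bubble, wait, then reveal the already-generated text.
    print("…")  # typing indicator
    time.sleep(humanlike_delay(reply))
    print(reply)

send_reply("Hi!")
```

The reply exists before the bubble ever appears; the pause is pure theater, designed to make the exchange feel reciprocal.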
Allowing a user to engage with an AI companion for an unlimited amount of time is a design choice. Sycophancy, or endless validation of the user, also qualifies as a design choice in AI companions.
As with social media, these designs strive to keep the user on the AI platform for as long as possible. The longer a user stays on the app, the more data the company can harvest, and the more material it has to further train its AI model.
Can a tech product’s design be harmful?
Tech design that doesn’t factor in user safety can — and often does — produce damaging outcomes.
Tech that is designed to keep users on site for as long as possible can lead to dependency. Highly engaging content — whether in the form of viral social media posts, or manipulative AI chatbots — holds a user’s attention captive and can foster addiction.
Vulnerable groups, including kids and teens, may be more impacted by reckless tech design than other populations. But harmful tech design can impact people from all backgrounds, and have a corrosive effect on our society and institutions.
What does safer tech design look like?
Throughout design history, there is a rich legacy of taking safety into consideration with physical products. But safety considerations haven’t been similarly prioritized in mainstream digital products — especially social media and AI.
Safer tech design — or “safety-by-design” — means taking user wellbeing into account throughout a design process. Examples could include mental health prompts if a user expresses thoughts of depression, time limits on app use, and design that creates friction in the process of resharing a post. Instead of trying to maximize user engagement, companies could optimize for user health, community, and shared reality.
What can we do to make safer design a reality?
Right now, technology companies lack clear incentives to think about safety in their design process, usually only implementing safeguards in response to a crisis. Yet risk mitigation at the design stage remains one of the most reliable and powerful ways to minimize harms and build an innovative product for users.
Policy solutions — such as design codes for social media and enhanced liability for AI developers — seek to shift these incentives, ensuring companies have a stake in their users’ safety.