Why designers must reinforce data protection in times of AI

A framework for privacy in an AI-driven world.

A city with flying cars, image generated with AI
Image by andreakushh✨ generated with Adobe Firefly AI

A few weeks back, a recruiter contacted me through LinkedIn to invite me to become a UX/UI technical interviewer for his company and help validate candidates’ technical skills. It sounded like an appealing source of extra income, so I agreed to schedule a meeting for the next day. In the meeting, we never had any visual interaction nor saw each other’s faces, but we had a great talk about the benefits and responsibilities of the role.

We agreed to take the next step and scheduled a second interview for the following day. I noticed the recruiter was not one of my LinkedIn connections, so I asked him three questions regarding my personal information. The questions were basic:

  1. Are these interviews being recorded?
  2. Is my information shared with third parties?
  3. As an employee, do you consider that the company makes ethical and effective efforts to protect employees’ data?

He canceled the meeting and replied that candidates’ information was secure and not recorded. He apologized for the last-minute change and promised to check his availability to reschedule the call that same week.

Those were his last words. I never heard back about the proposal, and eventually I came to doubt the transparency of his words and the trustworthiness of the company. The situation made me analyze the interactions users exchange in the digital world, whether human-to-human or human-to-AI. Since I never saw his face, I never knew if the company was doing research to train a model to perform the interviews or if I was dealing with an actual human.

The Web ecosystem we live in centers its functionality on algorithms and on centralized strategies of mass data generation, analysis, and automated control for decision-making. As a product designer, I was aware that AI is used to identify expressions, emotions, and tone of voice in order to analyze candidates and train algorithms that speed up recruitment processes and ease companies’ workflows.

As this ecosystem evolves, new attributes unrelated to skin color, sexual identity, or national origin emerge in the digital realm. The use of personal information about these attributes, whether explicit or virtual, implicates privacy interests in controlling how data is used, by whom, and for what purposes.

Futuristic drone scanning a person in a nature-tech digital city, image generated with AI
Image by andreakushh✨ generated with Adobe Firefly AI

A few days back, the legislature of the state of Montana, in the US, passed a bill, since signed into law, that bans TikTok from operating within the state. According to POLITICO, it is meant to protect private data and sensitive personal information from the Chinese Communist Party. The law takes effect in 2024, and related measures also cover social media apps of foreign adversaries such as CapCut, Telegram, Lemon8, and WeChat.

This first round of data restriction in the US is a test of people’s behavior toward the new law. Gen Z and Millennials, who are the most likely to consume information through the app, are the target of this analysis: its use is being limited first in Montana, with plans to expand across the US.

Websites and third-party data will face more restrictions in the coming years, and industry leaders should formulate policies and standards to ensure that the usability and accessibility of the Web architecture work in a way that its social aspects are not limited to the Semantic Web.

Google recently announced the launch of Privacy Sandbox, an initiative that aims to protect personal data by phasing out third-party cookies, starting with 1% of Chrome users in Q1 of 2024.

This reality has existed for many years in countries such as Singapore, where the Broadcasting Authority requires registrations and licenses from political and religious websites. It is the clearest example of policy regulations that often use adaptations of firewall technology to create a giant intranet of web clusters within national borders.

A futuristic nature-tech city inside a giant bubble, image generated with AI
Image by andreakushh✨ generated with Adobe Firefly AI

It is also a fact that not everyone should have access to users’ data. The most remembered case in American history is Facebook’s in the 2016 elections, when Cambridge Analytica harvested geographic, demographic, and relational data from more than 80 million American users and used it to target ads on Facebook and influence voters’ behavior.

So why decide to ban TikTok, but not Facebook?

It is interesting to analyze the vulnerability of access control and information sharing in distributed systems, and the question of who has sovereignty over people’s private information, or over individuals’ cognitive processes.

We now understand that brands enter our perception by surfacing information as we scroll. For instance, Twitter’s value proposition consists of providing access to the written thoughts of different users as they scroll down the site, but among all those personal thoughts, Twitter’s algorithm surfaces hundreds of ads and recommended tweets, along with options to follow new accounts.

As a user, you keep scrolling because you are not particularly interested in buying any product from those ads, but all that information enters your subconscious mind, shaping your emotions, creating cognitive biases, and influencing your decision-making when you need to choose between two or more brands. The more ads you unconsciously see, the more likely you are to choose one brand over another.

That is how social media works.

Regulations that go beyond what is established in the GDPR, and that relate not only to demographic, racial, or geographical information but also to mental and biometric models, should be reinforced and enforced by industry leaders and corporations.

Mental, physical, and behavioral characteristics such as emotion, facial, auditory, expressive, or fingerprint recognition, used to grant access to systems, devices, or web data, become vulnerable to malicious attacks in immersive realities.

Biometric databases can be easily hacked, exposed, or replicated by control systems that may or may not be related to criminal organizations, government representatives, or AI cyber entities created to identify, segment, target, and track individual data.

Misuse of information, data leaks, the potential longevity of stored information, the difficulty of identifying privacy breaches, and restrictions on freedom of speech should be the main concerns of new generations.

According to studies from Insider Intelligence, the risk of Gen Z and Gen Alpha consuming fake news is greater than for the generations that precede them. This influence on human behavior is why industry leaders are fighting for sovereignty over social media. Even though these generations are the most open-minded and ethnically diverse, they are the least concerned about data privacy and protection.

Social media, working as a unique identifier for each individual who owns an account, can be used to ensure more accurate identification. Although biometric data is meant to be unique, AI systems could generate, collect, or replicate an individual’s identity from any picture or video that users freely agree to upload to social media apps.

Deep beautiful eye with a cosmic magical universe inside, image generated with AI
Image by andreakushh✨ generated with Adobe Firefly AI

How can we address this new reality?

In the web ecosystem, there are two types of users: data generators and data consumers.

The first group of users is aware of the power of algorithms and uses data to influence patterns in human behavior. They research and segment consumer interactions, emotions, and brand perception through performance and lifestyle indicators.

The second group of users is more prone to act and react to certain stimuli; they move and expand based on trends, suggestions, and digital influences.

For both groups, AI has important implications for global security and stability. It can help improve privacy and security, identify and track misinformation and manipulated data, generate more accurate ways to detect cyberattacks, and predict and provide faster solutions to users.

For companies

AI can help learn and analyze customer behavior to predict behavioral patterns, segment interests, and aid purchasing decisions. It can also help to inform, improve, and automate the processes used to manage customer databases and plan strategic marketing campaigns.

Therefore, companies must ensure employee and customer data is protected and secured by setting clear statements about its usage on a legitimate and legal basis.

  • Transparency: Brands must state clearly how personal data is used and stored. Companies shall require prior consent from individuals before proceeding with any interaction, specifying whether personal data is being shared with or processed by other agents or third parties. They must also state what type of system users are dealing with, whether it is a person, a trained bot, or an AI.
  • Explainability: Companies should explicitly describe what and how data is collected, used, and secured. They must provide users the option to agree or disagree without compromising the objective of the relationship.
  • Risk assessment: Companies without solid policies that comply with regulations risk unauthorized access. Therefore, they should perform regular audits to assess patterns or anomalies in systems or databases and mitigate the risks of data attacks or manipulation. AI can help anticipate potential threats.
  • Accountability: Corporate and industry leaders need to ensure that all identifiers are removed from, or protected in, systems and data sets. It is their responsibility to provide honest and ethical practices to consumers and employees.
  • Governance: Beyond country laws and legislation, companies must ensure the rights of trust, quality, accessibility, and data management for all individuals, enabling data stewards to cultivate trust, inform, and promote data integrity for their communities.
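The accountability point above — removing or protecting identifiers before data leaves a system — can be sketched in a few lines. This is a minimal illustration, not a production anonymization pipeline; the field names and salt are hypothetical, and real deployments would use stronger key management and techniques such as k-anonymity or differential privacy:

```python
import hashlib

# Hypothetical records; field names are illustrative only.
records = [
    {"name": "Ana Diaz", "email": "ana@example.com", "age": 34, "city": "Austin"},
    {"name": "Ben Roe", "email": "ben@example.com", "age": 29, "city": "Boston"},
]

DIRECT_IDENTIFIERS = {"name"}          # fields dropped entirely before sharing
PSEUDONYMIZE = {"email"}               # fields replaced with a salted one-way hash
SALT = "rotate-this-secret-regularly"  # in practice, stored and rotated securely


def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed
    and joinable identifiers replaced by salted hashes."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop the identifier altogether
        if key in PSEUDONYMIZE:
            # A one-way hash lets analysts join records on the field
            # without ever seeing the underlying value.
            out[key] = hashlib.sha256((SALT + str(value)).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out


shared = [pseudonymize(r) for r in records]
```

The design choice here is pseudonymization rather than deletion: hashed emails still allow two data sets to be linked for legitimate analysis, while the raw address never leaves the company’s systems.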

For individual users

In addition to regulations and privacy policies, users themselves must guard against potential misuse by third-party attackers while interacting with a company or brand. Names, addresses, phone numbers, emails, and physical identifiers are easily traceable by automated systems and can be at risk when connected to a public network in places like airports, co-working spaces, or coffee shops.

Beyond knowing that anyone (human or machine) can take biometric data from social media for malicious ends, individuals must attend to two main aspects to protect their privacy:

  • Transparency: Users should understand the implications of sharing personal data before interacting with any company or device. They must agree or disagree with the parties’ statements and research who can access their personal information.
  • Explainability: Individuals should research and evaluate the quality and criteria of the audits or assessments a company performs to protect their data. Users must have the right to know, correct, or delete their data from databases.

Web users must be able to rely on these practices and act upon legislation if they suspect their data has been mishandled. Everyone has to help ensure a safe place of mutual interaction based on ethical principles and trustworthiness.

The University of Helsinki has established a free online course designed for those who want to become familiar with the ethical aspects of AI.

Google also shared a policy agenda for responsible AI progress, which helps to establish structures of accessibility in Web Governance.

Final thoughts

An interconnected world of biological, computational, and cognitive systems is merging in such a way that different realities, the mental, the physical, and the digital, coexist as one.

The implications of creating a system of such magnitude that we cannot control could be devastating, and there is still a long way for brands to offer commercial value while providing privacy and security properties. We are still in time to make ethical decisions that protect the interests of the Web ecosystem and its users.

For the ecosystem to contribute to the global well-being of trustworthy technology developments in human-computer interaction, we must design ethical experiences and establish infrastructures that enable security, usability, and accessibility for all users.

Knowledge (data) is power, so keep that in mind as you build individual and collective practices to ensure user privacy without compromising human rights, brand engagement, or technological evolution.

*Side notes:

  1. AI was not used to write this article.
  2. Adobe Firefly Beta was used to create the images for this post.

Why designers must reinforce data protection in times of AI was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.





