How to solve a designer’s biggest fear with startups: wearing multiple hats

Startups offer tons of value to designers if you know how to handle the extra work

To stay relevant, we need to move upstream, not downstream.

Racing to the top | Photo by trail

“I am an AI designer.”

When I was in high school and college, I devoured printed graphic design materials.

They were album covers from labels like Factory Records, posters for artists like Bjork, and magazines like Ray Gun, Emigre, and Japanese publications such as Idea. I admired star graphic designers and design studios like David Carson, Peter Saville, tomato, and the Attik. In design competitions, annual reports made regular appearances. As I couldn’t yet afford expensive design books, I used to spend hours in the university library and the art section of bookstores.

I thought I would follow the path of becoming a graphic designer in the traditional sense.

While I was in college, the Internet, or the World Wide Web, almost overnight, became a thing that people got obsessed about, much like AI today. After graduating, I moved to NYC, where the web industry was rapidly growing. “Web design” suddenly became an option for designers, at least as a career entry point for me.

“Tell the landlord you are a web designer, not a graphic designer.”

That was the piece of advice I got when I was looking for an apartment. That's how much the Internet economy was starting to boom. I borrowed money from my parents to last a few months and lived in a tiny one-bedroom apartment with my twin brother in a sketchy neighborhood in Brooklyn. We got the apartment not because I was a web designer (I wasn't, as I didn't have a job) but because the Albanian landlord took pity on us two young Japanese guys trying to make it in NYC.

Every week, I would look up job listings in a magazine called Silicon Alley Reporter and apply for numerous design jobs and roles.

The combination of being fresh out of school, new to the City, and without connections was a hurdle to begin with. Add "needs an H-1B visa" to that mix, and no employer was willing to take a chance on a designer with no experience. I made ends meet by taking on random freelance web design gigs, like designing and coding web articles for $150 a piece. I could barely get by.

Even with the "web designer" title on my resume, still no company would hire me. With a lot of time on my hands over many months, I reworked my printed portfolio into an interactive online portfolio. I had double-majored in art and computer science in college and knew a little about programming languages like C, C++, and Java. In addition to traditional design and art pieces, I had a few interactive typographic pieces, inspired by another role model of mine, John Maeda, who had told me to learn programming a few years prior.

It wasn't the fact that I claimed to be a web designer that helped me land a job. It was my design taste mixed with technical skills that ended up differentiating me as a designer. My book wasn't necessarily better than other designers'. It was different.

If you tell your prospective landlord today that you are a web designer, your application will likely be denied. In recent years, landlords in NYC might be favoring job titles like “digital product designer.” In 2023, “prompt designer” or “AI designer” might give you a better chance.

“The term ‘AI artist’ irks me,” I heard an artist grumble recently. “If you are asking AI to do the work for you, you aren’t really an artist.”

While I can sympathize with this artist's sentiment, it won't stop the wave of AI, or the flood of mediocrity it generates.

A cautionary tale

Speaking of AI and people revolting against technology, there are several analogous cases from the past. One such case is elevator buttons.

The first elevator buttons were invented in 1892. Until then, elevators needed trained human operators to function. The invention made it possible for passengers to simply press a button and be taken to the desired floor.

“So what happened to elevator operators? Nothing.”

Kevin Roose, a New York Times technology columnist, told this story on the podcast "Hard Fork" a few weeks ago, referring to the wave of automated elevators that operators once feared would take their jobs. That week, he was attending the Cannes Lions Festival of Creativity, where he heard an abundance of talks about AI but observed little innovative or revolutionary use of it. The point Roose was making with the elevator story was that it might take longer for AI to take over human labor and tasks than we currently anticipate.

In reality, it took a long time for elevator buttons to become common. Over 50 years in fact.

Throughout the first half of the 20th century, elevators remained human-operated. The core issue in automated elevators was mistrust. Humans just didn’t feel safe in elevators without other humans operating them.

From the 1920s to the 1960s, there were several strikes, notably in New York, Philadelphia, and Chicago, by elevator operators mainly protesting low wages and brutally long working hours. The irony of these strikes is that they became the catalyst that sped up elevator automation.

The strikes forced elevator manufacturers to think of ways to make riders feel safe. In the few years following a massive 1945 strike in New York City that cost the City an estimated $100 million in economic losses, companies began introducing human interface elements that would give riders a better sense of safety: emergency stop buttons, embedded phones to connect riders with remote operators, and alarms.

Finally, in 1950, the first operatorless elevator was installed at the Atlantic Refining Building in Dallas. That marked the beginning of the end of elevator operators. "By the 1970s most elevators operated without human operators," writes Henry Lawson Greenidge in his article "How A Historic Strike Paved the Way for the Automated Elevator." Resistance proved to be futile.

This may be a cautionary tale for the Writers Guild of America.

More with less

“I have to work much harder now.”

A seasoned photographer friend of mine lamented over afternoon coffee a few years ago.

In his fifties, he has been a still-life photographer in New York City for over 30 years. His expertise is shooting cosmetics products for major brands like Estée Lauder, Shiseido, Clinique, and MAC, among others, and his work has graced many magazine pages, posters, and billboards. Each photo he takes is carefully crafted, curated, and choreographed. In his field, he is established enough that he could command, I speculate, five figures per assignment relatively easily. If he does one or two of those assignments a month, he can earn a comfortable living even in an expensive city like New York. From what I could gather, he's done alright.

“I used to spend an entire day setting up one shot. Now I have to do ten shots a day.”

For a similar kind of fee, he says his clients nowadays demand much more work from him.

During his career, he has seen a dramatic evolution in technology. He is old enough to have started his career in the days of film. I’m also old enough to have learned film photography when I was in high school. We could shoot only up to 36 photos on one roll of film. We would develop film and print photos in the darkroom, and the process would take hours if not overnight.

This process now requires no discernible skill, talent, or special equipment. It takes barely milliseconds, if that. And it produces exponentially more volume. As of 2023, over 85% of all photos are taken on smartphones.

With generative AI, we don’t even need a camera or a smartphone to shoot. We can just… generate.

Soon, my photographer friend could be asked to do a hundred shots, if not more, a day for less money.

Or, he may not even be asked at all. If he’s in a race to the bottom, that is.

Creativity is the only job left

Obviously and admittedly, being an elevator operator and being a creative are drastically different professions.

Just as getting on an elevator without a human operator was once unthinkable, automatically generating sentences and images was unfathomable before generative AI. Now, with just a few keystrokes, we can generate.

Nike’s mission statement reads “If you have a body, you are an athlete.” The same idea could apply to creativity: if you can type, you can be a creative.

Throughout history, technology has democratized creativity. Cameras allowed people to create realistic images without being skilled painters. Vinyl gave DJs new ways to make music. Smartphones made everyone a decent photographer. Squarespace didn't make everyone a web designer, but it did give everyone access to good-enough web design for free.

While technology de-professionalizes creative work, it forces creative professionals to adapt.

David Lee, the Chief Creative Officer of Squarespace and an old friend of mine, made a few points that I found useful in framing how humanity can thrive in an age of machine automation.

1. Designers and creatives, move upstream.

My photographer friend, instead of trying to shoot more shots per day, changed his methodologies as well as outputs. He decided to move upstream and now directs product photography and videography for cosmetics as well as technology hardware companies by leveraging his experience. He’s now incorporating AI into his process.

Move upstream, not downstream.

2. Taste is the new skill

Anything that can be repeated is going to be automated by machines. If we are doing the same task day in and day out, we need to watch out.

The phrase "taste is the new skill" comes from the digital artist Claire Silver: "With the rise of AI, for the first time, the barrier of skill is swept away."

Taste was always the skill creatives needed to have to succeed. As David put it, what’s old is new again.

The aforementioned photographer, with years of experience and training, has better taste and better creative judgment than those without. His taste is the differentiator.

3. Creativity is the only job left

Making the work used to be a big part of creativity. That may be changing slightly. As the barrier to entry for making creative things gets lower, there will be more premium on ideas. Those who can come up with original thoughts will continue to be the ones to thrive in an age of machines.

And that requires a race to the top, not to the bottom.

You can listen to my conversation with David Lee here:

Apple Podcast:
If you like what you read, please subscribe to my weekly newsletter at

Thanks for reading.

– rei ★

Originally published at

Race to the top was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
