AI Broke Your Job Title ‘Moat.’ What Happens Now?

Designers and engineers have spent years specializing in work that AI now does in minutes. It’s time to redefine human value and potential.
Everyone seems to be asking the wrong question.
Across every team I’ve worked with (design, engineering, operations, content), I hear the same thing: “How do I integrate AI into my workflow?” Designers are trying to figure out how AI fits into Figma. Engineers are exploring AI-assisted code generation. Content writers are testing AI drafting tools. Ops teams are automating their existing processes with AI wrappers.
They’re all optimizing within their lanes. And that’s precisely the problem.
What we’ll cover
You are not your job title
We’re living amid a seismic transformation
I’m a designer who vibe-coded a prototype
Domain expertise isn’t the new moat
What problems can you solve?
You are not your job title
Professional identity has never been about innate ability. Every discipline or specialized role was built on two moats:
- The tool moat: the specialized software and platforms that took hundreds of hours to master, an investment most people had neither the time nor the inclination to make.
- The knowledge moat: the frameworks, principles, and domain theory that took years of study, practice, and pattern recognition to internalize.
Together, these moats created the specialization that justified the title.
Designers haven’t just operated in Figma. They’ve spent years with interaction design patterns, principles, and a deep understanding of human behavior. Engineers weren’t just fluent in Python or JavaScript; they carried knowledge of software architecture, system reliability practices, and the hard-earned intuition for what scales and what breaks.
Ops professionals understood process frameworks, change management theory, and organizational dynamics. Product managers internalized market analysis methods, prioritization models, and stakeholder management strategies. Copywriters understood narrative structure, persuasion psychology, and the craft of making complex ideas land simply.
But here’s what we’ve forgotten: In every discipline, the people who stood out weren’t necessarily the most fluent with the tools. They were the ones who used the tools to think.
We’re living amid a seismic transformation
Now, AI is dismantling both moats at once, and the reality underneath is becoming impossible to ignore.
The tool moat is obvious: when anyone can generate functional code, create polished visuals, draft compelling copy, or automate workflows, tool mastery alone stops being a differentiator. But the knowledge moat is falling, too.
With AI, you don’t have to develop domain expertise yourself. A designer generating code can also get software architecture guidance, best practice recommendations, and performance optimization strategies drawn from the entire body of engineering knowledge. Similarly, an engineer creating an interface can get interaction pattern suggestions, accessibility considerations, and information hierarchy principles informed by decades of design theory.
Gstack, for example, is an open-source system that turns a single developer’s environment into a virtual engineering team, with distinct AI agents playing the roles of product strategist, architecture lead, quality engineer, design systems reviewer, and deployment engineer. These agents offer structured expert perspectives, working in sequence across an entire sprint.
Someone who has never managed a release process now has a deployment engineer in the room. Someone who’s never run a design audit now has a design systems reviewer on call. The knowledge that used to live behind years of specialization is now found in an AI collaborator.
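The sequential-roles pattern described above can be sketched in a few lines. The role prompts and the `ask_model` stub below are illustrative assumptions, not Gstack’s actual API; a real system would route each prompt to an LLM.

```python
# Minimal sketch of a sequential multi-agent review pipeline.
# The roles mirror the ones described above; ask_model is a stand-in
# for a real model call (hypothetical, not Gstack's actual API).

ROLES = [
    ("product strategist", "Assess whether this change serves a real user need."),
    ("architecture lead", "Review the technical approach and its trade-offs."),
    ("quality engineer", "List the test cases this change requires."),
    ("design systems reviewer", "Check consistency with existing UI patterns."),
    ("deployment engineer", "Outline rollout and rollback steps."),
]

def ask_model(role: str, instruction: str, context: str) -> str:
    # Stand-in for an LLM call; a real implementation would send
    # the instruction plus context to a model and return its reply.
    return f"[{role}] reviewed: {context[:40]}"

def run_pipeline(artifact: str) -> list[str]:
    notes: list[str] = []
    for role, instruction in ROLES:
        # Each agent sees the artifact plus the notes accumulated so far,
        # so later roles build on earlier perspectives.
        context = artifact + "\n" + "\n".join(notes)
        notes.append(ask_model(role, instruction, context))
    return notes

feedback = run_pipeline("Add natural-language search to the icon library")
```

The key design choice is the shared, accumulating context: the deployment engineer agent reads everything the strategist and architect produced, which is what makes the sequence feel like a team rather than five isolated prompts.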
I’m a designer who vibe-coded a prototype
At Salesforce, our design system team manages a library of over 1,700 icons. Internal designers and developers struggled to find icons through search: they’d assume the library didn’t have what they needed, contribute new icons with Salesforce-unique names, and overload the library with redundancies. Even the synonym metadata didn’t help.
We’d spend hours de-duplicating through the same sub-optimal manual process, one riddled with opportunities for human error. We needed a better solution.
Proof of concept
Around July 2025, I started vibe coding a proof of concept that used a vision model called SigLIP2 to enable natural language and visual similarity search across the icon library. A user could type “arrow pointing up and to the right” or upload an image, and the model would return ranked matches with confidence scores. No more guessing icon names.
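The core of that search is straightforward: embed every icon once, embed the query, and rank by cosine similarity. The sketch below shows only that ranking step; the random vectors and icon names are stand-ins for real SigLIP2 embeddings so the logic runs on its own.

```python
import numpy as np

# Sketch of embedding-based icon search. In the real tool, a vision-language
# model (SigLIP2) produces these vectors from icon images and query text;
# here random unit vectors stand in so the ranking logic is self-contained.
rng = np.random.default_rng(0)
icon_names = ["arrow-up-right", "arrow-down", "check", "trash"]  # illustrative
icon_embeddings = rng.normal(size=(len(icon_names), 8))
icon_embeddings /= np.linalg.norm(icon_embeddings, axis=1, keepdims=True)

def search(query_embedding: np.ndarray, top_k: int = 3):
    """Rank icons by cosine similarity to the query embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = icon_embeddings @ q  # dot product of unit vectors = cosine similarity
    order = np.argsort(scores)[::-1][:top_k]
    return [(icon_names[i], float(scores[i])) for i in order]

# A real query embedding would come from encoding the text
# "arrow pointing up and to the right" with the same model;
# here we perturb a known icon's vector to simulate a close match.
results = search(icon_embeddings[0] + 0.05 * rng.normal(size=8))
```

Because image and text land in the same embedding space, the identical `search` function handles both “type a description” and “upload an image” queries; the confidence scores users see are just these similarity values.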
In the before times, this project wouldn’t have been mine to build. It would’ve meant writing a proposal, getting engineering buy-in, waiting for a product manager to scope and prioritize it against competing roadmap items, then waiting again for sprint bandwidth before depending entirely on a different function to bring it to life. As a designer, I would have filed the ticket and hoped it made the priority cut.
Instead, I was able to begin with a local prototype. While it was rough and limited, it worked well enough to show the idea had merit.
Over the following months, as AI models improved, I could make architectural improvements and performance optimizations that would have been beyond my reach just months earlier. I wasn’t becoming an engineer. I was a problem-solver with access to increasingly powerful domain knowledge on demand.
The app is live
Today, this app is live for the internal team and includes a Figma plugin and an admin dashboard for maintaining the icon database. The only engineering support I needed was guidance on performance tuning and help setting up a secure hosting environment.
Iteration by iteration, I figured out the architecture, model integration, UI, and deployment pipeline. There were mistakes and dead ends, bad architectural decisions I had to unwind, and things I understood only after getting them wrong. But that’s the point: the feedback loop between building and learning was mine to own, not gated by a specialist’s schedule.
I didn’t need to become an engineer. I needed to clearly define the problem, stay persistent, and use AI to access the tools and knowledge that used to live behind someone else’s job title.
Domain expertise isn’t the new moat
You may be thinking that domain expertise still matters and that AI-assisted “dabbling” doesn’t replace a specialist. That may be true, for now.
But think about this as two curves on a graph. One line represents the depth of domain expertise required from humans to accomplish a given task. The other represents the domain capability of AI models. These curves are moving toward each other and intersecting, not all at once, but domain by domain, task by task.
When I started building my tool, AI-generated code was solid for prototypes. Eight months later, AI models can generate code that’s production-ready.
There’s no question that AI will match human domain expertise. It’s just a matter of when.
What problems can you solve?
How AI fits into an existing role isn’t the right question. But even “what problems can I now solve?” might not go far enough. AI is getting better every day at connecting dots across domains, synthesizing information, and managing complex workflows, too.
The people who are thriving aren’t just cross-functional thinkers. They’re orchestrators, directing multiple AI agents and human collaborators toward outcomes no single person or system could reach alone. And that pattern keeps shifting upward.
Which raises a bigger question. For centuries, we’ve defined human value through a narrow lens of productive output: what a person can make, what they know, what they can execute. Entire economies and education systems were built around that framing. But if AI can increasingly handle the execution, the knowledge work, and even the synthesis, what does that free the human brain to actually do?
Maybe we’ve been underestimating ourselves for a very long time. Not because people weren’t capable, but because the structures we built never asked anyone to be more than a specialist.
I don’t have a neat answer for what comes next. But I’m fairly sure it starts with letting go of the titles, the moats, and the assumption that a person’s value was ever defined by what they could produce within a single lane. Whatever the human brain is actually capable of, we might be about to find out.
What’s your superpower in the agentic AI era?
We want your perspective on the changes shaping the agentic era. How is your work changing? How is your role evolving? What tools and experiences will amplify your superpowers? Join our research program to help shape the future of enterprise work.