The Automation Consideration: Keeping Autonomy and Agency in the Age of AI — PathAble

By Burt Brooks, Co-Founder & CEO, PathAble AI

Automation and robotics have emerged as powerful accessibility tools for people with disabilities, from brain chips to robotic limbs to AI-powered vision aids. The potential is real. But so is the risk. As these tools advance, we have to ask ourselves an honest question: where does the automation end and the agency of the individual begin?

We need to be intentional about how we support people, without quietly eroding their personal contributions in the process.

A New Kind of Machine

For most of history, our relationship with machines was simple: press a button, the machine does a thing. AI has changed that equation. Today's machines can decide when to press their own buttons. Give them a goal, and they'll pursue it until something stops them. That ability to reason, act, and adapt is a genuinely new paradigm, especially in the workplace.

People with disabilities have always been experts at adapting to a world not built for them. AI is still early enough in its development that we have a real opportunity to shape it, to build tools that work for us, not the other way around. Every worker, disabled or not, wants to make a meaningful contribution and be recognized for it. But if you automate that contribution entirely, does it still carry meaning?

Automation vs. Agency

Humans bring something to work that machines can't replicate: lived experience, flexible moral judgment, originality, and the ability to build real relationships. We adapt on the fly. We read the room. We show up for each other in ways no algorithm can fully capture. Strip that away and you haven't created a better workplace; you've created a machine that needs babysitting.

This isn't actually a new challenge. In disability employment services, we already work with the concept of fade-away supports: an employment specialist supports a person with a disability on the job until the individual can work independently, with natural supports in place. The specialist fades. The person remains. That's the model.

We need to apply that same thinking to AI.

A robotic arm that gives someone with limited mobility new ways to interact with their workplace? That's equalization. That's brilliant. But if that same arm performs the work identically whether a human is using it or not, you haven't built an accessibility tool. You've built a replacement.

AI glasses that describe an environment to someone who is blind? Powerful. Those same glasses autonomously operating someone's computer and sending emails on their behalf? You've crossed a line, from support into substitution.

The difference matters. And it's exactly the line we think about at PathAble AI every time we make a product decision.

Building the Jobs of the Future

We keep repeating "AI makes mistakes" because it's our comfort zone. But that window is closing. The real question we need to be asking now is: what are the jobs of the future in a world where AI makes fewer mistakes than humans?

If AI drives safer than us, why have human drivers? If it diagnoses disease more accurately than doctors, why have human doctors? We've been here before. When elevators were first invented, human operators ran them, not because the machines needed help, but because people didn't trust them yet. The moment that trust shifted, the job disappeared almost overnight.

What happens when that kind of trust extends to more of our daily lives?

Without intentionally repositioning our workforce, we risk deepening inequality, and people with disabilities, historically hit hardest by economic disruption, will bear a disproportionate share of that impact. We have a window right now to soften that blow. But only if we start building with intention today.

That means keeping workers with disabilities in the room as these tools are developed. Not as an afterthought. As co-designers. Because they're the first to know when "support" has crossed into "control," and the first to be harmed when it does.

Augment, Don't Replace

At PathAble AI, we build on a straightforward principle: augment, don't replace. Our platform is designed to give job coaches more capacity, not fewer clients, and to give job seekers more independence, not less. Every feature we ship gets evaluated against one question: does this expand what a human can do, or does it quietly take over?

We believe a more accessible future is possible. One where people still feel relevant, included, and genuinely supported. But it won't happen by accident. It will happen because people who understand the stakes helped build it.

We hope you'll join us in shaping it.
