
Do We Still Need Accessible Front-Ends When AI Can See for Us?

November 6, 2025

Browsers like Atlas, Comet, or Dia can now read, summarize, and even interact with web pages. Is front-end accessibility still relevant in 2025?

The question

Browsers are changing. They’re no longer just rendering machines that display what developers build. They are starting to interpret. Some can already scan a page, recognise what each element does, and even click on your behalf. A new kind of browsing is emerging where you might delegate the act of navigating entirely to a machine.

That raises a question that needs asking: do we still need accessible front-ends when AI can see for us?

Accessibility 101

For my non-technical friends, web accessibility simply means designing websites so that everyone can use them. That includes people with visual, auditory, motor, or cognitive impairments. It covers readable text contrast, clear navigation, captions on videos, and support for screen readers or keyboard-only navigation. According to the World Health Organization, roughly 16% of the world's population lives with a significant disability, and many of them rely on these features to access information, work, and communicate online. And yet, fewer than 3% of websites meet basic accessibility standards.

Accessibility in a business context

As a product manager, I've been guilty of delaying accessibility work more times than I'd like to admit. I pushed those tasks to the next sprint, or the one after that, because something else felt more urgent. It's easy to convince yourself you'll circle back later, but 'later' rarely comes. Only after seeing how those decisions affect real people did I understand what those trade-offs really mean, and even then I have continued to delay some of this work.

After years of delaying this work, I can’t help but feel a strange optimism when I think about where the web might be headed. If AI can describe an image more precisely than alt text, infer a button’s purpose from its context, and navigate without a keyboard, it could mean the web is finally evolving toward a world where inclusion is built in, not patched on later.

It’s a tempting story. But it’s also partially wrong.

How AI browsers see

AI browsers don't truly understand; they approximate. They look at the DOM, the layout, and the copy, and they guess what each part means. Most of the time, they'll be right. Sometimes, they'll be confidently wrong.

That's why accessibility still matters. Not only because humans continue to need it, but also because accessibility is the clearest, most reliable way to express intent.

A properly labelled button tells a screen reader what to say. It also tells a machine what the element is for. An aria-label that says “Subscribe to newsletter” isn’t just helpful for someone using assistive tech. It’s a semantic anchor for an AI browser deciding which button to click.
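
To make that concrete, here is a minimal sketch of such a semantic anchor in a React component. The component and handler names are hypothetical, not from any real codebase:

    import React from "react";

    // An icon-only button: without the aria-label, neither a screen reader
    // nor an AI browser has anything to go on but a glyph.
    function SubscribeButton({ onSubscribe }: { onSubscribe: () => void }) {
      return (
        <button type="button" aria-label="Subscribe to newsletter" onClick={onSubscribe}>
          ✉
        </button>
      );
    }

One attribute serves both audiences: assistive technology reads it aloud, and an agent parsing the page can match it against the user's intent.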

In this sense, accessibility has always been about structure and meaning. We used to think of those as human concerns. Now, they’re machine concerns too.

Two layers of accessibility

The web is quietly becoming two layers deep: one for humans and one for agents. The human layer is what we see and touch: the colours, motion, typography, and layout. The agent layer is invisible but equally real. It’s the semantic structure, intent metadata, and predictable patterns that intelligent browsers rely on to make sense of it all.

When we design clearly for humans, we make it easier for machines to navigate. And when we build clear semantics for machines, we make it easier for humans who depend on assistive technologies.

The two goals aren’t in conflict. They’ve always been aligned. We just didn’t have to think about it much until now.

And here’s the irony. The biggest push for accessibility may not come from accessibility advocates at all. It will come from companies chasing visibility.

The future?

When a growing share of traffic flows through AI-native browsers that summarise content, answer queries, or even shop autonomously, businesses will have no choice but to optimise for them. They’ll make their sites easier for machines to parse, label actions more clearly, and clean up messy interfaces. Not because it’s the right thing to do, but because it’s the profitable thing to do.

At the same time, these AI agents and browsers will continue to evolve. Their builders have just as much incentive to make sense of the messy, inconsistent web as the companies trying to be found on it. They'll get better at detecting intent, inferring meaning, and correcting human design mistakes. Both sides, businesses and browser makers, will push in the same direction: a web that's more structured, more semantic, and more understandable.

That shared effort, even if driven by commercial incentives, will end up benefiting everyone.

Because when everyone builds for intelligent browsers or for agents that rely on structure and clarity, and when those browsers become smarter at understanding intent, we get a web that’s finally easier for people who’ve always needed those same features.

The web might become more inclusive not because of empathy or regulation, but because incentives on both sides finally align.

A slightly cynical path, but a genuinely hopeful outcome.

I'm trying to build good articles about AI with AI. Find the prompt here

AI is Slow and Inaccurate Like Humans.
Here's How to Design for Them

February 20, 2025

In the age of AI and large language models (LLMs), one of the most profound realizations for designers and developers is this: AI is slow and inaccurate—just like us. This insight reshapes how we think about building products that integrate AI, particularly in B2B settings. To build intuitive, effective systems, we must design for AI much like we design for human interactions. But to understand why this approach is effective, let’s first explore the two central challenges of LLMs: their slowness and their inaccuracy.

Slowness

Comparing AI to Traditional APIs

One of the less discussed properties of LLMs is their latency. For a typical query, it can take several seconds for an LLM to process and respond; generating a thoughtful paragraph of text can easily take more than 5 seconds. Compare this to traditional APIs, where tasks like fetching structured data or performing computations are measured in milliseconds, often fewer than 100ms.
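
To make the gap concrete, here is a rough measurement sketch in TypeScript. Both URLs are hypothetical placeholders, and the actual numbers vary widely by model and provider:

    // Time any HTTP call. Both endpoints below are hypothetical placeholders.
    async function timeRequest(label: string, url: string, init?: RequestInit): Promise<void> {
      const start = performance.now();
      await fetch(url, init);
      console.log(`${label}: ${Math.round(performance.now() - start)}ms`);
    }

    // A structured-data endpoint typically answers in well under 100ms...
    await timeRequest("REST API", "https://api.example.com/users/42");

    // ...while a full LLM completion routinely takes several seconds.
    await timeRequest("LLM completion", "https://llm.example.com/v1/completions", {
      method: "POST",
      body: JSON.stringify({ prompt: "Write a thoughtful paragraph." }),
    });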

The expectation among users today, especially developers, is lightning-fast responses. APIs and systems have conditioned us to demand immediate feedback loops. Waiting even a second can feel like an eternity in software interactions. Yet, with LLMs, developers must rewire expectations: the inherent complexity of processing language, reasoning, and generating contextually rich responses requires time.

Inaccuracy

AI’s Hallucinations and Human-Like Errors

Accuracy is another area where AI challenges our traditional paradigms. Conventional APIs are deterministic and reliable. If you query an API for user data, you expect the data to be exact every time. LLMs, however, are probabilistic models that generate outputs based on training data, leading to hallucinations, incorrect facts, and unpredictable results.

It's now common to see examples on the web of LLMs confidently getting facts wrong.

Getting facts wrong is not a unique property of LLMs.

For example:

  • 63% of people are subject to the Mandela Effect, where a large group of people collectively misremembers a fact or event. You're one of them if you think the Monopoly man wears a monocle.
  • 65% of people believe that we only use 10% of our brain. Spoiler: it's wrong.
  • 7% of Americans still believe that the Earth is flat.

We've built systems to correct for our propensity to make mistakes and made them first-class citizens in all the software we use. It is now time to leverage them with AI. Think of tools like:

Collaborative documents: These allow editing and reviewing cycles because humans make mistakes.

Version control systems: They track changes to correct errors when needed.

Feedback loops: Features like commenting, tagging, and assigning tasks rely on collaboration to resolve inaccuracies.

Similarly, LLMs require systems that don’t assume perfection but instead offer mechanisms for error correction, review, and iteration.
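
Sketched in TypeScript, that principle might look like the following: model output enters the product as a draft with a review step and a revision history, never as a final answer. The llm function here is a hypothetical stand-in for any model call:

    // A draft-and-review wrapper around a hypothetical LLM call.
    // The output is never final: it enters the same review cycle
    // we already use for human-written content.
    type Draft = {
      content: string;
      status: "pending_review" | "approved" | "rejected";
      revisions: string[];
    };

    async function generateDraft(prompt: string, llm: (p: string) => Promise<string>): Promise<Draft> {
      const content = await llm(prompt);
      return { content, status: "pending_review", revisions: [] };
    }

    function review(draft: Draft, approved: boolean, editedContent?: string): Draft {
      return {
        content: editedContent ?? draft.content,
        status: approved ? "approved" : "rejected",
        revisions: [...draft.revisions, draft.content], // keep history, like version control
      };
    }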

This raises a fundamental question for product teams: how do you design around this slowness and inaccuracy? The answer lies in looking at systems that already accommodate slow, asynchronous feedback, which brings us back to the most familiar example: humans.

Designing for AI Like You Design for Humans

Since AI is slow and inaccurate like humans, the logical conclusion is to design AI systems using the same principles we use for human interactions. This involves integrating patterns of collaboration, review, and asynchronous workflows into your products. Below, we'll explore these principles and support them with real-world examples.

Embrace Asynchronous Workflows

Principle: Just as humans send emails or follow up on tasks asynchronously, LLMs work best when given time to “think.”

Many successful AI-powered tools embrace asynchronous workflows to make collaboration more efficient. For example, Read.ai, Gong, and Jiminny generate meeting summaries and follow-up action items after discussions, allowing teams to focus on conversations instead of taking notes in real-time. This mirrors how humans work together in teams—delegating tasks, summarizing key points, and ensuring that insights are shared asynchronously. Instead of forcing users to wait for immediate AI-generated results, these tools integrate AI in a way that aligns with natural work rhythms, making them more effective and less intrusive.
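
In code, embracing that rhythm can be as simple as queuing the slow AI work and delivering the result later instead of blocking the UI. A minimal sketch, where the queue, summarize, and notify functions are hypothetical stand-ins for real infrastructure:

    // Fire-and-forget meeting summary: enqueue the slow LLM job,
    // then deliver the result asynchronously, like a colleague
    // following up after the meeting.
    interface Job { meetingId: string; transcript: string }

    async function onMeetingEnded(job: Job, queue: { enqueue: (j: Job) => Promise<void> }) {
      await queue.enqueue(job); // returns immediately; nobody waits on the LLM
    }

    // Worker process, running separately:
    async function worker(job: Job, summarize: (t: string) => Promise<string>,
                          notify: (msg: string) => Promise<void>) {
      const summary = await summarize(job.transcript); // may take many seconds
      await notify(`Summary for meeting ${job.meetingId}:\n${summary}`);
    }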

Stream Responses to Reduce Perceived Latency

Principle: Humans often communicate by delivering partial answers while formulating their thoughts. AI systems can mimic this by streaming responses, making interactions feel faster and more dynamic.

Streaming responses is one of the most effective ways to reduce the perceived slowness of LLMs. ChatGPT and Claude use incremental streaming to deliver answers as they are being generated, preventing users from waiting in silence. This approach makes interactions feel fluid and conversational, much like a human gradually explaining their thoughts. Similarly, AI-powered development tools like Bolt.new and Replit stream real-time coding suggestions, enabling developers to see potential improvements as they type. This experience mimics pair programming, where another developer suggests refinements on the fly rather than waiting for a complete code review.

By embracing streaming, AI interactions become smoother, keeping users engaged without feeling the underlying computational delay.
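
Here is a minimal streaming sketch using the Fetch API's ReadableStream; the endpoint is a hypothetical one that streams plain-text tokens:

    // Deliver tokens to the UI as they arrive instead of waiting for the full answer.
    async function streamCompletion(prompt: string, onToken: (t: string) => void): Promise<void> {
      const res = await fetch("https://llm.example.com/v1/stream", {
        method: "POST",
        body: JSON.stringify({ prompt }),
      });
      if (!res.body) throw new Error("No response body to stream");

      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        onToken(decoder.decode(value, { stream: true })); // render each chunk immediately
      }
    }

    // Usage: append each chunk to the page as it arrives.
    await streamCompletion("Explain streaming", (t) => console.log(t));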

These two solutions (asynchronous workflows and streaming) provide effective ways to manage the intrinsic latency and slowness of LLMs while keeping users focused and avoiding frustration.

Facilitate Iteration and Review

Principle: LLM outputs often need refinement, just as human-generated ideas are subject to brainstorming, feedback, and revisions. The most successful Generative AI products embrace this paradigm.

Generative AI tools work best when they encourage iteration rather than providing a single, fixed answer. ChatGPT and Midjourney allow users to generate multiple versions of text or images, similar to how teams brainstorm multiple ideas before settling on the best one. AI-assisted writing tools like Google Docs’ Smart Compose and Figma’s AI-powered design suggestions integrate human feedback into the revision process, ensuring that AI-generated content aligns with user expectations.

In coding environments, GitHub Copilot continuously suggests alternative code implementations, allowing developers to refine their approach before committing changes. By enabling iteration and feedback loops, AI becomes a creative collaborator rather than a rigid, one-shot answer provider.
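
One way to design for that iteration, sketched with a hypothetical llm function: request several candidates in parallel and let the person choose, exactly like collecting ideas in a brainstorm.

    // Ask for n independent candidates so the user can compare and pick,
    // instead of being handed a single take-it-or-leave-it answer.
    async function generateVariants(
      prompt: string,
      n: number,
      llm: (p: string) => Promise<string> // hypothetical model call
    ): Promise<string[]> {
      return Promise.all(Array.from({ length: n }, () => llm(prompt)));
    }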

Integrate AI as a Team Member

Principle: AI should function as an active participant in workflows, much like a team member who contributes insights, suggests actions, and enhances decision-making without needing explicit instructions. By embedding AI into existing collaboration processes, teams can benefit from continuous support, contextual recommendations, and intelligent automation that amplifies human efforts.

AI-powered tools are increasingly designed to integrate seamlessly into team workflows, acting as proactive contributors rather than passive assistants. Harvey (Legal AI) supports legal teams by identifying inconsistencies in contracts and suggesting precise edits, much like a human proofreader reviewing documents for accuracy. By integrating AI as a dynamic team member, businesses can ensure that critical opportunities are surfaced in real time, reducing manual oversight while improving efficiency and decision-making.

Make those AIs proactive, like your best colleagues

Principle: Instead of waiting for users to request help, proactive AI offers suggestions or flags inconsistencies, much like a team member anticipating needs.

Proactive AI shifts the paradigm from users asking AI for help to AI offering suggestions before users even realize they need them. AI-powered CRMs like Salesforce and HubSpot analyze customer interactions and proactively recommend follow-ups or content adjustments to improve engagement.

The Holistic Approach: Learn from Human-Centered Design

By aligning AI design with human-centered design principles, we embrace the strengths and weaknesses of both. Like humans, AI thrives in systems built for collaboration, error correction, and flexibility. By understanding that slowness and inaccuracy are not limitations but characteristics, we can build products that seamlessly integrate AI into existing workflows.

At Prismic, we are envisioning new ways to embed AI into collaboration, leveraging these principles to shape a powerful and cohesive vision.

Transition to Prismic’s Vision

These examples illustrate how the most successful AI systems today adopt patterns inspired by human collaboration while addressing the inherent constraints of AI. Whether it’s ChatGPT optimizing for latency or Read.ai enabling asynchronous task management, these tools thrive because they embrace principles we’ve long used to interact with humans.

In the case of Prismic, we are building seamless collaboration between humans and AI to go beyond automation.

We're building a future where AI becomes an integral part of team collaboration within the tool. This vision is built around the concept of augmented collaboration, where AI enhances and streamlines interactions between team members to drive creativity and efficiency.

Imagine a workflow where the AI actively monitors and enhances the comments left by your team, much like the screenshot above. When a teammate leaves feedback or a question, Prismic’s AI doesn’t just sit idle—it jumps in to:

  • Rephrase unclear comments to make them sharper and more actionable.
  • Suggest alternative ways to approach the task or refine ideas.
  • Offer variations or enhancements to align the output with your team’s goals.

For example, in the mockup, the AI suggests possible solutions, allowing teammates to accelerate their workflow.

While already very interesting, this remains reactive, and we think the AI should proactively suggest improvements that make your website more relevant and relatable for your end visitors.

For example, it could detect that “Mach Alliance” is gaining traction and recommend repurposing existing blog posts to target this keyword, or notice that 20% of visitors to a landing page are Spanish-speaking and proactively suggest translating the page to increase conversions.

Beyond content optimization, this proactive intelligence could extend to broader workflows, embedding AI into key operational processes. AI should interact with backlog management by tracking tasks, flagging blockers, and raising issues autonomously. It could provide asynchronous updates on progress and priorities through Slack or other collaboration tools, ensuring teams stay aligned without constant check-ins.

By transitioning from reactive AI to proactive AI, Prismic envisions a future where AI is not just a tool but an active force in driving efficiency, strategy, and team productivity.

Final Thoughts

Designing for the age of AI requires reframing expectations. AI is not just another API or tool; it’s a new kind of collaborator—one that shares many traits with humans. By designing for AI like we design for human interaction, we create systems that are intuitive, resilient, and future-proof.

Startups and product teams should embrace this paradigm shift, recognizing that the best designs don't fight against the nature of AI and humans but harness it: they let both parties collaborate and make it easy for humans to adopt these solutions, creating exceptional user experiences.

Thanks to Todd Hamilton for the Figma template used to create the article image.

I'm trying to build good articles about AI with AI. Find the prompt here

Running during lockdown. 1KM maximum

August 23, 2023

I tried to create a running app to convince runners that they should not go out for a run. That sounds like a nice challenge, no?

During the COVID-19 lockdown, I worked on a running app. My goal was to teach people about the spread of COVID and, if possible, discourage them from going out for a run. Sadly, I had a complicated lockdown that kept me way too busy to finish developing the app. I wrote about it anyway, to tell the story of this cancelled app. Enjoy the read and let me know what you think.

Like most countries around the world, France has now been in lockdown for more than 3 weeks. Policies differ between countries, but our government chose to create a special rule for people who want to exercise. In France, you are permitted to do a short run (60 minutes) every day, with one simple restriction: you must stay within a radius of 1km of your home.

Updated April 8th: In Paris, you can now go out for a run only before 9 AM and after 7 PM.

I'm seeing a lot of people going out for this daily exercise every day. It's almost like people have discovered a new passion for running in the past 4 weeks.

I have a lot of friends who are fighting COVID-19 directly. Every day I can see how annoyed, tired, and afraid they are when they see those people going out and possibly spreading the disease. For them, those runners became a symbol of the lack of understanding of what this disease is. They were basically seeing people running straight into an intensive-care bed.

Idea

Last Sunday, I was spending my day like most people, scrolling through my Facebook feed, and after yet another report about those runners, I decided to do something about it. My thinking was: "Can I provide enough value to those runners for them to download an app that will afterward teach them about the impact of COVID-19 in France?"

A simple app that looked like an old-school radar, notifying them when they were leaving the 1km radius, could provide enough value. I thought I had found a simple concept that could spread pretty quickly.

This application would be my trojan horse to educate runners about the disease and limit its spread as much as possible.

I spent the day looking into React Native, geolocation, and maps, and ended up with a proof of concept.

By following the documentation of an excellent plugin and a tutorial I found on Medium, I managed to display a map, get my phone's current position, and draw a line representing the route the phone was following.
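
Reconstructed from memory, the proof of concept boiled down to something like this sketch, using react-native-maps and the community geolocation module (exact props and options may differ by version):

    import React, { useEffect, useState } from "react";
    import MapView, { Polyline } from "react-native-maps";
    import Geolocation from "@react-native-community/geolocation";

    type Point = { latitude: number; longitude: number };

    // Watches the phone's position and draws the route as a polyline.
    export function RunMap() {
      const [route, setRoute] = useState<Point[]>([]);

      useEffect(() => {
        const watchId = Geolocation.watchPosition(
          ({ coords }) =>
            setRoute((r) => [...r, { latitude: coords.latitude, longitude: coords.longitude }]),
          (err) => console.warn(err),
          { enableHighAccuracy: true, distanceFilter: 10 } // update roughly every 10m
        );
        return () => Geolocation.clearWatch(watchId);
      }, []);

      return (
        <MapView style={{ flex: 1 }} showsUserLocation>
          <Polyline coordinates={route} strokeWidth={4} />
        </MapView>
      );
    }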

Design

The main idea was to have a clear and straightforward representation of the concept: display a circle of 1km around the runner's starting point. The user's first action would be to launch the run, and the phone's GPS would then indicate the distance from the starting point.

The design would be as simple as possible, displaying the distance from the starting point, a map of the run, and a simple color indicator. The colors help you understand how close you are to the maximum distance you're allowed to run.

I also thought about using a sound indicator to give runners an idea of how close they were to the limit allowed for their run.

Since I wanted to release and test the app as fast as possible, I chose a simple design mostly based on map colors taken from Snazzy Maps and worked around it. The concept was simple: green, you're good; orange, you're close to the 1km limit; red, you're at the border.
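
Under the hood, the whole indicator reduces to one distance computation. Here is a sketch: the haversine distance from the starting point, mapped to a color (the exact thresholds below are illustrative guesses):

    type Point = { latitude: number; longitude: number };

    // Great-circle distance in metres between two GPS points (haversine formula).
    function distanceMeters(a: Point, b: Point): number {
      const R = 6371000; // Earth radius in metres
      const toRad = (deg: number) => (deg * Math.PI) / 180;
      const dLat = toRad(b.latitude - a.latitude);
      const dLon = toRad(b.longitude - a.longitude);
      const h =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(a.latitude)) * Math.cos(toRad(b.latitude)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(h));
    }

    // Green: well inside the radius. Orange: getting close. Red: at the 1km border.
    function zoneColor(start: Point, current: Point): "green" | "orange" | "red" {
      const d = distanceMeters(start, current);
      if (d < 800) return "green";
      if (d < 950) return "orange";
      return "red";
    }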

Using those colors in the UI buttons and text indicators was the simplest way to put some emphasis on them.

And that was it.

Code

I started developing quickly and realized how bad I was at it. I needed more than two weeks of copy-pasting code from tutorials and examples from the react-native-maps library to get something close to what I wanted.

I managed to have my map displayed, my marker animated, and my route drawn. I had a hard time implementing the color change along the route, and it's still not working at the moment.

I also sadly realized that testing a running app relying on geolocation while stuck in my small apartment was pretty complicated. I was close to having something final but stopped developing as the end of the lockdown approached. I still managed to get something almost working, without having the time to implement the last screen. That one was the most important: the screen meant to teach people about the virus and, if possible, raise some money.

Teach

The end goal of this app was to arrive at the last step: being able to provide value to runners in order to teach them about the impact of COVID on the French population. The goal was to encourage people to stay home and explain the risk they took by going out.

For that, different ideas came up. The dumbest but most impactful one was to share, at the end of the run, the number of people who had died from COVID during the run. Discussing it with my friend Hugo, we realized that it was too aggressive, and he recommended sharing something more related to the spread of the virus. The number of people you could have given the virus to during your run seemed like a good idea, and I chose to design that.
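
I never settled on the exact formula, but the idea was a back-of-the-envelope estimate along these lines; every constant below is an illustrative placeholder, not real epidemiology:

    // Illustrative only: a toy estimate of how many people a contagious
    // runner might expose during a run. Both constants are made-up placeholders.
    function potentialExposures(distanceKm: number): number {
      const peoplePassedPerKm = 15;           // placeholder: crowding on city streets
      const transmissionChancePerPass = 0.05; // placeholder probability
      return Math.round(distanceKm * peoplePassedPerKm * transmissionChancePerPass);
    }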

I also wanted to use this screen to try to raise money for Parisian hospitals, simply by adding a button that would trigger a donation to the organization.

It seemed pretty easy to implement with an SMS React Native plugin: a simple button that would trigger an SMS giving 5€ to the APHP. I might use this knowledge later.
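
Even without a dedicated plugin, React Native's built-in Linking module can open a pre-filled SMS, which is roughly all the button needed. The short code and keyword below are placeholders, not the APHP's real donation number, and note that the sms: body separator differs between platforms:

    import React from "react";
    import { Button, Linking } from "react-native";

    // Opens the SMS app pre-filled with a donation message.
    // "00000" and "DON5" are placeholders for the charity's real short code and keyword.
    export function DonateButton() {
      return (
        <Button
          title="Give 5€ to APHP"
          onPress={() => Linking.openURL("sms:00000?body=DON5")} // iOS historically uses & instead of ?
        />
      );
    }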

To finish

I had a fun time trying to build this application and learned a lot in the process. I'm now more motivated than ever to develop more concepts and write about them. I hope the read was interesting and might encourage some of you to make useful things.