Coding agents suck at frontend because translating intent (from UI → prompt → code → UI) is lossy.
For example, if you want to make a UI change:
How the coding agent processes this:
Search is a pretty random process since language models have non-deterministic outputs. Depending on the search strategy, these trajectories range from instant (if lucky) to very long. Unfortunately, this means added latency, higher cost, and worse performance.
Today, there are two solutions to this problem:
Improving the agent involves a lot of unsolved research problems: it means training better models (see Instant Grep, SWE-grep).
Ultimately, reducing the amount of translation steps required makes the process faster and more accurate (this scales with codebase size).
But what if there was a different way?
In my ad-hoc tests, I noticed that referencing the file path (e.g. path/to/component.tsx) or something to grep (e.g. className="flex flex-col gap-5 text-shimmer") made the coding agent much faster at finding what I was referencing. In short - there are shortcuts to reduce the number of steps needed to search!
Turns out, React.js exposes the source location for elements on the page.1 React Grab walks up the component tree from the element you clicked, collects each component's name and source location (file path + line number), and formats that into a readable stack.
It looks something like this:
<span>React Grab</span> in StreamDemo at components/stream-demo.tsx:42:11
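To make the walk concrete, here's a minimal sketch of the idea, not React Grab's actual implementation. In development builds, React attaches debug info to each fiber node; the field names used below (`_debugSource`, `return`, `type`) are React internals that vary by version, and the fiber chain here is a hand-built mock standing in for a real tree (real code would start from the DOM node's `__reactFiber$*` key):

```typescript
// Shape of React's dev-only source annotation (internal, version-dependent).
type DebugSource = { fileName: string; lineNumber: number; columnNumber: number };

type FiberLike = {
  type: { name: string } | string | null; // function component, host tag like "span", or null
  _debugSource?: DebugSource;
  return: FiberLike | null; // pointer to the parent fiber
};

// Walk parent pointers from the clicked element's fiber, collecting
// named components with source locations into a readable stack.
function formatStack(fiber: FiberLike | null): string[] {
  const frames: string[] = [];
  for (let f = fiber; f; f = f.return) {
    if (typeof f.type === "object" && f.type?.name && f._debugSource) {
      const { fileName, lineNumber, columnNumber } = f._debugSource;
      frames.push(`${f.type.name} at ${fileName}:${lineNumber}:${columnNumber}`);
    }
  }
  return frames;
}

// Mock fiber chain: a <span> host element whose parent is a component.
const clicked: FiberLike = {
  type: "span",
  return: {
    type: { name: "StreamDemo" },
    _debugSource: { fileName: "components/stream-demo.tsx", lineNumber: 42, columnNumber: 11 },
    return: null,
  },
};

console.log(formatStack(clicked));
// → ["StreamDemo at components/stream-demo.tsx:42:11"]
```

Host elements (plain tags like `span`) are skipped; only named components with a source location become frames, which is why the output reads like a component stack rather than a DOM path.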
When I passed this to Cursor, it instantly found the file and made the change in a couple of seconds. Trying a couple of other cases gave the same result.
I used the shadcn/ui dashboard as the test codebase. This is a Next.js application with auth, data tables, charts, and form components.
The benchmark consists of 20 test cases designed to cover a wide range of UI element retrieval scenarios. Each test represents a real-world task that developers commonly perform when working with coding agents.
Each test ran twice: once with React Grab enabled (treatment), once without (control). Both conditions used identical codebases and Claude 4.5 Sonnet (in Claude Code).2
<a class="ml-auto inline-block text-..." href="#"> Forgot your password? </a> in LoginForm at components/login-form.tsx:46:19
Without React Grab, the agent must search through the codebase to find the right component. Since language models predict tokens non-deterministically, this search process varies dramatically - sometimes finding the target instantly, other times requiring multiple attempts. This unpredictability adds latency, increases token consumption, and degrades overall performance.
With React Grab, the search phase is eliminated entirely. The component stack with exact file paths and line numbers is embedded directly in the DOM. The agent can jump straight to the correct file and locate what it needs in O(1) time.
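The stacks shown earlier end in a `path:line:column` suffix, which is what lets a tool jump straight to the code. Here's a small parser sketch for that convention (my assumption about the format, based on the example stacks above, not a documented React Grab API):

```typescript
// Pull the trailing "path:line:column" location out of a stack frame string,
// e.g. "LoginForm at components/login-form.tsx:46:19".
function parseLocation(
  frame: string,
): { file: string; line: number; column: number } | null {
  const m = frame.match(/(\S+):(\d+):(\d+)\s*$/);
  if (!m) return null; // frame carries no source location (e.g. production build)
  return { file: m[1], line: Number(m[2]), column: Number(m[3]) };
}

console.log(parseLocation("LoginForm at components/login-form.tsx:46:19"));
// → { file: "components/login-form.tsx", line: 46, column: 19 }
```

An editor or agent integration could feed the parsed `file` and `line` straight into a "go to location" command instead of running a search.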
…and turns out, Claude Code becomes ~3× faster with React Grab!3
Distribution of edit times across 20 UI tasks. React Grab eliminates the search phase by providing exact file paths and line numbers, letting the agent jump straight to the code.
The table below gives a detailed breakdown of the latest results from all 20 test cases, comparing performance metrics (time, tool calls, tokens) between the control and treatment groups, with speedup percentages showing how much faster React Grab made the agent on each task.
To run the benchmark yourself, check out the benchmarks directory on GitHub.
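For reference, the headline ~3× figure is a median, not a mean (see footnote 3), which keeps a few outlier tasks from dominating. A sketch of that computation, with made-up rows standing in for the real benchmark data (the actual harness in the repo may record different fields):

```typescript
// Hypothetical shape for one benchmark row.
type Result = { task: string; controlSec: number; treatmentSec: number };

// Median of per-task speedups (control time / treatment time).
// Median resists outliers like the multi-file tasks where search
// wasn't the bottleneck.
function medianSpeedup(rows: Result[]): number {
  const speedups = rows
    .map((r) => r.controlSec / r.treatmentSec)
    .sort((a, b) => a - b);
  const mid = Math.floor(speedups.length / 2);
  return speedups.length % 2
    ? speedups[mid]
    : (speedups[mid - 1] + speedups[mid]) / 2;
}

console.log(
  medianSpeedup([
    { task: "button label",  controlSec: 45, treatmentSec: 15 }, // 3.0×
    { task: "spacing tweak", controlSec: 60, treatmentSec: 20 }, // 3.0×
    { task: "multi-file",    controlSec: 90, treatmentSec: 80 }, // ~1.1×
  ]),
); // → 3
```

The numbers above are illustrative only; the real per-task timings live in the benchmarks directory.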
The best use case I've seen for React Grab is low-entropy adjustments: spacing, layout tweaks, and minor visual changes.
If you iterate on UI frequently, this can make everyday changes feel smoother. Instead of describing where the code is, you can select an element and give the agent an exact starting point.
React Grab works with any IDE or coding tool: Cursor, Claude Code, Copilot, Codex, Zed, Windsurf, you name it. At its core, it just adds extra context to your prompt that helps the agent locate the right code faster.
This finally moves things a bit closer to narrowing the intent-to-output gap (see Inventing on Principle).
There are a lot of improvements that can be made to this benchmark.
On the React Grab side - there's also a bunch of stuff that could make this even better. For example, grabbing error stack traces when things break, or building a Chrome extension so you don't need to modify your app at all. Maybe add screenshots of the element you're grabbing, or capture runtime state/props.
If you want to help out or have ideas, hit me up on Twitter or open an issue on GitHub.
1This only works in development mode: React strips source locations from production builds for performance and bundle size. React Grab detects this and falls back to showing just the component names without file paths. The component tree is still useful for understanding structure, but you lose the direct file references. In production, you get file paths only if source maps are enabled.
2Single trial per test case is a limitation. Agents are non-deterministic, so results can vary significantly between runs. Ideally we'd run each test 5-10 times and report confidence intervals. The 3× speedup is directionally correct but treat the exact number with appropriate skepticism. Future benchmarks will include multiple trials. I'm very open to fixing issues with the benchmarks. If you spot anything off, please email me or DM me on Twitter.
3This is median speedup across all 20 test cases. Some tasks showed 80%+ improvement (simple element lookups), others showed minimal gains (complex multi-file changes where search wasn't the bottleneck). The variance is high. Your mileage will vary depending on codebase size, component nesting depth, and how descriptive your component names are.