One Year with Next.js App Router — Why We're Moving On
A critique of React Server Components and Next.js 15.
Oct 21st, 2025
webdev
technical analysis
opinion
Having used Next.js professionally on my employer's web app, I find the
core design of its App Router and React Server Components (RSC) to be
extremely frustrating. It's not small bugs or a confusing API, but deep
disagreements with the fundamental design decisions that Vercel and
the React team made when building it.
The more webdev events I go to, the more I see people who dislike Next.js but
are still stuck using it. By the end of this article, I will share how my
colleagues and I escaped this hell, seamlessly migrating our entire frontend to
TanStack Start.
The pitch of RSC is that components are split into two categories:
"server" components and "client"
components. Server components can't use useState or useEffect, but they can be
async functions and reach directly into backend resources, such as querying a
database. Client components are the existing
model, where code on the backend generates HTML text and code on the frontend
manages the DOM using window.document.*.
The first disaster: naming!! React now uses the words
"server" and "client" to refer to
very specific things, ignoring their existing definitions. This would be
fine, except client components can run on the backend
too! In this article, I'll use the terms "backend" and
"frontend" to describe the two execution environments that web apps
live in: a Node.js process and a web browser, respectively.
This server/client component model
is interesting. Since built-ins like <Suspense /> are serialized across the
network, data fetching can be modeled trivially with async server components,
and the fallback UI works as if it were client-side.
src/app/[username]/page.tsx
// For this article, server components will be highlighted in red
export default async function Page({ params }) {
// Page params are given as a resolved promise
const { username } = await params;
// The components `UserInfo` and `UserPostList` will be run at the same
// time. Once `UserInfo` is ready, the visitor will see the page with a
// `PostListSkeleton` if the post list is not yet ready.
return <main>
<UserInfo username={username} />
<Suspense fallback={<PostListSkeleton />}>
<UserPostList username={username} />
</Suspense>
</main>
}
// Waterfalls are avoided by having multiple components, which
// are all evaluated at the same time.
async function UserInfo({ username }) {
  const user = await fetchUserInfo(username);
  return <>
    <h1>{user.displayName}</h1>
    {user.bio ? <Markdown content={user.bio} /> : ""}
  </>;
}

async function UserPostList({ username }) {
  const posts = await fetchUserPostList(username);
  return /* post list ui omitted for brevity */;
}
If we ignore the 40kB gzipped bundle size of React itself, the above example
ships zero JavaScript for the UI and data fetching: it just streams the
markup! For example, the imagined markdown parser inside the <Markdown />
component stays on the backend. When an interactive frontend is needed, client
components can be created by putting them in a file starting with "use client".
src/components/CopyButton.tsx
"use client"; // This comment marks the file for client-side bundling.
export function CopyButton({ url }) {
return <>
<span>{url}</span>
<button onClick={() => {
const full = new URL(url, location.href);
navigator.clipboard.writeText(full.href);
// omitting error handling, success ui, styles
}}>copy</button>
</>
}
src/app/q+a/Card.tsx
export function Card() {
  return <article>
    <header>
      {/* Make the browser import the copy button */}
      <CopyButton url="/q+a/2506010139" />
    </header>
    <p>
      {/* Process markdown on the backend */}
      <Markdown content=".........." />
    </p>
  </article>;
}
After quitting Bun as a runtime engineer (I implemented Server Components
bundling and an RSC template there), I joined a small company working on the
front lines: a Next.js app with a Hono backend. The following notes are
simplifications of the real-world problems I've encountered while maintaining
the app and developing new features. As a result of all of these, everyone's
time is wasted either working around design flaws or explaining to each other
why what should be a non-issue is an immovable object.
The Next.js documentation for performing mutations does not mention optimistic
updates; it appears this case was simply not considered.
Components rendered by the React server, by design, cannot be modified after
mounting. Elements that could change need to live inside a client component,
but data fetching cannot happen in client components,
even during SSR on the backend. The result is awkwardly small server
components that only fetch data and then hand off to a client component
containing a mostly static version of the page.
src/app/user/[username]/page.tsx
export default async function Page({ params }) {
  const { username } = await params;
  const user = await fetchUserInfo(username);
  return <ProfileLayout>
    <UserProfile user={user} />
  </ProfileLayout>;
}
src/app/user/[username]/UserProfile.tsx
"use client"; // Must separate the client code into a second file!
export function UserProfile({ user: initialUser }) {
// There are many great state management libraries out there;
// for simplicity, this example will use one state cell.
const [user, optimisticUpdateUser] = useState(initialUser);
async function onEdit(newUser) {
optimisticUpdateUser(newUser);
const resp = await fetch("...", {
method: 'POST',
body: JSON.stringify(newUser),
... // (headers, credentials, tracing, and more)
})
if (!resp.ok) /* always remember to test for errors! */
}
return <main>{/* user interface with editable fields... */}</main>:
}
As more of the page needs interactivity, it gets messier trying to keep the
static parts truly server-side. In the work app, nearly every piece of UI
displays some dynamic data. A WebSocket synchronizes data live as it
updates (for example, a user card's online state along with their basic
profile). Since these component setups are harder for engineers to understand
and maintain, almost all of our pages are entirely "use client", with a
page.tsx that defines the data fetching.
Here is a more concrete example of what this looks like in practice with the
data-fetching library we use at work, TanStack Query.
src/queries/users.ts
// At work, there is a helper function `defineQuery` for type safety.
// Fetchers are trivial and can run on the backend or the frontend.
export const queryUserInfo = (username) => ({
  queryKey: ['user', username],
  queryFn: async ({ ... }) => /* fetch data */
});
src/app/user/[username]/page.tsx
export default async function Page({ params }) {
  const { username } = await params;
  // There's no global state in the React Server. Since layouts
  // are executed in parallel, the TanStack `QueryClient` has to
  // be reconstructed multiple times per route.
  const queryClient = new QueryClient();
  await queryClient.ensureQueryData(queryUserInfo(username));
  // HydrationBoundary is a client component that passes JSON
  // data from the React server to the client component.
  return <HydrationBoundary state={dehydrate(queryClient)}>
    <ClientPage />
  </HydrationBoundary>;
}
src/app/user/[username]/ClientPage.tsx
"use client";
export function ClientPage() {
const { username } = useParams();
const { data: user } = useSuspenseQuery(queryUserInfo(username));
// ... some hooks
return <main>
{/* ... an interactive web page */}
</main>;
}
This example has to be three separate files because of the rules of server
component bundling. (The client component needs "use client", and server
component files often can't be imported on the client due to server-only
imports.) In the Pages Router, this could have been a single file thanks to the
tree-shaking that getStaticProps and getServerSideProps have.
Since the App Router starts every page as a server component, with (ideally)
small areas of interactivity, navigating to a new page has to hit the
Next.js server, regardless of what data the client already has available! Even
with a loading.tsx file, opening /, navigating to /other, and then
going back to / will show the loading state while it re-fetches the homepage.
The only case where this works well is perfectly static content, where instant
navigations and prefetching are great. But web apps are not static; they
have lots of dynamic content. Being logged in affects the homepage, which is
infuriating because the client literally has everything needed to display the
page instantly. It's not like the cookies changed.
aside: In further testing on a blank project, I observed cases where the
Next frontend code would prefetch routes, but without any real contents.
On the hello world example, this was a 1.8kB RSC payload that pointed to 2
different JS chunks 4 separate times. This is pure waste of bandwidth and
egress, especially considering all of this information is re-fetched when I
actually click the link.
1:"$Sreact.fragment"
2:I[39756,["/_next/static/chunks/ff1a16fafef87110.js","/_next/static/chunks/7dd66bdf8a7e5707.js"],"default"]
3:I[37457,["/_next/static/chunks/ff1a16fafef87110.js","/_next/static/chunks/7dd66bdf8a7e5707.js"],"default"]
4:I[97367,["/_next/static/chunks/ff1a16fafef87110.js","/_next/static/chunks/7dd66bdf8a7e5707.js"],"ViewportBoundary"]
6:I[97367,["/_next/static/chunks/ff1a16fafef87110.js","/_next/static/chunks/7dd66bdf8a7e5707.js"],"MetadataBoundary"]
7:"$Sreact.suspense"
0:{"b":"TdwnOXsfOJapNex_HjHGt","f":[["children","other",["other",{"children":["__PAGE__",{}]}],["other",["$","$1","c",{"children":[null,["$","$L2",null,{"parallelRouterKey":"children","error":"$undefined","errorStyles":"$undefined","errorScripts":"$undefined","template":["$","$L3",null,{}],"templateStyles":"$undefined","templateScripts":"$undefined","notFound":"$undefined","forbidden":"$undefined","unauthorized":"$undefined"}]]}],{"children":null},[["$","div","l",{"children":"loading..."}],[],[]],false],["$","$1","h",{"children":[null,["$","$1","KCFxAJdIDH3BlYXAHsbcVv",{"children":[["$","$L4",null,{"children":"$L5"}],["$","meta",null,{"name":"next-size-adjust","content":""}]]}],["$","$L6","KCFxAJdIDH3BlYXAHsbcVm",{"children":["$","div",null,{"hidden":true,"children":["$","$7",null,{"fallback":null,"children":"$L8"}]}]}]]}],false]],"S":false}
5:[["$","meta","0",{"charSet":"utf-8"}],["$","meta","1",{"name":"viewport","content":"width=device-width, initial-scale=1"}]]
9:I[27201,["/_next/static/chunks/ff1a16fafef87110.js","/_next/static/chunks/7dd66bdf8a7e5707.js"],"IconMark"]
8:[["$","title","0",{"children":"Create Next App"}],["$","meta","1",{"name":"description","content":"Generated by create next app"}],["$","link","2",{"rel":"icon","href":"/favicon.ico?favicon.0b3bf435.ico","sizes":"256x256","type":"image/x-icon"}],["$","$L9","3",{}]]
In review, I found there is actually some content in here: the loading state.
Do you see it?
["$","div","l",{"children":"loading..."}]
It's still a lot of waste, since all of this data gets re-emitted in the
actual page RSC.
The solution to this appears to be staleTime, but it's marked
experimental and "not recommended for production". The fact that this is a
non-default afterthought of a configuration option is embarrassing. Even if we
used it, you cannot make multiple pages that refer to the same underlying data
share any of it.
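For reference, the knob in question currently lives under the experimental section of the Next.js config (spelled staleTimes there, with separate values for dynamic and static routes). A sketch with illustrative values, hedged because the option is explicitly experimental and its shape may change:

```typescript
// next.config.ts -- illustrative values only; `staleTimes` is marked
// experimental by Next.js and its name/shape may change between versions.
const nextConfig = {
  experimental: {
    staleTimes: {
      // seconds a dynamic page stays fresh in the client router cache
      dynamic: 30,
      // seconds for static (prefetchable) pages
      static: 180,
    },
  },
};

export default nextConfig;
```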
One form of loading state that cannot be represented with the App Router: take
a page like a git project's issue page, and click on a user's name to navigate
to their profile. With loading.tsx, the entire
page becomes a skeleton, but when modeling these queries with TanStack Query it
is possible to show the username and avatar instantly while the user's bio and
repositories are fetched in. Server components don't support this form of
navigation because the data only exists in already-rendered components, so it
must be re-fetched.
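The pattern TanStack Query enables here can be sketched framework-free (all names below are invented for illustration, in the spirit of its placeholderData/initialData options): the profile view seeds its header from data the issue page already cached, so the username and avatar render instantly while the rest of the profile is still loading.

```typescript
// A standalone sketch of cache seeding; `queryCache` stands in for the
// real TanStack Query cache, keyed like a serialized queryKey.
interface UserSummary { username: string; avatarUrl: string }

const queryCache = new Map<string, UserSummary[]>();

// After the issue page loads, every participant's summary is cached.
queryCache.set('["users","list"]', [
  { username: "clover", avatarUrl: "/a/clover.png" },
]);

// On navigation, derive placeholder data for the profile query from the
// list cache instead of falling back to a full-page skeleton.
function placeholderProfile(username: string): UserSummary | undefined {
  return queryCache.get('["users","list"]')?.find(
    (u) => u.username === username,
  );
}
```

Server components cannot express this because the issue page's data lives only in its rendered output, not in a client-readable cache.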
In our Next.js site, our server component data fetchers run this helper to make
soft navigations faster by skipping the data fetch phase altogether.
src/util/tanstack-query-helpers.server.ts
export async function serverSidePrefetchQueries(queries) {
  if ((await headers()).get("next-url")) {
    // This is a soft navigation. SKIP the prefetching to make it faster.
    // The client might already have this data, and if not, they have the
    // loading state. Ideally, this server request wouldn't exist -- the
    // client side has nearly ALL the code since the app is written mostly
    // as client components. Kind of a design flaw of the App Router, TBH.
    return;
  }
  // ... data prefetching logic ...
}
In addition to this, loading.tsx should contain the useQuery calls so that the
data is fetched, if it's actually needed, while the network request for the
empty RSC is in flight. In fact, the loading.tsx state can just be the
actual client component, and you'll see the client page.
At work, we just make our loading.tsx files contain the useQuery
calls and show a skeleton. This is because when Next.js loads the actual server
component, no matter what, the entire page re-mounts. There is no VDOM diffing
here, meaning all hook state (useState) resets shortly after the request
completes. I tried to build a simple reproduction where I was begging Next.js
to just update the existing DOM and preserve state, but it just doesn't.
Thankfully, the time the blank RSC call takes is short enough.
Layouts can perform data fetching, but they can't observe or alter the request
in any way. This is done so that Next.js can fetch and cache layouts whenever
it wants. In every other framework, layouts are just regular components with
no feature difference from page components.
Fetching layouts in isolation is a cute idea, but it ends up being silly
because it also means that any data fetching has to be re-done per layout. You
can't share a QueryClient; instead, you must rely on their monkey-patched
fetch to cache the same GET request, as they promise.
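The idea behind that monkey-patched fetch is request deduplication within a render pass. A minimal sketch of the concept (this is NOT Vercel's implementation; names are invented): identical GETs issued during one render share a single in-flight promise.

```typescript
// Minimal per-render request deduplication sketch. Each render pass
// would construct its own deduped fetcher, so layouts that request the
// same URL share one network call.
type Fetcher = (url: string) => Promise<string>;

function createDedupedFetch(realFetch: Fetcher): Fetcher {
  const inFlight = new Map<string, Promise<string>>();
  return (url) => {
    // Reuse the in-flight promise for an identical request.
    const existing = inFlight.get(url);
    if (existing) return existing;
    const result = realFetch(url);
    inFlight.set(url, result);
    return result;
  };
}
```

The complaint stands either way: deduping raw GETs is a much weaker sharing primitive than passing one QueryClient through the tree.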
When a coworker asks me why Next.js rejects some code, I've given up on
explaining the technical intricacies and just say, "It's a Next.js skill issue;
I'm going to blow it up soon, don't worry." These rules are too hard for normal
developers to understand.
Unlike the "Islands Architecture", Server Components still have to
be hydrated on the frontend to support Suspense and preserving client
component state. When doing soft navigations, the "RSC Payload" (which is not
HTML at all) is retrieved by fetch. On a fresh reload, HTML is needed for the
first paint, but the information about Client components and Suspense is
not contained within that HTML. React's solution is to send a second copy of
the entire page's markup. An example of what a Next.js production server
would send in a dynamic page render would be something like this:
GET /user/clover
<!DOCTYPE html>
<html>
  <head>
    {link and meta tags}
  </head>
  <body>
    {server side render}
    <script>
      // a bootstrap script that sets up global `__next_f` as
      // an array. once React loads, this `.push` function
      // gets overwritten to write new chunks directly to the
      // RSC decoder. this script has some dom helpers too
      (self.__next_f=self.__next_f||[]).push([0])
    </script>
    <script>
      // the RSC payload for the application shell.
      self.__next_f.push([1,"1:\"$Sreact.fragment\"\n2:I[658993,[\"/_next/st{...}"])
    </script>
    <!--
      the closing </body> is NOT written yet, since there is an
      unresolved suspense boundary. time passes, and only then
      is more data written
    -->
    <div class="user-post-list">
      {server side render of a Suspense boundary}
    </div>
    <script>
      // the RSC payload for the suspense boundary
      self.__next_f.push([2,"14:[\"$\",\"div\",null,{\"children\":[[\"$\",\"h4\"{...}"])
    </script>
    <!-- HTML and script tags repeat until the entire page is done -->
  </body>
</html>
This solution doubles the size of the initial HTML payload. Except it's
worse, because the RSC payload embeds JSON quoted inside JS string literals, a
format much less efficient than HTML. While it seems to compress fine
with brotli and render fast in the browser, it is wasteful. With the
hydration pattern, at least the data could be reused locally for interactivity
and other pages.
Even on pages with little to no interactivity, you pay the cost. To use
the Next.js documentation as an example, loading its
homepage fetches a page that is around 750kB (250kB of
HTML and 500kB of script tags), and the content is in there twice.
You can verify this by pressing Cmd + Opt + U
on Mac or Ctrl + U on other platforms, and then
Cmd / Ctrl + F to locate any string from the
page, such as "building full-stack web applications". It's there twice. And
there is no way around this, since it's a fundamental piece of React Server
Components.
This RSC format certainly has more waste. But I really don't feel like digging into
why the string /_next/static/chunks/6192a3719cda7dcc.js appears 27 separate
times. What the hell, guys? Is your bandwidth free???
Turbopack emits code that is hard to debug in a debugger (in development mode),
and it throws bad error messages in many cases.
I wouldn't normally have given this point a section in the blog, but I want to
share three actual examples directly from the project.
The first came from a refactor to satisfy the server/client
component model, where I accidentally made a client component async. This one
was quite annoying because the error didn't say at all where the issue was; it
only contained the server stack trace.
Another case of a terrible error message:
After fixing the underlying issue behind this second error (which I cannot
recall), the dev server hung and had to be restarted to recover.
The final one is the dozen times I've placed a debugger breakpoint and the
variable name hello gets turned into
__TURBOPACK__imported__module__$5b$project$5d2f$client$2f$src$2f$utils$2f$filename$2e$ts__$5b$app$2d$client$5d$__$28$ecmascript$29$__["hello"]
and other bullshit.
There are roughly two kinds of sites: a mostly static website, or a web app
with majorly dynamic and interactive components.
And Next.js is the wrong tool for both of these jobs. If you're in the first
category with a static web site, go for Astro or Fresh. For everyone who
needs the full power of React, this section is about how I replaced the
vendor-locked Next with TanStack Start, incrementally and seamlessly.
It started with this Vite config.
vite.config.ts
const config = defineConfig(({ mode }) => {
  const env = loadEnv(mode, process.cwd(), "NEXT_PUBLIC_");
  return {
    // Use the Next.js default port 3000
    server: { port: 3000 },
    // Use the Next.js default env prefix "NEXT_PUBLIC_"
    define: Object.fromEntries(Object.entries(env).map(
      ([k, v]) => [`process.env.${k}`, JSON.stringify(v)])),
    plugins: [
      viteTsConfigPaths({ projects: ["./tsconfig.json"] }),
      tailwindcss(),
      // For ease of understanding from coworkers, I started porting
      // the routes in `src/tanstack-routes`. When the migration was
      // done, it would go back to the default `src/routes`.
      tanstackStart({
        router: { routesDirectory: "src/tanstack-routes" },
      }),
      viteReact(),
    ],
    resolve: {
      // The key to the incremental migration: redirect `next` elsewhere
      alias: { next: path.resolve("./src/tanstack-next/") },
      conditions: ["tanstack"],
      extensions: [
        // Allow a file named like `utils/session.tanstack.ts` to
        // override `utils/session.ts` when imported.
        ".tanstack.tsx", ".tanstack.ts",
        // Default import extensions
        ".mjs", ".js", ".mts", ".ts",
        ".jsx", ".tsx", ".json",
      ],
    },
  };
});
Then, I looked for every usage of a Next.js API, and either removed it or made
a stub for TanStack. For example, src/tanstack-next/link.tsx implements
next/link:
src/tanstack-next/link.tsx
import { Link } from "@tanstack/react-router";
import type { LinkProps } from "next/link";

export default function LinkAdapter({ href, ...rest }: LinkProps) {
  return <Link {...rest} to={href as any} />;
}
Some of these stubs can be extremely simple. Starting out, my implementation
of useRouter was just return {}, but later I had to add a couple methods
to the object. The code here doesn't have to be clean, because it is
temporary.
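To make the shape of these stubs concrete, here is a sketch of how the useRouter stub might grow: a next/navigation-shaped object backed by an injected navigate function. (The real adapter would call TanStack Router's useNavigate hook inside a React component; navigate is a parameter here so the sketch stands alone, and the method list is simplified.)

```typescript
// Hypothetical router adapter stub; `Navigate` mimics the option object
// TanStack Router's navigate function accepts.
type Navigate = (opts: { to: string; replace?: boolean }) => void;

function createRouterAdapter(navigate: Navigate, history: { back(): void }) {
  return {
    push: (href: string) => navigate({ to: href }),
    replace: (href: string) => navigate({ to: href, replace: true }),
    back: () => history.back(),
  };
}
```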
Now, the new site can import nearly every client component, either by stubbing
out the Next.js APIs it needs or by using the .tanstack.ts extension to
re-implement logic on a file-by-file basis. Shortly after, I got the site's
homepage working in TanStack Start, and we merged the branch.
This first PR only supported one of our pages, and managed it in about a
thousand lines of added code and 40 lines deleted. Earlier patches had already
removed the few uses of next/image and next/font.
What was left was porting every other route over. The one thing we lose in
migrating from Next.js to any other framework is the ability to await
data-fetching functions in the UI. In practice, moving every route into a
loader function made it much clearer what happens when a page is SSR'd.
For pages that had multiple fetches, these could be combined into a single,
special API call returning all of the relevant data for that page.
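The loader shape that replaces per-component awaits can be sketched like this (the fetchers are injected stand-ins, not our real API): one function gathers everything the page needs, in parallel, before the component renders.

```typescript
// Hypothetical route loader: fetches run in one parallel round trip,
// with no waterfalls hidden inside the JSX.
interface ProfileData { user: { displayName: string }; posts: string[] }

async function profileLoader(
  username: string,
  fetchUserInfo: (u: string) => Promise<{ displayName: string }>,
  fetchUserPostList: (u: string) => Promise<string[]>,
): Promise<ProfileData> {
  const [user, posts] = await Promise.all([
    fetchUserInfo(username),
    fetchUserPostList(username),
  ]);
  return { user, posts };
}
```

In TanStack Start this function would hang off the route definition, so what runs during SSR is spelled out in one place instead of scattered across nested async components.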
To reiterate in bold: The
migration path from Server Components is to just simplify your code; RSC
inherently drives you down a chaotic road of things you do not need.
Nearly every complex part of our site got easier to understand for all
engineers. The exception to this was having everyone get used to the new file
system routing conventions. With enough examples, we all got the hang of it.
With the incremental migration in place, new code did not break the existing
deployment. TanStack slowly took over the codebase, and we eventually deleted
all of the Next.js stubs and gained all of the beautiful type-safety features
that TanStack Router provides. In the end, the site performed faster from
every angle (development mode, production page load times, soft navigations)
and at a lower price than our Next deployment on Vercel.
We're not the only ones seeing the change. While I try to keep myself off of
social media, someone sent me the results of Brian Anglin's work at
Superwall, showing incredible CPU reductions on TanStack
Start. I also recall ChatGPT switching from Next.js to Remix (random online
chatter: [1] [2] [3]) a year ago.
In my opinion, the metadata API is one of the only good APIs Next.js has, and
it was the one place in our code where moving to TanStack made things harder.
Instead of worsening the code, I ported their metadata API into a regular
function so everyone can use it. Originally, I had a 1:1 port on NPM, but
earlier this year I simplified its API into one short and understandable
file. As of this blog post, I have added a TanStack-compatible
meta.toTags API, which can be installed from JSR, NPM,
or simply copied into your project.
notice: Due to time constraints with writing this article, the library
has not yet been updated. I'll probably get around to it by the end of this
week (Oct 24th). As a placeholder, I'm sharing the version used at work on my
website: meta.tanstack.ts.
// once in your project
import * as meta from "@clo/lib/meta.ts";

export const defineHead = meta.toTags.bind(null, {
  // site-wide options
  base: new URL("https://paperclover.net"),
  titleTemplate: (title) => [title, "paper clover"]
    .filter(Boolean).join(' | '),
  // ...
});

// for each page...
export const Route = createFileRoute("/blog")({
  head: () =>
    defineHead({
      title: "clover's blog", // templated with `titleTemplate`
      description: "a catgirl meows about her technology viewpoints",
      canonical: "/blog", // joined with `base`
      // When specified, configures Open Graph and Twitter embeds,
      // using the page title and description as the default.
      // The defaults are good, but it supports more options.
      embed: {},
      // Every exotic meta tag is done with a JSX fragment. This
      // doesn't render React; it just loops through the tags.
      // My goal was to cover the most common 99% of uses.
      extra: <>
        <meta name="site-verification" content="waffles" />
      </>,
    }),
  component: Page,
});

function Page() {
  // ...
}
My version wasn't concerned with covering the entire space of Next.js's metadata
object, but instead uses inline JSX to fill that gap.
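For a sense of what a toTags-style helper does under the hood, here is a hypothetical, stripped-down sketch (NOT the real library code; the tag-descriptor shape is invented for illustration): merge the site-wide options with per-page options and emit a flat list of head tags.

```typescript
// Simplified illustration of a metadata helper: site-wide options are
// bound once, and each page contributes a small options object.
interface SiteOptions { base: URL; titleTemplate: (t?: string) => string }
interface PageHead { title?: string; description?: string; canonical?: string }

function toTagsSketch(site: SiteOptions, page: PageHead) {
  const tags: Array<Record<string, string>> = [
    { tag: "title", text: site.titleTemplate(page.title) },
  ];
  if (page.description) {
    tags.push({ tag: "meta", name: "description", content: page.description });
  }
  if (page.canonical) {
    // Relative canonical paths are joined with the site-wide base URL.
    tags.push({
      tag: "link",
      rel: "canonical",
      href: new URL(page.canonical, site.base).href,
    });
  }
  return tags;
}
```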
At Next.js Conf 2024, everyone there was raving about Server Components. I
forget exactly who I talked to, but the big players were all in on this. I,
having implemented the bundler end of RSC, had already seen a couple of the
problems in the format. With Next 15 "stabilizing" the App Router last year,
many companies are building their products on it and realizing these pitfalls
first-hand.
I came into the Next.js game late, only starting in June with version 15.
But everyone I've talked to at events sympathizes with my notes. All the people
I talked to on the subject at Bun's 1.3 party agreed with me. Even some people
at Vercel told me they don't like what Next.js is like to actually use.
I hope as TanStack Start stabilizes, it becomes the Next.js replacement everyone
wants.
A lot of the JavaScript ecosystem is a mess. That mess is why web
development gets made fun of. There were many times I thought that working
with the web was an unrecoverable mess, but the mess was actually just the
commonly-used libraries I had surrounded myself with. When that is peeled back,
modern web development technologies are awesome.
I've been making this website from scratch without any framework since late
2024, by writing systems like my own TUI progress widget, static
file proxy, incremental build system, and many more components.
Working on this code has produced some of my best coding sessions (measured by
happiness) in years. The viewers of paper clover get a better quality
website; the mini-libraries I create get extracted for public use;
everyone wins.
This level of from-scratch is too much for most people, especially at the
workplace. I say that at the minimum, we should only give our attention and
money to high quality tools that respect us. And Next.js and the company behind
it, Vercel, are not that.
If you use Next.js and feel that the experience doesn't remind you of respect
either, consider whether you and your colleagues want to continue supporting
their serverless empire. The Vite ecosystem seems pretty decent to build on
right now, though I still have little experience using its tools at scale in
production. The Vite+ launch from Void0 seems interesting, but
only time will tell if these venture-funded tools will respect us (end users
and developers) long term.
Slowly, I've been replacing many pieces of software that disrespect me with
better alternatives. Some examples of this are GitHub, Visual Studio Code,
DaVinci Resolve, Discord, and Google Drive/Workspace, along with many more. I
plan to write more on this blog about the technical things I do (that progress
library, the purpose of my own site generator, learnings from my current job),
including some of my past projects at Bun (details on HMR, the crash reporter,
and the crazy system for bundling built-in modules). If it interests you,
please subscribe to the email list: