Musings from the trenches

Conference Report: Performance.now() 2025

By Markus Walther · 26 min read

I felt so lucky to attend Performance.now() this year for the first time, on October 30 and 31. I had read so many good things online about this premier web-performance conference, and — for added incentive — had met Rahul Nanwani, the CEO of imagekit.io, one of this year's sponsors, just a few days prior in Zurich after my lightning talk at a local meetup. Because the date felt too close to the AXA Software Engineering Summit in Sevilla, Spain, where I had the privilege of giving a talk myself, I opted for the livestream instead of flying out to Amsterdam. The conference was single-track, so there was no fear of missing out; a single track usually gives greater focus and cohesion, albeit at the price of the occasional boring talk that cannot be skipped by switching to another track.

Livestream

First things first, how was the livestream quality? This being my first paid livestream, I was generally pleased with what I got for €129 from Conffab. There were some temporary hiccups, with one incident showing stuttering video/audio apparently attributable to a malfunctioning wireless microphone, but they were few and far between. What helped was a livestream chat for the participants, where reports of stuttering would be echoed by others, thus helping me to ascertain it was not bad WiFi on my side. And conference organizers were present on that chat as well, calming down everybody - nice! The on-site audio-visual team seemed competent and reacted quickly and professionally. Sometimes we had to wait a few minutes for the onset of a session, as the actual schedule deviated from the posted timings a bit for various reasons, but it was no big deal and usually well-communicated. Of course, I missed the mingling with other participants I would have had in Amsterdam, but could still feel the unique conference vibe!

Talks

Now onto the talks themselves. This was quite a star-studded conference, including some prominent speakers from the web-performance scene. There were well-known independent consultants, experts from web-performance companies, engineers from Google, newcomers with technical deep-dives — this conference was generally packed with competence (exceptions notwithstanding). Unfortunately, the talk videos are not freely available on demand, being reserved for paying participants (maybe Performance.now() could remove the paywall after a reasonable delay?). Therefore I can only link to slides and give a necessarily eclectic summary of what I found interesting or noteworthy.

My very personal selection of talk highlights and lowlights is as follows.

Tammy Everts (SpeedCurve): How fast is fast enough

Slides. Tammy is Chief Experience Officer at SpeedCurve, and turned out to be a very talented speaker.

She said that fast is magical: it eliminates cognitive friction. Fast web-page performance taps into our feelings, i.e. a technically measurable property has implications for the emotional, human realm. Fast, she continued, is also pragmatic: for a given customer, a web-performance expert may have to work out a reasonable compromise. Rather than seeing those two terms as polar opposites, she proposes a synthesis expressed in her newly coined word pragmagical: creating performant web pages that are good and fast enough to create that magical feeling in visitors.

She asks web-performance experts: who are you trying to please? Your boss? Google? Users?

If it's your boss, convey that the attitude 'we're so cool' does not automatically translate into fast experiences. If the experience is slow, studies show that it leads to fewer conversions. And this effect lingers in users' minds, making them slower to return to the website.

Tying technical measures like Largest Contentful Paint (LCP) to business metrics like conversion rate is possible, correlations exist and can be shown in histograms. However, no one-size-fits-all correlation exists, rather correlations are specific to (groups of) users.

If you are trying to please Google, beware that you are using "someone else's metrics". Google's Core Web Vitals (CWV) condense performance into a few metrics, but again "no one-size-fits-all" applies. Rather, CWV should be seen as the starting point of a customer-dependent performance analysis, because Google's CWV thresholds are not necessarily your business thresholds for what counts as good, okay, or poor performance.

Interaction to Next Paint (INP), another CWV metric, matters: a 20% difference in conversion rate was observed for some customers. But don't aim for "someone else's metrics" per se; customize them.

If you are trying to please users, know that metrics, which are automatically measurable, are not the same as feelings, which still need usability testing. For example, lots of clicks could either mean that the user is very engaged with a web page, or that she is totally lost!

Ask yourself why users are on your web site. Do they want to do engagement tasks like planning a trip, or productivity tasks like solving a problem while already on the trip? For the former, you can expect a high tolerance for waiting, but not for the latter!

Also know that a user who experiences one, perhaps crucial, step in a user journey as slow may feel that the entire user journey is slow — with implications for what to performance-optimize.

She said that one can't really create the highly sought-after 'flow' experience online. However, one can still strive to fake it.

She closed her very inspiring talk with the challenge to aim not for 'just tolerable' web performance and web-site experience, but rather for a delightful experience.

Harry Roberts: How to think like a performance engineer

Slides. This was a great, content-packed presentation from a very experienced and well-known web-performance consultant — I loved it!

Harry mentioned his own obs.js, an inlineable JS snippet that helps implement context-aware web performance for everyone, allowing client-side decisions and telemetry based on network connection strength, battery status, and device capability.
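
The idea of context-aware decisions can be sketched with standard browser APIs. A minimal sketch, and explicitly NOT obs.js's actual API: the `classifyContext` helper, its category names, and the thresholds are all my own illustrative assumptions.

```javascript
// Sketch of the kind of context signals such a snippet can read (this is
// NOT obs.js's actual API). navigator.connection and navigator.getBattery()
// are Chromium-only, hence the guards.
function classifyContext({ effectiveType, saveData, lowBattery } = {}) {
  // Pure decision helper: derive a coarse "experience budget" for the page.
  if (saveData || effectiveType === 'slow-2g' || effectiveType === '2g') return 'minimal';
  if (lowBattery || effectiveType === '3g') return 'reduced';
  return 'full';
}

async function readContext() {
  const conn = (typeof navigator !== 'undefined' && navigator.connection) || {};
  let lowBattery = false;
  if (typeof navigator !== 'undefined' && navigator.getBattery) {
    const battery = await navigator.getBattery();
    lowBattery = !battery.charging && battery.level < 0.2;
  }
  return classifyContext({
    effectiveType: conn.effectiveType,
    saveData: Boolean(conn.saveData),
    lowBattery,
  });
}
```

The page could then, for example, skip autoplaying video under a 'minimal' budget, or defer non-critical telemetry under 'reduced'.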

He then listed essential tools of the trade:

He covered metrics that matter:

Next, we need to establish test conditions:

He then covered test scenarios. He noted that p75 is actually a cover term for a mix of scenarios. Again his emphasis was on more realistic scenarios akin to what users actually experience.

For example, one can use WPT's script panel to simulate that a user has already accepted the infamous cookie banner, hence test for a more common scenario. Or use another custom script in WPT to avoid clicking on an empty shopping cart, an infrequent experience for most users. Finally, ask yourself: How did people get to a particular web page in the first place? And then custom-script the user journey to that web page in WPT, as 'using' a web site as intended makes it 'warmer', less like a cold-cache scenario!

He cautioned that testing isolated URLs via scripted WPT is still not what users do; it might for instance miss redirects that slow down the experience for them. You can catch such redirects by filtering for non-200 HTTP status codes in DevTools on a given domain.

All in all, Harry was very critical of Single-Page Applications (SPAs), due to them often being bloated and suffering from poor performance. According to him they can be replaced in many cases by much simpler and more performant multi-page applications (MPAs), a.k.a. traditional web sites. But he admitted that as a web-performance consultant one might be tasked with testing SPAs as well. For this, one has to script soft navigations in an SPA, i.e. navigations that don't navigate to a different URL and don't incur network round trips. This can be done, for example, by simulating clicks in a WPT custom script to model user interactions that navigate to new UI states.

He recommended generally having more than one test, covering the agreed test scenarios. And he cautioned against the iOS blind spot: CrUX data is only collected from users logged in to a real Chrome browser, while iOS does not (yet) offer a real Chrome app. As a rule of thumb, however, one can start with the assumption that mobile Safari on iOS is generally faster than what CrUX data shows, due to the more powerful devices, on average, in the Apple ecosystem. The real solution to the blind spot, according to Harry, is to use your own RUM to collect the metrics exposed on mobile Safari, too.

Marcy Sutton Todd: Accessibility and Performance

While the topic is important, I had my issues with Marcy's talk: her slides were way too full, with too much detail for a talk like this, and frequently reading slide text as-is did not help either, e.g. for WCAG criteria and their explanations. Less is more, as they say. I would recommend that the conference organizers pre-screen presentations and offer coaching in this regard, because a relatively pricey conference like this should ideally have a uniformly high standard of presentation, so as not to occasionally leave attendees and livestream viewers disappointed.

Among the noteworthy insights from her talk was the fact that more than 60% of web users worldwide are mobile users. Hence, one needs to pay attention to WCAG's mobile criteria.

Also, horizontal scrolling is bad for low-vision users; one needs to explicitly design for zoomed-in web pages and user interfaces.

What if we measured performance in a new way, from an accessibility point of view? One proposal would be to record the number of keystrokes to achieve a task.

Towards a measurable account of accessibility she mentioned the following:

To make a page performant in the sense of enabling faster screen-reader support, she listed a number of measures. Here it would have been appropriate to go deeper at such a technical conference, but as presented, things remained unexplained: e.g. what exactly does event coalescing mean in the context of accessibility, and what would be the consequences for frameworks and bespoke JavaScript on a page optimizing for that dimension?

A quick win is to support CSS's @media (prefers-reduced-motion) query on your web site, reducing the amount of animation for users who would suffer from excessive flickering and the like.
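
The CSS side of this quick win is a few lines; for JS-driven animations the same preference can be read via matchMedia. A minimal sketch, where the `.animated` class name is my own assumption:

```javascript
// The CSS-only quick win, for reference:
//   @media (prefers-reduced-motion: reduce) {
//     * { animation: none !important; transition: none !important; }
//   }
// For JS-driven animations, the same preference can be read via matchMedia.
function reduceMotion(mediaQueryList) {
  // Pure helper so the decision is testable outside a browser.
  return Boolean(mediaQueryList && mediaQueryList.matches);
}

if (typeof matchMedia !== 'undefined') {
  const mql = matchMedia('(prefers-reduced-motion: reduce)');
  const apply = () => {
    // '.animated' is an assumed class name for the page's animated elements.
    document.querySelectorAll('.animated').forEach((el) => {
      el.style.animationPlayState = reduceMotion(mql) ? 'paused' : 'running';
    });
  };
  apply();
  mql.addEventListener('change', apply); // react to live preference changes
}
```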

Marcy's recommendation was to use the Axe browser extension to quickly flag accessibility issues.

Why should one care about accessibility? Apart from empathy, a good driver might be the European Accessibility Act, which came fully into force in 2025!

She also said that building in accessibility from the start is cheaper than trying to retrofit it. Furthermore, one could publish an Accessibility Statement for a web site to be explicit about what is intended to be supported. And finally, it was news to me that there is an equivalent of caniuse.com for accessibility, namely a11ysupport.io, a compact reference answering 'will your code work with assistive technologies?'.

Michael Hladky: Big Data, Zero JS — Cross-Browser Virtual Scrolling

Slides. Michael gave a lovely, deeply technical talk with a charming, heavily Austrian accent 😊! Clear slides, fancy demos, great competence. For me, an alternative title would simply be Use content-visibility: auto!

He started by going deep into the browser's recalculate-style/layout/paint render pipeline. From there he motivated and explained CSS-based ways to restrict the amount of work in that pipeline, with obvious impact on page performance. To understand what CSS property triggers which steps in the pipeline, he referenced the handy CSS Triggers page. Armed with this knowledge, one can attempt to select less costly properties when given a choice.

He introduced a bookmarklet (slide 13) to guesstimate whether a given page induces a lot of paint/recalculate-style work. It works by programmatically zooming the document body slightly, with a zoom factor of 1.01; if already zoomed in, it zooms back to factor 1.0. Clicking the bookmarklet while recording a performance trace with DevTools will leave its mark in the trace, where render-pipeline steps are color-coded.
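
The toggle logic can be sketched as follows. This is my own reconstruction of the idea, not Michael's actual bookmarklet code:

```javascript
// Sketch of the zoom-toggle idea (not Michael's actual bookmarklet code).
// Nudging body zoom between 1.01 and 1 forces style/layout/paint work,
// which then shows up color-coded in a DevTools performance trace.
function nextZoom(currentZoom) {
  // Pure helper: flip between the slightly-zoomed and the normal state.
  return currentZoom === '1.01' ? '1' : '1.01';
}

if (typeof document !== 'undefined') {
  document.body.style.zoom = nextZoom(document.body.style.zoom);
}
```

Wrapped in `javascript:(() => { ... })()`, this becomes a bookmarklet you can click while a trace is recording.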

Michael launched into a series of demos using his own diagnostic test apps to show the impact of various parametrizations of the CSS contain property (layout, paint, ...) on the work the browser's rendering pipeline has to do. A useful composite parameter he recommends is contain:content (= paint + layout).

The culmination of his talk is the CSS one-liner content-visibility: auto, whereby the browser will ignore off-screen work completely! Hence the talk title: with big data imagined as a long list of items, one can suppress rendering the off-screen parts of the list without fancy JS-based virtual-scrolling implementations. One simply applies the CSS one-liner to list items instead.
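
A minimal sketch of applying the one-liner to a long list. The `.long-list > li` selector and the 80px size estimate are my own assumptions; `contain-intrinsic-size` is the standard companion property that gives skipped off-screen items a placeholder size so the scrollbar doesn't jump:

```javascript
// Minimal sketch: apply the one-liner to every item of a long list.
// CSS equivalent:
//   .long-list > li { content-visibility: auto; contain-intrinsic-size: auto 80px; }
function makeSkippable(el, estimatedHeight = '80px') {
  el.style.contentVisibility = 'auto'; // browser may skip off-screen rendering work
  el.style.containIntrinsicSize = `auto ${estimatedHeight}`; // placeholder size while skipped
  return el;
}

if (typeof document !== 'undefined') {
  document.querySelectorAll('.long-list > li').forEach((li) => makeSkippable(li));
}
```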

The real-life impact of using content-visibility: auto is a drop in the INP performance metric due to less work being done overall on the browser's main thread.

In the moderator-guided on-stage discussion after the talk, Michael was of the opinion that content-visibility: auto is completely under-used.

I agree. However, what Michael did not mention — and what weakens the strong claim of the talk title considerably — is that the number of DOM nodes held in RAM is not reduced by applying the magical CSS one-liner. I don't see how the effort of a JS-based virtual-scroller implementation can be avoided, especially on cheap CPU- or memory-limited smartphones that are customary in developing countries: only a virtual scroller keeps the number of DOM nodes constant even for very long lists, whereas a pure-CSS solution only addresses the rendering aspects of the problem, not the unbounded growth of DOM size.

Ines Akrap: Fast, green, responsible

Slides. Forgive me for being blunt, but this was the most forgettable talk of the conference. The speaker did not appear knowledgeable, was vague most of the time, paid lip service to the conference topic, and struggled to 'fatten' what can only be described as minuscule content into the required length of a full talk.

Quotes like "If you all move to Iceland, we can all stop talking about this" (on the environmental impact of web sites) made it hard for me to take her message seriously. Pro tip to the organizers: do not include a talk just because the message appears to be politically correct, but focus on the advertised conference theme and actively check talk content beforehand to ensure that what's on the tin is actually inside …

Umar Hansa: Modern Performance Workflows

Slides. Fortunately, Umar delivered a top talk right after the forgettable one. My brief summary alone cannot do it justice, studying the treasure trove that his slides are is mandatory. The title could be faithfully extended with "… using DevTools". Because that's what it mostly was for me: a very hands-on presentation centered on lesser-known developer tools in Google Chrome that motivates me to explore our beloved and feature-packed web-performance Swiss Army Knife more:

Umar then moved to the hot topic of AI-assisted performance diagnosis. He mentioned in passing AI wrappers such as GravityWrite (what a weird company name) and ended with Model Context Protocol (MCP) servers and agentic AI (= LLM tool calling in a loop). The vision with the latter would be to enable prompts like "Load my website, get Core Web Vitals, and improve it" to give satisfactory results.

Umar next explored the free AI assistance built into DevTools, summarizing it with the verdict: useful, not (yet) mind-blowing. Among the useful parts is AI code completion, known from IDEs and now available in DevTools, too.

He introduced DevTools workspaces, demystifying what the .well-known/appspecific/com.chrome.devtools.json requests showing up in my local web-server logs are good for.
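
As far as I understand the feature, a dev server can answer that request with a small JSON file telling DevTools which local source folder the served site maps to, so a workspace can be connected automatically. A sketch with made-up example values for the path and UUID:

```json
{
  "workspace": {
    "root": "/Users/me/projects/my-app",
    "uuid": "53b029bb-c989-4dca-969b-835fecec3717"
  }
}
```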

And he pointed to the very useful fact that Chrome DevTools now comes with its own MCP server, so AI can inquire about performance-related aspects of a web page in natural language — didn't know that (but had heard about Playwright's MCP, which he also featured).

There is much more in Umar's talk than what I can focus on here. Just one more thing: in the on-stage discussion the moderator asked him how he keeps up with all the new developments and hidden gems in DevTools. His answer was insightful to me: work in Chrome Canary, that's where the new stuff comes out earlier!

Ethan Gardner: Web Performance Allies

Slides. Ethan Gardner gave an important talk about the people side of web performance. He addressed questions I often have myself: how do I convince customers, managers, and non-technical people that performance matters?

He pointed to soft-skill gaps identified for typical software engineers in a 10-year-old study, which he thinks is as relevant today as it was back then. As just one example, in communication with a customer or manager it might help to avoid saying a harsh "No!" in favour of a softer "Not right now."

In order to bring marketing on board with a web-performance project, one could point out that marketing's role is to 'open new doors', whereas performance work 'keeps doors open'. Framed this way, marketing and performance engineering are now related, and marketing can become a performance ally.

And he listed a number of get-to-know questions for starting or motivating a web-performance project:

He mentioned that it might help to show performance leaderboard comparisons among a company and its competitors' web sites to attract interest from the C-suite of a company.

There is an entire website wpostats.com focussed on demonstrating the impact of web performance optimization (WPO) on user experience and business metrics. It could be used as a neutral reference for case studies and experiments if additional convincing was needed.

Ethan also suggested quoting user comments (from online forums, Google and the like) to customers seeking to improve their websites' performance, in an attempt to show them how ordinary people perceive performance issues.

He threw in a helpful book recommendation: "Making Numbers Count" ("Required Reading") should help performance consultants talk better about numbers with their customers. For example, don't talk in milliseconds, because people don't know what that means; rather, use comparisons to real-world events, like the difference between first and last place in the men's 100m sprint, or use pictures.

Furthermore, Ethan urged us to tell stories when convincing customers. He also advised to document criteria for what constitutes a good vs a bad third-party script, so customers can be provided guidance for their web-performance implementation. And he advocated for team working agreements, negotiated in advance between a customer and the web performance consultant, for example covering how to handle disagreement.

He pointed to TC39's how-we-work document as an inspiring example in this regard, and closed with yet another book recommendation, "Get to the Point", to sharpen those indispensable communication skills for the working web-performance consultant.

Tim Kadlec: Stubborn Empathy

Slides. Tim is a well-known web-performance consultant who now works at Cloudflare. His delightful opening talk on day 2 of the conference centered on the notion that web-performance consultants are to have stubborn empathy for users, defending their performance needs and expectations.

Users are on different browsers and experience different network speeds; the "localhost delusion" (Tim) does not do that justice.

A particularly noteworthy announcement from Tim was that apparently performance APIs to measure LCP, INP etc. are going to be cross-browser-available by the end of 2025 — can't wait for that.

He also pointed to https://github.com/cloudflare/telescope, a "diagnostic, cross-browser performance testing agent".

Tim in particular singled out 'AI slop' — poor AI-generated code — as a threat to performance, pointing to one study that found a 154% increase in average PR size, a 91% increase in review time, and a 9% increase in the number of bugs per developer, i.e. cases where the generated code did not work according to the specification.

A helpful distinction, he said, is to see that the machine (AI) operates on an abstraction level that does not exhibit accountability; that quality is reserved for human developers, who are critically mandated to protect their users. He quipped that there would be so much stuff for consultants to clean up as the result of AI slop.

Continuing his critical view on the impact of AI on web performance, he quoted from another study that found the crawl-to-referral ratio — how many HTML page crawls by bots result in one HTML page referral — for Anthropic to be a whopping 70,900:1, whereas Google's ratio is still a mere 9.4:1.

He closed with the timely exhortation to aim for unmeasurable, but superb quality and delight with empathy, pleasing humans in the process.

Andy Davies: Making sense of Long Animation Frames

Slides. Andy had a great talk on a new API for Long Animation Frames (LoAF). He started out by pointing out that we are shipping more and more JavaScript each year in web pages. While that is a concern for web performance, mere download size in itself is a bad proxy for performance impact, because little JavaScript could still have an outsized impact depending on what's in it.

He then defined LoAFs as any (contiguous) main-thread activity longer than 50 milliseconds. That activity is often dominated by initial script work, followed by style and layout, with painting pixels making up the bulk of the latter part of the animation frame.

According to Andy, certain LoAFs are expected: normally one before First Contentful Paint, because processing the initial HTML takes a lot of work, and often one around the domcontentloaded event, since a lot of deferred scripts are then executed in a single task (which, interestingly, might change according to browser vendors).

The new API, initially available in Chromium browsers, adds a new entry type to the performance APIs that can be used like this:

// Log every long animation frame observed on the page.
const observer = new PerformanceObserver((list) => {
  console.log(list.getEntries());
});

// buffered: true also delivers LoAF entries that occurred
// before this observer was registered.
observer.observe({ type: "long-animation-frame", buffered: true });

I was excited to hear that it can be used to pinpoint the causes of LoAF performance issues, because entries now contain vital information such as script URLs!
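
A sketch of what that attribution enables: aggregating LoAF blame per script. The entry shape (`entry.scripts[].sourceURL` and `.duration`) follows the Long Animation Frames spec; the aggregation helper itself is my own illustration, fed here with mock data instead of a live observer:

```javascript
// Sketch: aggregate LoAF blame per script URL. In a real page the entries
// would come from the PerformanceObserver callback shown above.
function blameByScript(loafEntries) {
  const totals = new Map();
  for (const entry of loafEntries) {
    for (const script of entry.scripts || []) {
      const url = script.sourceURL || '(inline)';
      totals.set(url, (totals.get(url) || 0) + script.duration);
    }
  }
  // Worst offenders first.
  return [...totals.entries()].sort((a, b) => b[1] - a[1]);
}
```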

He sprinkled his talk with witty questions like whether people drop NextJS after bad performance consequences, and the related pessimistic prediction that "AI is gonna make hype-driven development worse" (because it was trained on oh so many examples featuring hyped technologies).

For more, one can do no better than to study the original slides.

Michal Mocny: Interactions & (Soft) Navigations

Slides. Michal is from the Google Chrome team in Canada and gave an interesting talk about soft navigations, i.e. the kind occurring in SPAs without cross-page navigations.

While Chrome DevTools already shows Largest Contentful Paint (LCP), it is a static value calculated at page-load time. Behind a feature flag, Chrome can now measure Soft LCP, i.e. LCPs occurring after the initial load, for example as a result of interacting with an SPA over time.

Again, a new type 'soft-navigation' in the performance APIs unlocks that capability.
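
Mirroring the LoAF observer, usage could look like the sketch below. This is currently Chrome-only behind an experimental flag, so the API shape may still change; the `softNavUrls` helper is my own illustration:

```javascript
// Sketch: observing soft navigations (Chrome-only, behind an experimental
// flag; the API shape may still change).
function softNavUrls(entries) {
  // Pure helper: destination URL of each soft navigation.
  return entries.map((entry) => entry.name);
}

// Feature-detect via supportedEntryTypes so unsupported browsers stay silent.
if (typeof PerformanceObserver !== 'undefined' &&
    (PerformanceObserver.supportedEntryTypes || []).includes('soft-navigation')) {
  const observer = new PerformanceObserver((list) => {
    console.log('soft navigations:', softNavUrls(list.getEntries()));
  });
  observer.observe({ type: 'soft-navigation', buffered: true });
}
```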

He likes an idea by Rich Harris about 'transitional apps', a mix between Multi-Page Applications (MPAs) and SPAs: apps that work without JavaScript, giving fast initial performance, while being enhanced by JavaScript later on. He dreams about a 'Transitional' Web full of such performant apps.

Previously speculated to account for 15-30% of navigations, same-document (a.k.a. soft) navigations are now measured at 20-50%, an astonishingly high number that cries out for a proper API to measure it objectively.

The talk goes more deeply into the difficulty of defining soft navigation from a browser's perspective, and details about how the new API measures them and what to expect from it as well as some pitfalls to be aware of.

He closed on a humorous note, refusing to answer Rich Harris' question "Have single-page apps ruined the web?" in favour of "… go measure!".

Nadia Makarevitch: React Rendering Techniques — Comparing Initial Load Performance

Slides. Nadia, a developer, writer and speaker, gave an enlightening talk about the three common types of React rendering, client-side (CSR), server-side (SSR) and React Server Components (RSC). She implemented the same reasonably complex app featuring a dashboard and inbox in the three rendering paradigms and reported in detail on their comparative performance characteristics.

As expected, CSR performs the worst due to having to evaluate loads of JavaScript against an almost empty HTML page, giving bad LCP — albeit with good INP once the page is fully operational.

SSR on the other hand gives a better LCP due to shipping HTML and CSS early, at the price of a sometimes multi-second interactivity gap while JavaScript is still loading. This leads to worse INP.

Apps often rely on data that needs to be fetched e.g. from a database. Such data fetching complicates the picture, since it needs to be done before HTML generation in order to influence it, making SSR slower.

According to her measurements, RSC is not the all-around saviour one would expect: when components are split between client and server, only the latter can fetch data asynchronously, saving some time through parallelization. However, other dimensions failed to impress: she observed only a minimal effect on bundle size, LCP stayed the same as with 'traditional' SSR, and RSC came with the downside of requiring a major re-architecture of the app.

Barry Pollard: Speculations on Web Performance

Slides. Barry is a Chrome Developer Relations expert at Google and gave an enlightening talk about a new and flexible way to give hints to the browser on what resources to load next, called speculation rules.

Barry first reviewed better-known kinds of hints, like attribute-based loading=lazy on images and fetchpriority=high. To break up long-running JavaScript and improve INP as a result, he referred to await scheduler.yield() to give control back to the browser's event loop, with await globalThis.scheduler?.yield() making it robust for Safari, which doesn't support it yet.
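
A sketch of that yielding pattern in a long loop; the chunk size of 50 is my own arbitrary assumption:

```javascript
// Sketch: break a long loop into main-thread-friendly chunks. Where the
// Scheduling API is missing (e.g. Safari), globalThis.scheduler?.yield()
// evaluates to undefined and the await is a no-op, so this is safe
// cross-browser.
function shouldYield(index, chunkSize = 50) {
  // Pure helper: yield after every `chunkSize` items.
  return index > 0 && index % chunkSize === 0;
}

async function processAll(items, work, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i++) {
    if (shouldYield(i, chunkSize)) {
      // Hand control back to the event loop so user input can be handled
      // in between; this is what improves INP.
      await globalThis.scheduler?.yield();
    }
    results.push(work(items[i]));
  }
  return results;
}
```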

He then introduced the new script type <script type=speculationrules>, whose JSON-formatted body declaratively describes which resources to prefetch or even prerender. The novelty is that it can describe, in one place, the behaviour of resources referenced on an entire web page, using URL patterns or CSS selector expressions to cover groups of related resources:

<script type="speculationrules">
{
  "prefetch": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": {"href_matches": "/wp-admin"}},
        { "not": {"href_matches": "/*\\?*(^|&)add-to-cart=*"}},
        { "not": {"selector_matches": ".do-not-prefetch"}},
        { "not": {"selector_matches": "[rel~=nofollow]"}}
      ]
    }
  }]
}
</script>

Interestingly, this can also be specified via an HTTP header, for a less intrusive, almost magical way to speed websites up!

He pointed to actual websites like scalemates.com using this in the wild already and a blog post by Etsy, with big eCommerce players also starting to use it.

Fascinating for me was the fact that certain eagerness settings trigger prefetching on a sufficiently long mouse hover over a link, or on pointer-down, without the customary JavaScript that is otherwise necessary.

For now it's Chrome-only, but WebKit and Firefox show signs of following soon.

Vinicius Dellacqua: Teaching Agents about Performance Insights

Slides. Vinicius gave a stimulating overview talk about his efforts building a Performance AI assistant and agentic workflows on top of a fork of Chrome DevTools.

He illustrated the complexity of making sense of the raw, voluminous trace file that is so masterfully converted into visual representation in the Performance panel of DevTools, a task that his AI-powered side project perflab.io has to solve, too. One challenge is the limited context window of current LLMs ingesting such data.

In some other demos he used v0.app with a feedback loop to DevTools' now-built-in MCP server (!) to improve apps he built.

I did not know that DevTools has built-in "Debug with AI" functionality for performance traces — need to try this out!

Rich Harris: Fine-grained everything

Slides. Rich is famously the creator of Svelte, among many other things. He gave a thought-provoking talk about his current work with SvelteKit and the ideas and performance implications behind it.

Rich pointed out that we already know about fine-grained reactivity in the form of signals, for which he credits Ryan Carniato, and which made it into Svelte 5. As a result, Rich made the bold claim that rendering is a solved problem.
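
For readers new to the term: the core idea behind signals can be shown in a toy sketch. This is my own minimal illustration, emphatically not Svelte's implementation:

```javascript
// Toy sketch of the idea behind signals (not Svelte's implementation):
// a value remembers which computations read it, and a write re-runs only
// those readers, i.e. updates are fine-grained.
let activeEffect = null;

function signal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() {
      if (activeEffect) subscribers.add(activeEffect); // track the reader
      return value;
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn()); // only dependent effects re-run
    },
  };
}

function effect(fn) {
  activeEffect = fn;
  fn(); // the first run registers dependencies via get()
  activeEffect = null;
}
```

A UI built on this only re-runs the effects that actually read a changed value, rather than re-rendering a whole component tree.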

However, data is not a solved problem: the exact mechanics of requesting data e.g. from a database while an app is already up and running, and doing so repeatedly during the app's lifetime, constitutes a critical factor in app performance and good user experience. Solving that is the missing bit to make everything fine-grained.

Rich criticized React Server Components somewhat, as the server-returned payload describes everything on the page even where the task at hand is only about updating a single counter value, i.e. it is not fine-grained enough for him.

He talked about async Svelte and SvelteKit remote functions as ways to solve the problem. Optimistic local updates are going to be possible in SvelteKit before network updates bring the missing data; he claims that this result needs both signals and a compiler-based approach (I am sceptical about the need for the latter). The role of the compiler is, inter alia, to do data-dependency analysis.

He ended with a defense of SPAs against MPAs, which he finds flawed despite all their known performance advantages: they are not efficient, in the sense that they cause a lot of network-traffic duplication for near-duplicate pages during navigation, and they suffer from repeated re-parsing and re-evaluation of CSS and JS assets over the lifetime of a typical MPA user journey.

While I find his judgement correct, I couldn't help but think: what if we somehow downloaded only the delta between the current page and the one about to be navigated to, and stitched it into existing DOM? An interesting avenue to explore further!

Conclusion

This was a super inspirational conference! I can't wait to attend the next one, scheduled for November 19-20, 2026. And I have to study the slides and ideas more: lots of food for thought.

Tags: HTML, JavaScript, web performance, signals, React, English