
How Performance Metrics and Distributed Tracing Will Drive User Experience

Jan 9th, 2020 4:00pm

New Relic sponsored this podcast.

Increasingly in software development, user experience reigns supreme. But as architectures become more advanced and more distributed, finding not only what went wrong but what caused it becomes nearly impossible. Thankfully, tooling is emerging to meet the demand for a cross-system breadcrumb trail that pinpoints not only what went wrong but where it went wrong, with a step toward auto-healing in the future.

In this episode of The New Stack Makers podcast, The New Stack publisher Alex Williams sat down with Buddy Brewer, New Relic’s general manager for client-side monitoring, to talk about the user experience that New Relic’s customers are working to deliver to their own customers. They also spoke about how monitoring inside mobile and web apps for end-users — including perceived performance and distributed tracing — helps everyone understand just what that experience is.


Buddy Brewer – Driving the Next-Gen User Experience

When Brewer started in the software industry 20 years ago, building web applications was far more straightforward. You were essentially building monolithic applications for Internet Explorer on PCs connected via dial-up or wired Ethernet. Your app had just three tiers: database, server and browser.

As we know, since the early Aughts, complexity has grown exponentially. We now work with complex systems built around thousands of microservices, on seven tiers, via countless platforms and backed by massive digital organizations. And, as users have become much more sophisticated on so many types of devices, their expectations for excellence continue to grow. Even if we don’t put it in so many words, users are demanding zero latency.

Brewer says all successful development starts with establishing the expected user experience.

“Then work back from there and figure out, OK in order to make sure that that’s happening, what do I need to know, what do I need to be able to observe about my application at any given time in order to make sure that if something ever deviates from the quality of the experience that I want to give my customers, that I know about it, that I can take action on it, and that the information is sufficient to tell me what to do to get back to delivering that type of experience,” Brewer said.

Of course, the complexity of systems is reflected in the complexity of the metrics and, often, of the tooling used to measure them. And until about a decade ago, there really was no way to truly understand web performance and end-user experience. That’s when the World Wide Web Consortium (W3C) created the Web Performance Working Group to build partnerships among natural competitors and produce specifications, implemented by the major browsers, for tracking what Brewer refers to as the UX waterfall.
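For a concrete sense of what those specifications expose, here is a minimal sketch, in TypeScript, of reading the W3C Navigation Timing and Resource Timing entries that modern browsers implement. The breakdown below is illustrative only and is not New Relic's instrumentation.

// Minimal sketch: read Navigation Timing and Resource Timing entries
// to reconstruct a simple load "waterfall" for the current page.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

if (nav) {
  console.log('DNS lookup:', nav.domainLookupEnd - nav.domainLookupStart, 'ms');
  console.log('TCP connect:', nav.connectEnd - nav.connectStart, 'ms');
  console.log('Time to first byte:', nav.responseStart - nav.requestStart, 'ms');
  console.log('Content download:', nav.responseEnd - nav.responseStart, 'ms');
  console.log('DOM content loaded:', nav.domContentLoadedEventEnd, 'ms');
  console.log('Full page load:', nav.loadEventEnd, 'ms');
}

// Each sub-resource (scripts, images, XHR) gets its own timing entry.
for (const res of performance.getEntriesByType('resource') as PerformanceResourceTiming[]) {
  console.log(res.name, 'started at', res.startTime, 'took', res.duration, 'ms');
}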

Brewer says that as network performance was monitored and stabilized, executing code and managing CPU became the next bottleneck. The typical internet user does not have the state-of-the-art hardware and software setup that most people in tech do, and yet these lesser devices are still executing, or trying to execute, the same megabytes of JavaScript. This is where the demand arises to understand not just when something happens, but when the user can actually perceive it happening. This is known as perceived performance monitoring.
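Browsers surface some of those perceived-performance milestones directly. Below is a minimal sketch, assuming a browser that supports the paint and first-input performance entry types; it is illustrative and not how New Relic's agent works.

// Sketch: observe perceived-performance milestones the browser exposes.
// Entry-type support varies by browser.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // 'first-paint' / 'first-contentful-paint': when the user first sees something.
    console.log(entry.name, 'at', entry.startTime, 'ms');
  }
}).observe({ type: 'paint', buffered: true });

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    // Delay between the user's first interaction and its handler running:
    // a rough proxy for whether the page is actually interactive yet.
    console.log('first input delay:', entry.processingStart - entry.startTime, 'ms');
  }
}).observe({ type: 'first-input', buffered: true });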

“Fortunately those browser vendors have continued to build things that help us understand not just how long it takes to download all the content but how long it takes to parse, compile, and execute the JavaScript to make the application actually interactive, which is a very key moment in the application’s lifecycle today,” Brewer said.

He continued that it’s also about understanding when that code starts to lock up that main thread and create a bottleneck. Now there is tooling even to understand when JavaScript starts to “choke the application” and to recommend action.
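The Long Tasks API is one way browsers expose that kind of main-thread blocking. Here is a minimal sketch; the extra reporting threshold is an arbitrary choice for illustration, and how you report or act on these entries is up to your own monitoring setup.

// Sketch: flag long tasks (the browser reports anything over 50 ms) that block
// the main thread, the kind of JavaScript "choke point" Brewer describes.
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    if (task.duration > 100) {
      console.warn(`Main thread blocked for ${task.duration} ms at ${task.startTime} ms`);
    }
  }
}).observe({ type: 'longtask', buffered: true });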

So now we have all that data. New Relic came in to help people make sense of it.

The rest of this conversation between Williams and Brewer dives into the nitty-gritty of setting service-level agreements to drive your interactivity and perceived performance metrics, web performance and everything else you need to leverage real user monitoring. They talk about gaining actionable insight that connects the front end to the back end so you can spot the cause of a problem, too. And they talk about distributed tracing, the next big thing in the serverless world, because it traces the path of a request, uncovers latency in components along that path and pinpoints the bottlenecks.
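On the distributed tracing point, the W3C Trace Context specification defines the traceparent header that lets a browser request be stitched to the back-end spans it triggers. A minimal sketch follows; the /api/checkout endpoint is a hypothetical example, and the ID generation is a simplified stand-in for what a real tracing agent would do.

// Sketch: propagate a W3C Trace Context header from the browser to the back end
// so a front-end request can be joined to the server-side spans it triggers.
function randomHex(bytes: number): string {
  const buf = new Uint8Array(bytes);
  crypto.getRandomValues(buf);
  return Array.from(buf, (b) => b.toString(16).padStart(2, '0')).join('');
}

const traceId = randomHex(16); // 32 hex chars identifying the whole trace
const spanId = randomHex(8);   // 16 hex chars identifying this client-side span

// Hypothetical endpoint, used only to show the header being attached.
fetch('/api/checkout', {
  method: 'POST',
  headers: { traceparent: `00-${traceId}-${spanId}-01` },
  body: JSON.stringify({ cart: 'example' }),
}).then((res) => console.log('traced request status:', res.status));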

Photo by Austin Distel on Unsplash.
