— PROJECT NAME
Revamp the tracing timeline
— ROLE
Product designer
— COLLABORATE WITH
Wes Oudshoorn
Dimitrios Lytras
A tracing timeline shows the entire journey of a request across multiple services or components in chronological order, helping you quickly pinpoint problems, context, and errors for easier debugging.
The previous timeline design had many known issues. When we decided to launch OpenTelemetry support, it felt like a good opportunity to update the tracing timeline as well. This is the journey of how we did it.
We began by analyzing the issues with the current tracing timeline, based on previous customer feedback and internal user tests:
We kicked off this project with the goal of making the timeline data clearer and more accessible, so users can quickly find the information they need.
Previous timeline design
Next, we started listing the detailed information contained in each tracing timeline row (we call it a “span”).
Understanding the structure and type of each piece of information is essential to determine how we can filter the data effectively.
We organized the incoming raw data into the following structure.
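In code, that kind of structure might look something like the sketch below. The field names and types here are my own assumptions for illustration, not AppSignal's actual data model:

```typescript
// Hypothetical shape of a single timeline row ("span").
// All names here are illustrative assumptions.
type SpanKind = "query" | "error" | "event" | "http" | "custom";

interface Span {
  id: string;
  parentId: string | null; // null for the root span of the trace
  name: string;            // e.g. "SELECT FROM users"
  kind: SpanKind;
  startMs: number;         // offset from the start of the trace
  durationMs: number;
}

// Group spans by parent so the timeline can render them as a tree,
// with each level ordered chronologically.
function childrenByParent(spans: Span[]): Map<string | null, Span[]> {
  const map = new Map<string | null, Span[]>();
  for (const span of spans) {
    const siblings = map.get(span.parentId) ?? [];
    siblings.push(span);
    map.set(span.parentId, siblings);
  }
  for (const siblings of map.values()) {
    siblings.sort((a, b) => a.startMs - b.startMs);
  }
  return map;
}
```

Knowing which fields are categorical (like `kind`) and which are continuous (like `durationMs`) is exactly what determines which filter types make sense for each one.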
Special thanks to @Wes for helping with the communication.
At the same time, I conducted competitor research and analysis to understand how tools with similar functionalities to ours present information.
I analyzed the common features shared by competitors, aspects where we can differentiate ourselves, and points we can take inspiration from.
Mockups are always important for communicating effectively with developers. Rather than relying on verbal explanations, even simple mockups based on the information gathered so far facilitate mutual understanding and make communication more efficient.
Since AppSignal has a solid design system, I was able to quickly build mockups using existing components instead of wireframes. Using the concrete ideas we'd developed, we discussed with the developers which types of filters would work best.
Since there are multiple design options for each data type, I created them as variants so that they can be quickly presented during meetings. This allows us to combine and compare multiple options within a single screen.
Through many meetings with the developers, we made the following decisions:
Once we'd reached a general agreement with the developers on the structure during a prior meeting, I cleaned up the screens and sent them to the backend developers as a first step. After that, I started refining the UI details.
Once the UI started to take shape and we had a beta version ready for testing, we ran user tests.
We chose developers from the team who hadn’t seen the update before, so we could get fresh, unbiased feedback.
Before the test, we clarified what we wanted to learn and came up with specific questions to help uncover any pain points.
Structured user testing
We conducted three testing sessions focused on qualitative interviews. Through this testing, we identified some common feedback:
In the final version, we added the following details:
Final version
Lastly, we defined the interaction model and discussed the implementation with the developers.
1. Highlights grouped by category
2. Clicking a row pins its details to the information panel on the right
3. Icons provided for query, error, and event types, with buttons for easy filtering
4. Vertical guide lines to help users easily compare data starting points
Actual implementation on the product
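The type icons and filter buttons described above reduce to a simple predicate over the span list. A minimal sketch, assuming a simplified row shape (the names here are illustrative, not the actual implementation):

```typescript
// Simplified timeline row for the filtering example; field names
// are assumptions for illustration.
interface TimelineRow {
  name: string;
  kind: "query" | "error" | "event";
}

// Keep only rows whose kind is in the active filter set.
// An empty set means no filter is applied, so every row is shown.
function filterRows(rows: TimelineRow[], active: Set<string>): TimelineRow[] {
  if (active.size === 0) return rows;
  return rows.filter((row) => active.has(row.kind));
}
```

Treating "no active filters" as "show everything" keeps the default view complete, so clicking a type button only ever narrows the timeline rather than hiding it entirely.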