> [!NOTE]
> For documentation on Pyroscope's Rust integration, refer to the Rust push mode documentation.
This example shows a simplified, basic use case of Pyroscope: a "ride share" company with three endpoints, found in `main.rs`:

- `/bike`: calls the `order_bike(search_radius)` function to order a bike
- `/car`: calls the `order_car(search_radius)` function to order a car
- `/scooter`: calls the `order_scooter(search_radius)` function to order a scooter

The example also simulates running 3 distinct servers in 3 different regions (via `docker-compose.yml`).
Pyroscope lets you tag your data in a way that is meaningful to you. In this case, there are two natural divisions, so the data is tagged to represent them:

- `region`: statically tags the region of the server running the code
- `vehicle`: dynamically tags the endpoint (similar to how one might tag a controller)

Tagging something static, like the region, can be done with the builder's `tags` method in the initialization code in the `main` function:
```rust
let agent = PyroscopeAgent::builder(server_address, app_name.to_owned())
    .backend(pprof_backend(PprofConfig::new().sample_rate(100)))
    .tags(vec![("region", &region)])
    .build()?;
```
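In a multi-region deployment like this one, the value of the static tag typically comes from the container's environment. Below is a minimal, std-only sketch of that idea; the environment variable name `REGION` and the helper `region_tag` are assumptions for illustration, not part of the example's actual code.

```rust
use std::env;

// Hypothetical helper: resolve the static region tag from an environment
// variable that docker-compose.yml could set differently per container.
// "REGION" is an assumed variable name, not taken from the example.
fn region_tag() -> (String, String) {
    let region = env::var("REGION").unwrap_or_else(|_| "unknown".to_string());
    ("region".to_string(), region)
}

fn main() {
    let (key, value) = region_tag();
    // The resulting pair is what you would hand to the builder's tags method.
    println!("static tag: {key}={value}");
}
```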
Tagging something more dynamic can be done with the `tag_wrapper` method on a running agent. For example, you'd use code like this for the `vehicle` tag:
```rust
let (add_tag, remove_tag) = agent_running.tag_wrapper();
let add = Arc::new(add_tag);
let remove = Arc::new(remove_tag);

let car = warp::path("car").map(move || {
    add("vehicle".to_string(), "car".to_string());
    order_car(3);
    remove("vehicle".to_string(), "car".to_string());
    "Car ordered"
});
```
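To make the add/remove mechanics concrete, here is a std-only sketch that mimics the shape of `tag_wrapper` with a shared tag list. This is an illustration of the pattern only: the real `tag_wrapper` comes from the `pyroscope` crate and communicates with the profiler rather than a plain `Vec`.

```rust
use std::sync::{Arc, Mutex};

type Tags = Arc<Mutex<Vec<(String, String)>>>;

// Sketch of the tag_wrapper pattern: returns an "add" and a "remove"
// closure that mutate shared tag state. The profiler's version attaches
// tags to samples; here a Vec stands in for that state.
fn tag_wrapper(tags: &Tags) -> (impl Fn(String, String), impl Fn(String, String)) {
    let add_tags = Arc::clone(tags);
    let remove_tags = Arc::clone(tags);
    let add = move |k: String, v: String| add_tags.lock().unwrap().push((k, v));
    let remove = move |k: String, v: String| {
        remove_tags.lock().unwrap().retain(|(tk, tv)| !(tk == &k && tv == &v));
    };
    (add, remove)
}

fn main() {
    let tags: Tags = Arc::new(Mutex::new(Vec::new()));
    let (add, remove) = tag_wrapper(&tags);

    add("vehicle".to_string(), "car".to_string());
    // ... work done here would be sampled with vehicle=car attached ...
    assert_eq!(tags.lock().unwrap().len(), 1);

    remove("vehicle".to_string(), "car".to_string());
    assert!(tags.lock().unwrap().is_empty());
}
```

The two-closure design lets the route handler scope a tag tightly around a single function call, which is exactly how the `/car` handler above uses it.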
This block does the following:

1. Adds the tag `vehicle=car`
2. Calls the `order_car` function
3. Removes the tag `vehicle=car`

To run the example, use the following commands:
```shell
# Pull the latest Pyroscope and Grafana images:
docker pull grafana/pyroscope:latest
docker pull grafana/grafana:latest

# Run the example project:
docker-compose up --build

# Reset the database (if needed):
# docker-compose down
```
This example runs all the code mentioned above and also sends mock load to the 3 servers and their respective 3 endpoints. If you select `rust-ride-sharing-app` from the dropdown, you should see a flame graph. Wait 20-30 seconds for the flame graph to update, then click the refresh button to see 3 functions at the bottom of the flame graph consuming CPU resources _proportional to the size_ of their respective `search_radius` parameters.
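The proportionality follows from each order function doing CPU work that scales with its radius. The following is a hypothetical, std-only sketch of that load pattern; the function bodies and the bike/scooter radii are assumptions for illustration (only `order_car(3)` appears in the example above), and the real `main.rs` may differ.

```rust
// Hypothetical sketch: CPU time grows linearly with search_radius, so a
// larger radius shows up as a wider node in the flame graph.
fn find_nearest_vehicle(search_radius: u64) -> u64 {
    let mut acc = 0u64;
    for i in 0..search_radius * 1_000_000 {
        acc = acc.wrapping_add(i); // busy work standing in for a real search
    }
    acc
}

fn order_bike(search_radius: u64) { find_nearest_vehicle(search_radius); }
fn order_car(search_radius: u64) { find_nearest_vehicle(search_radius); }
fn order_scooter(search_radius: u64) { find_nearest_vehicle(search_radius); }

fn main() {
    // Assumed radii: with the largest radius, order_car burns the most CPU
    // and therefore dominates the flame graph.
    order_bike(1);
    order_car(3);
    order_scooter(2);
}
```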
To analyze a profile output from your application, take note of the largest node, which is where your application is spending the most resources. In this case, it happens to be the `order_car` function.

The Pyroscope package lets you investigate further as to why the `order_car()` function is problematic. Tagging both `region` and `vehicle` allows you to test two good hypotheses:

- Something is wrong with the `/car` endpoint code
- Something is wrong with one of the regions

To analyze this, select one or more tags on the "Labels" page:
Since you know there is an issue with the `order_car` function, select the `vehicle=car` tag. After inspecting multiple `region` tags, the timeline shows an issue with the `eu-north` region, where it alternates between high-CPU and low-CPU periods.
Note that the `mutex_lock()` function is consuming almost 70% of CPU resources during this time period.
While the difference in this case is stark enough to see in the comparison view, sometimes the diff between two flame graphs is better visualized with them overlaid on each other. Without changing any parameters, you can select the diff view tab and see the difference represented in a color-coded diff flame graph.
We have been beta testing this feature with several different companies and have seen a variety of ways that they tag their performance data.
We would love for you to try out this example and see how you can adapt it to your Rust application. Continuous profiling has become an increasingly popular tool for monitoring and debugging performance issues (arguably the fourth pillar of observability).

We'd like to keep improving our Rust integration, so let us know which features you would like to see.