The metrics-to-codebase link lets users look at the metrics first, which are more readable. So I would prefer to monitor metrics and discover issues on the dashboard as far as possible, and only then jump into the codebase.
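For reference, a minimal sketch of what that link looks like on the code side, assuming the Rust autometrics crate (exact metric and label names may differ between versions):

```rust
// Minimal sketch: the #[autometrics] attribute records call counts and
// latency for this function under a `function` label, so a spike on
// the dashboard points straight back to this spot in the codebase.
use autometrics::autometrics;

#[autometrics]
fn fetch_user(id: u64) -> Result<String, String> {
    // Implementation details live here; the dashboard only needs the
    // function-level metrics the attribute emits.
    Ok(format!("user-{id}"))
}

fn main() {
    let _ = fetch_user(42);
}
```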
I saw this project has useful links from functions to metrics, which is great. But IMHO, the codebase is mixed with lots of implementation details, which is not the first material to read (although the readability of the codebase is important). I personally use metrics like this: ideally, I monitor the overall metrics, and when I see a latency spike in some function, I can 1. check the function in the codebase, or 2. keep exploring the issue in the metrics (e.g. click on the data point and see the child nodes' data, which helps me find which child caused the issue, recursively; see the sketch below). That would make locating problems much faster for an engineer.
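A sketch of that drill-down, assuming the `caller` label that the autometrics spec attaches to instrumented functions (names are illustrative, and the exact metric and label names vary between versions):

```rust
// With parent and children all instrumented, a child's metrics carry a
// `caller` label, so a dashboard could filter the children's latency
// by caller="handle_request" to see which one caused the spike.
use autometrics::autometrics;

#[autometrics]
fn handle_request() {
    query_db();
    render_response();
}

#[autometrics]
fn query_db() { /* child 1: database work */ }

#[autometrics]
fn render_response() { /* child 2: rendering work */ }

fn main() {
    handle_request();
}
```

On the Prometheus side, a query along the lines of `sum by (function) (rate(function_calls_duration_seconds_sum{caller="handle_request"}[5m]))` (metric name assumed) would rank the children by time spent.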
Also, I think it would be great to have the ability to compare the performance of different functions (or the same function across different versions) in the metrics. Right now there only seems to be a time-based comparison for the same function.
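As a rough sketch of the function-vs-function case (comparing versions of the same function would instead need something like the `build_info` metric from the autometrics spec, joined on its version label):

```rust
// Two variants instrumented separately each get their own `function`
// label, so a dashboard can plot their latencies side by side.
use autometrics::autometrics;

#[autometrics]
fn query_db_v1() { /* current implementation */ }

#[autometrics]
fn query_db_v2() { /* candidate implementation */ }

fn main() {
    query_db_v1();
    query_db_v2();
}
```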
We could display some important information on the metrics (or at least allow users to customize it themselves), like the name of the most time-consuming function under the function we are checking.
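A hypothetical sketch of how that could be computed today, by asking Prometheus for the top callee by time via the `caller` label; the metric name, label names, Prometheus URL, and helper crates are all assumptions here, not an existing autometrics feature:

```rust
// Hypothetical: ask Prometheus which callee of `handle_request` spends
// the most time, so a UI could display it next to the parent function.
// Metric/label names and the Prometheus URL are assumptions.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let query = r#"topk(1, sum by (function) (
        rate(function_calls_duration_seconds_sum{caller="handle_request"}[5m])
    ))"#;
    let url = format!(
        "http://localhost:9090/api/v1/query?query={}",
        urlencoding::encode(query)
    );
    // The JSON result names the single most time-consuming child.
    let body = ureq::get(&url).call()?.into_string()?;
    println!("{body}");
    Ok(())
}
```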
Replies: 1 comment

These are all good points, and generally reflect the way autometrics is being developed. In response to your points specifically: