On 13 Apr 2016, at 10:23, Marat Khalili <mkh@rqc.ru> wrote:
I'm using the LatencyMON widget to monitor my network
performance. It's very convenient. Unfortunately, it
always loads with latency shown in % (of what?), not in
milliseconds, so I have to make one extra click in order
to view the actual milliseconds. Is there some hidden switch
that would make milliseconds the default? Shouldn't it be
the default in the first place?
Thanks for your comment; I will try to answer and give my
opinion.
The relative (percentage) representation is described as
follows: "The relative representation shows, in percentages,
how the values behave compared to the baseline, which is the
minimum latency collected in the time range for the specific
graph. Note that outliers have been removed. For example, if
the latencies collected oscillate between 30 and 90 ms, the
y-axis will have a range between 0 and 200%, as 30 ms will be
considered the baseline and 90 ms represents an increase of
200% over 30 ms."
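To make that arithmetic concrete, here is a minimal TypeScript
sketch (my own illustration, not the widget's actual code; the
outlier removal mentioned above is omitted) of how raw RTT
samples map onto the relative scale:

    // Illustration only: express RTT samples as a percentage increase over
    // the baseline, where the baseline is the minimum latency in the range.
    function toRelative(rttsMs: number[]): number[] {
      const baseline = Math.min(...rttsMs);              // e.g. 30 ms
      return rttsMs.map(rtt => ((rtt - baseline) / baseline) * 100);
    }

    // Samples oscillating between 30 and 90 ms span 0% (the 30 ms baseline)
    // up to 200% (90 ms is a 200% increase over 30 ms).
    console.log(toRelative([30, 45, 60, 90]));           // [0, 50, 100, 200]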
The relative representation lets the user focus on changes in
RTT over time and across geographic locations, instead of on a
direct comparison of raw millisecond values between the various
probes. Based on user requests, and on our own internal use,
this is the most common use case, especially for outage
analysis.
For example, if you have a probe in Canada and one in Italy
and the measurement target is in Germany, you would expect a
few more milliseconds from the one in Canada: that information
just pollutes the graphs. If something happens on the network,
you most likely want to know which probes were affected and
how, that is, how the RTT changed compared to what is
considered "normal" from that source.
In any case, you can force the measurement to open in ms (the
same goes for all the other parameters) if you embed the
widget in your HTML page/monitor/dashboard, as in the sketch
below.
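Just as a rough sketch of the idea (the function and option
names below are my assumptions for illustration, not
necessarily the real embedding API; please take the exact
snippet and option names from the widget's embedding
documentation):

    // HYPOTHETICAL embedding sketch: the point is only that the unit shown
    // by default is one of the parameters you can fix when you instantiate
    // the widget in your own page. Names below are assumed, not verified.
    declare function initLatencymon(
      container: string,                        // CSS selector of the target element
      widgetOptions: Record<string, unknown>,   // rendering options (units, etc.)
      queryOptions: Record<string, unknown>     // which measurements/probes to load
    ): void;

    initLatencymon(
      "#latencymon-container",
      { dataFilter: "natural" },                // assumed option: absolute ms instead of relative %
      { measurements: [1234567] }               // placeholder measurement ID(s) to plot
    );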
Sorry for the delayed answer; for more information, feel free
to contact me personally.
Ciao,
Massimo