Hello Massimo,

Thank you very much for your reply.
You can in any case force the measurement to open in ms (the same goes for all the other parameters) if you embed the widget in your HTML page/monitor/dashboard.
That's exactly what I'm doing: I've created a web page on my internal web server that contains the widget. However, I still cannot find any parameter or API that would allow me to select milliseconds. I've both read through the documentation page you pointed to and poked around the JavaScript object returned by initLatencymon, but I still don't see it.
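For reference, my embed page looks roughly like the sketch below. The script path and the measurement ID are placeholders, and the argument layout (target element, options object, query object) is just my reading of the documentation you linked; I could not find an options key that switches the default view to milliseconds:

    <div id="latencymon" style="width: 800px; height: 300px;"></div>
    <!-- the actual widget script URL is the one given on the LatencyMON documentation page -->
    <script src="latencymon-widget-main.js"></script>
    <script>
      // target element, widget options, query parameters
      var monitor = initLatencymon(
        "#latencymon",
        {},                          // options object: nothing ms-related that I can see
        { measurements: [1234567] }  // placeholder measurement ID
      );
    </script>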

That said, a reference point in milliseconds recently appeared on the charts, which makes this somewhat less of a problem for me.

The relative representation allows the user to focus on changes in RTT over time and geographic space, instead of a pure comparison of millisecond values among the various probes.
Based on user requests, and also on our own internal use, this is the most common use case, especially for outage analysis.
My (mis)use case is different: I'm trying to monitor one particular link that's important for me, using a single probe and multiple nearby destinations. In this case absolute values matter: the relative charts may look perfectly normal while the absolute values are elevated from 1-2 to 10+ milliseconds because of link overload, which is not good.

--

With Best Regards,
Marat Khalili

On 09/05/16 18:25, Massimo Candela wrote:
Hi Marat,

On 13 Apr 2016, at 10:23, Marat Khalili <mkh@rqc.ru> wrote:

I'm using the LatencyMON widget to monitor my network performance. It's very convenient. Unfortunately, it always loads with latency shown in percentages (of what?), not in milliseconds, so I have to make one extra click to view the actual milliseconds. Is there some hidden switch that would make milliseconds the default? Shouldn't it be the default in the first place?


Thanks for your comment; I will try to answer and give my opinion.

Here you can find the documentation: https://atlas.ripe.net/docs/tools-latencymon/
According to it: "The relative representation shows, in percentages, how the values behave compared to the baseline, which is the minimum latency collected in the time range for the specific graph. Note that outliers have been removed.
For example, if the latencies collected oscillate between 30 and 90 ms, the y-axis will have a range between 0 and 200%, as 30 ms will be considered the baseline and 90 ms represents an increase of 200% over 30 ms.”
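(In other words, the plotted value is essentially (RTT - baseline) / baseline * 100%: with a 30 ms baseline, 90 ms gives (90 - 30) / 30 * 100% = 200%.)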

The relative representation allows the user to focus on changes in RTT over time and geographic space, instead of a pure comparison of millisecond values among the various probes.
Based on user requests, and also on our own internal use, this is the most common use case, especially for outage analysis.
For example, if you have a probe in Canada and one in Italy, and the target used in the measurement is in Germany, you would expect some extra milliseconds from the one in Canada: this information is just going to pollute the graphs.
If something happens on the network, you probably want to know which probes were affected and how, i.e. what the difference in RTT is compared to what is considered “normal” from that source.

You can in any case force the measurement to open in ms (the same goes for all the other parameters) if you embed the widget in your HTML page/monitor/dashboard.

Sorry for the delay in answering; for more information, feel free to contact me personally.

Ciao,
Massimo



