Feature-suggestion: "round-robin" probes
Hello,

I'm figuring out how RIPE Atlas could be most useful for monitoring our Anycast network, and came up with the following idea. The Atlas network has a big advantage over other monitoring/measurement networks: its sheer size (and topological diversity). Rather than using Atlas for pure performance measurements, I would love to be able to use it for reachability measurements (to understand which Anycast location attracts traffic from which region, and how this changes over time). These reachability / topology-discovery measurements do not need to be nearly as frequent as performance measurements, but should utilize all available probes.

So, instead of using 10 probes to run a continuous performance test (say, a DNS query every 300 seconds), I would rather use a very high number of probes (best case: "all"), but have each probe perform e.g. just one traceroute per day (even once per week would be enough). This would give me an excellent overview of the topology, but is not possible under the current UDM limits.

However, instead of allowing users to allocate measurements to a much higher number of probes, this functionality could also be achieved by adding a "probe round-robin" feature to the control infrastructure, for example "randomly allocate a new set of probes every xx seconds". If that feature existed, I could implement the above reachability test with the following parameters:

Test: traceroute
Number of probes: 10
Measurement interval: 3600
Swap probe-set interval: 10800 (NEW feature)

This would re-allocate new probes every 3 hours, theoretically working through all 1500 probes about every month. An obvious alternative would be functionality where I could "queue" single-shot measurements for the whole network (since chances are low I would get every probe if they are randomly assigned).

Comments? Suggestions? Alternatives?
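As a back-of-the-envelope check of the proposed parameters, here is a toy Python simulation of the swap-probe-set idea. The controller logic, probe IDs, and the sampling-without-replacement policy are all illustrative assumptions, not existing Atlas behaviour:

```python
import random

def rotation_schedule(probe_ids, set_size, swap_interval):
    """Yield (start_time, probe_set) pairs, drawing a fresh random set of
    probes at every swap interval until every probe has been used once."""
    remaining = set(probe_ids)
    t = 0
    while remaining:
        batch = random.sample(sorted(remaining), min(set_size, len(remaining)))
        remaining.difference_update(batch)
        yield t, batch
        t += swap_interval

# The example from the mail: 1500 probes, 10 at a time, swapped every 3 hours.
swaps = list(rotation_schedule(range(1500), 10, 10800))
full_sweep_seconds = len(swaps) * 10800
print(full_sweep_seconds / 86400)  # 18.75 days for a full sweep
```

With sampling without replacement a full sweep takes under three weeks; with purely random assignment it takes longer (a coupon-collector situation), which is why the "queue single-shot measurements for the whole network" alternative would be needed to guarantee full coverage.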
Alex Mayrhofer Head of R&D nic.at
Hi Alexander, On 04/27/12 12:20, Alexander Mayrhofer wrote:
i'm figuring how RIPE atlas could be most useful for assisting in monitoring our Ancast Network
Cool. So are we. :-)
comments? Suggestions? Alternatives?
I suppose we would be interested in two parameters:

A) getting a generic sense of the overall global reachability;
B) knowing when one of the nodes has a problem.

For A) we would be fine with 'slow' measurements, performed with a large, or in any case representative, number of probes. For B) only a few, strategically located probes would work, but they should measure on a frequent, near real-time basis (and it would help if they are located on stable network connections instead of a flaky home-user link).

I have believed from the beginning that Atlas could be of use here, in one way or another, and we would be more than happy to discuss this further. We are also interested in doing other DNS research/measurements, namely into the number of DNSSEC-validating resolvers at ISPs. Not sure if the Atlas system could be of use there (though I don't really see why not).

Has SamKnows been mentioned here? There's quite some resemblance between SamKnows and Atlas, and chances for synergy as well. https://www.samknows.eu/

-- Marco
On 4/27/12 13:09 , Marco Davids (SIDN) wrote:
For B) only a few, strategically located probes would work, but they should measure on a frequent, near real-time basis (and it would help if they are located on stable network-connections instead of a flaky home-user link).
There is a plan to install 'Atlas anchor' boxes that are very stable and can be used both as targets and as powerful Atlas probes.
We are also interested in doing other DNS research/measurements, namely into the number of DNSSEC-validating resolvers at ISPs. Not sure if the Atlas system could be of use there (though I don't really see why not).
That should come relatively soon. We are almost done with firmware that can generate DNSSEC queries and send queries through the probes' local resolvers.
On Fri, 27 Apr 2012 13:09:39 +0200 "Marco Davids (SIDN)" <marco.davids@sidn.nl> wrote:
Has SamKnows been mentioned here? There's quite some resemblance between SamKnows and Atlas, and chances for synergy as well.
Yes. They are both closed-source projects with secret roadmaps, for no good reason, which forces similar projects that prefer FOSS licences and development models to re-implement software performing active and passive measurement primitives in FOSS fashion, and hopefully with a better-thought-out architecture to cope with a wide range of needs and with coordinating large-scale campaigns run on end users' "computers" :)

Cheers.
-- Jérôme Benoit aka fraggle La Météo du Net - http://grenouille.com OpenPGP Key ID : 9FE9161D Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D
On 5/4/12 23:17 , Jérôme Benoit wrote:
On Fri, 27 Apr 2012 13:09:39 +0200 "Marco Davids (SIDN)"<marco.davids@sidn.nl> wrote:
Has SamKnows been mentioned here? There's quite some resemblance between SamKnows and Atlas, and chances for synergy as well.

Yes. They are both closed-source projects with secret roadmaps, for no good reason, which forces similar projects that prefer FOSS licences and development models to re-implement software performing active and passive measurement primitives in FOSS fashion, and hopefully with a better-thought-out architecture to cope with a wide range of needs and with coordinating large-scale campaigns run on end users' "computers" :)
I don't know about SamKnows, but for RIPE Atlas, our talks contain a lot of detail about how the system works. To the extent that there is a well-defined roadmap, we talk about it.

As much as I like open source projects, I really don't see it working for the current Atlas system. The probes are just too fragile, and the whole system is too complex. Does releasing the Atlas source benefit the RIPE community? I don't know. Fortunately, that is not my decision to make.

There is not a lot of magic in Atlas. It is mostly just hard work to get all the details right. If you have a dedicated team in an open source project, then you should be able to duplicate our work. You can always ask for feedback on any design, or ask how we do things.

One word of warning though: try to avoid the second-system effect. If your system is going to do everything, it may never get there.
On Sat, 05 May 2012 12:22:15 +0200 Philip Homburg <philip.homburg@ripe.net> wrote:
On 5/4/12 23:17 , Jérôme Benoit wrote:
On Fri, 27 Apr 2012 13:09:39 +0200 "Marco Davids (SIDN)"<marco.davids@sidn.nl> wrote:
Has SamKnows been mentioned here? There's quite some resemblance between SamKnows and Atlas, and chances for synergy as well.

Yes. They are both closed-source projects with secret roadmaps, for no good reason, which forces similar projects that prefer FOSS licences and development models to re-implement software performing active and passive measurement primitives in FOSS fashion, and hopefully with a better-thought-out architecture to cope with a wide range of needs and with coordinating large-scale campaigns run on end users' "computers" :)
I don't know about SamKnows, but for RIPE Atlas, talks contain a lot of details about how the system works. To the extent that there is a well defined road map, we talk about it.
I'm having a hard time finding the roadmap and the talks on the RIPE Atlas web site as a public user with no account. But I might not have searched enough. Where are they?
As much as I like open source projects, I really don't see it working for the current Atlas system.
It works very well; we have been doing it for ages, but we took a very different approach. The measurement agent is designed as a standalone component with these requirements:

* It must be portable, which means all measurements must be implemented natively.
* Datagram-based measurements are divided into four components:
  - packet forging
  - packet injection
  - packet capture
  - packet filtering
  Each component's work is mapped to a dedicated thread (an Lwt job, more precisely, which brings a very nice way to manipulate packets asynchronously).
* Other measurements can be mapped with a dynamically loaded plugin (the API is not stable yet).
* A syntax (not yet finished) permits expressing what a measurement will do. The syntax is intended to express what the software's functionalities are and how the components interact with each other. A working example, just to show the idea:

{"probe":                // the probe module that will be dynamically loaded
  {"name": ["delay", "round_trip", "icmp", "bin"]
  // send and receive sample definition: the time sequence
  // that the measurement will follow while running
  ,"send":
    {"seq":              // repeat 3 times, every 30 min; other keywords are
                         // gamma, uniform, poisson; seq can recurse
                         // under conditions
      {"repeat":
        {"count": 3
        ,"seq": {"periodic": {"period": 1800.0}}
        }
      }
    }
  ,"recv":
    {"seq":
      {"repeat":         // repeat the sample measurement
                         // 2 times, every 3 seconds
        {"count": 2
        ,"seq": {"periodic": {"period": 3.0}}
        }
      }
    }
  ,"parallel": [         // measurement-module-specific configuration,
                         // here destination and ICMP type 8 packet size
    {"mark": null
    ,"data": {"host": "free.fr", "size": 16}
    }
   ,{"mark": null
    ,"data": {"host": "google.fr", "size": 16}
    }
  ]
  }
// where to send the sample result (file, stdout, RESTful API URI)
,"sample": {"file": "-"}
}

I do not have enough time to describe the whole design in one mail, but the goal is to permit end-to-end measurements from the network edge, with user participation, on end-user "computers" or dedicated hardware, to coordinate them, and to compute analysis on the measurement results.
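To make the draft syntax concrete, here is a small Python sketch that parses a trimmed (comment-free) version of such a probe definition and expands the nested repeat/periodic schedule into concrete send offsets. The schedule semantics are my reading of the example; the syntax is explicitly not final, so treat this as an illustration rather than a reference implementation:

```python
import json

# Trimmed version of the draft grenouille probe definition (comments removed
# so it is valid JSON; field names as in the mail, semantics assumed).
spec = json.loads("""
{
  "probe": {
    "name": ["delay", "round_trip", "icmp", "bin"],
    "send": {"seq": {"repeat": {"count": 3, "seq": {"periodic": {"period": 1800.0}}}}},
    "recv": {"seq": {"repeat": {"count": 2, "seq": {"periodic": {"period": 3.0}}}}},
    "parallel": [
      {"mark": null, "data": {"host": "free.fr", "size": 16}},
      {"mark": null, "data": {"host": "google.fr", "size": 16}}
    ]
  },
  "sample": {"file": "-"}
}
""")

def expand_seq(seq, t0=0.0):
    """Expand the nested repeat/periodic schedule into concrete time offsets.
    This interpretation of the keywords is an assumption based on the example."""
    if "repeat" in seq:
        rep = seq["repeat"]
        times, t = [], t0
        for _ in range(rep["count"]):
            times.extend(expand_seq(rep["seq"], t))
            t = times[-1]
        return times
    if "periodic" in seq:
        return [t0 + seq["periodic"]["period"]]
    raise ValueError("unknown sequence keyword: %r" % sorted(seq))

send_times = expand_seq(spec["probe"]["send"]["seq"])
print(send_times)  # [1800.0, 3600.0, 5400.0]
```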
The probes are just too fragile; the whole system is too complex.
? I fail to understand this argument. Every single piece of software is a complex system; that has never been an argument against open-sourcing it. As for the fragility, that is probably a design issue of the probe (but I can't know for sure, as there is no source code to review).
Does releasing the Atlas source benefit the RIPE community? I don't know. Fortunately, that is not my decision to make.
That's RIPE's call. I think RIPE made a mistake by not going open source from the beginning.
There is not a lot of magic in Atlas. It is mostly just hard work to get all the details right.
We're working hard and we are getting the details very right, one piece after another, the measurement agent first :) For example, the difference between the wire-time timestamp and the system-time timestamp, and timestamping-error calibration for the datagram-based measurements.
If you have a dedicated team in an open source project, then you should be able to duplicate our work. You can always ask for feedback on any design, or ask how we do things.
I do not know the list of active measurements an Atlas probe covers.
One word of warning though: try to avoid the second-system effect. If your system is going to do everything, it may never get there.
It's a revamp of an old system, which will not be disconnected from our first user base until that user base has understood what we're trying to achieve and the migration plan is complete. The release cycle will be designed to cope with scalability issues (we have a huge user base). Don't worry, we know what we are doing; we're not a young project, and we are redoing a running system just like yours :)

-- Jérôme Benoit aka fraggle La Météo du Net - http://grenouille.com OpenPGP Key ID : 9FE9161D Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D
On 5/5/12 16:43 , Jérôme Benoit wrote:
On Sat, 05 May 2012 12:22:15 +0200 Philip Homburg<philip.homburg@ripe.net> wrote:
I don't know about SamKnows, but for RIPE Atlas, our talks contain a lot of detail about how the system works. To the extent that there is a well-defined roadmap, we talk about it.

I'm having a hard time finding the roadmap and the talks on the RIPE Atlas web site as a public user with no account. But I might not have searched enough. Where are they?
There is not a single comprehensive roadmap; what is there is just bits and pieces. Most of it was discussed during the most recent RIPE Meeting. The talks by Robert and Daniel should be online. To list some things that have been mentioned:

* TTM shutdown; Atlas is expected to provide functionality similar (but not identical) to what TTM provides
* DNSMON gets moved to Atlas
* Roll-out of Atlas Anchor boxes (regular PCs at well-connected locations that can serve as the target of measurements and as a more powerful Atlas probe)
* Measurements for IPv6 launch
* Better UDM interface
* UDM for all RIPE members (instead of just probe hosts and sponsors)
As much as I like open source projects, I really don't see it working for the current Atlas system.

It works very well; we have been doing it for ages, but we took a very different approach:
The measurement agent is designed as a standalone component with these requirements:
* It must be portable, which means all measurements must be implemented natively.
* Datagram-based measurements are divided into four components (packet forging, injection, capture, and filtering), each mapped to a dedicated thread (an Lwt job, more precisely, which brings a very nice way to manipulate packets asynchronously).

Note that for Atlas, it has to work on an underpowered CPU without an MMU and with 8 MB of memory.

* Other measurements can be mapped with a dynamically loaded plugin (the API is not stable yet).
* A syntax (not yet finished) permits expressing what a measurement will do. The syntax is intended to express what the software's functionalities are and how the components interact with each other. I gave a working example just to show the idea.
However, I think this would be a great area for the community to work on. Plenty of people have expressed interest in something more general than what is currently implemented in Atlas. So if people can come up with a design that is both secure and works on 8 MB probes, then maybe that can be used to create a common interface to the different measurement platforms.
The probes are just too fragile; the whole system is too complex.

? I fail to understand this argument. Every single piece of software is a complex system; that has never been an argument against open-sourcing it. As for the fragility, that is probably a design issue of the probe (but I can't know for sure, as there is no source code to review).
Running an open source project is not a goal of the RIPE NCC. If that's supposed to be a goal then the members will have to ask for it through the appropriate channels.
Does releasing the Atlas source benefit the RIPE community? I don't know. Fortunately, that is not my decision to make.

That's RIPE's call. I think RIPE made a mistake by not going open source from the beginning.

I guess opinions differ there.
If you have a dedicated team in an open source project, then you should be able to duplicate our work. You can always ask for feedback on any design, or ask how we do things.

I do not know the list of active measurements an Atlas probe covers.
At the moment, ping, traceroute, tdig (dns), httpget, sslgetcert.
On Mon, 07 May 2012 14:15:26 +0200 Philip Homburg <philip.homburg@ripe.net> wrote:
I'm having a hard time finding the roadmap and the talks on the RIPE Atlas web site as a public user with no account. But I might not have searched enough. Where are they?
There is not a single comprehensive roadmap; what is there is just bits and pieces. Most of it was discussed during the most recent RIPE Meeting. The talks by Robert and Daniel should be online.
To list some things that have been mentioned:
* TTM shutdown, Atlas is expected to provide functionality similar (but not identical) to what TTM provides
You plan to change the measurement control protocol used in TTM? I do not know which protocol TTM is using, but if it's OWAMP or TWAMP, that will not fit the needs of large-scale measurement campaigns run from the network edge.
* DNSMON gets moved to Atlas
In what Atlas calls a "probe" (and what I call the measurement agent)?
* Roll out of Atlas Anchor boxes (regular PCs at well connected locations that can serve as the target of measurements and as a more powerful Atlas probe)
Sounds like a good idea :) You should then add a tag to the measurement result that permits distinguishing the type of box running the measurement agent, like

"generated": atlas-probe
"generated": atlas-box

for example.
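A consumer-side sketch of that tagging (the "generated" field is the suggestion above, not an actual Atlas result field):

```python
def tag_result(result, device_type):
    """Return a copy of a measurement result with a 'generated' tag added,
    so consumers can tell a small hardware probe from an anchor box.
    The field name follows the suggestion above and is hypothetical."""
    tagged = dict(result)
    tagged["generated"] = device_type
    return tagged

r = tag_result({"id": "1001", "rtt": 4.139}, "atlas-box")
print(r["generated"])  # atlas-box
```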
* Better UDM interface
* UDM for all RIPE members (instead of just probe hosts and sponsors)
Eye candy.
* A syntax (not yet finished) permits expressing what a measurement will do. The syntax is intended to express what the software's functionalities are and how the components interact with each other. I gave a working example just to show the idea.
However, I think this would be a great area for the community to work on. Plenty of people have expressed interest in something more general than what is currently implemented in Atlas.
So if people can come up with a design that is both secure and works on 8 MB probes, then maybe that can be used to create a common interface to the different measurement platforms.
If you have any document describing the JSON syntax used in Atlas, I can write the code for your measurement agent to de-/serialize the probe definition, to begin with. If you have the same for the REST API, the implementation can be done as well. grenouille_config (the component that reads the probe definition) is modular and pluggable, just like grenouille_sample (the component that sends the result).

The 8 MB limit will be the hard part. Since the agent is written in OCaml, it will mainly be a matter of tuning the GC and profiling the data structures to avoid large chunk allocations. We do not have security problems on the implementation side, thanks to the choice of OCaml. The security mechanism will probably be the same as the one you can find on most "web services" (a shared secret, salted and hashed) to ensure that a REST transaction is legitimate. I have to think about it some more...

For API and JSON syntax standardisation, the first step is to write down the specifications we (grenouille.com) plan to use and those Atlas uses and plans to use, then discuss and factor out the best of each. We have some writings, but most of them are in French :)
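One common way to implement such a shared-secret scheme is an HMAC over the request body plus a timestamp (to limit replay). This is a generic sketch, not grenouille's or Atlas's actual mechanism; the key and field names are invented:

```python
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"not-a-real-key"  # would be provisioned per agent

def sign_request(body, secret):
    """Sign a REST payload with an HMAC-SHA256 over a timestamp and the
    canonicalised body, so the server can check the transaction is legit."""
    payload = json.dumps(body, sort_keys=True).encode()
    ts = str(int(time.time()))
    mac = hmac.new(secret, ts.encode() + b"." + payload, hashlib.sha256).hexdigest()
    return {"body": body, "ts": ts, "mac": mac}

def verify_request(msg, secret, max_skew=300):
    """Recompute the MAC and reject stale or tampered messages."""
    payload = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(secret, msg["ts"].encode() + b"." + payload,
                        hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(msg["ts"])) <= max_skew
    return fresh and hmac.compare_digest(expected, msg["mac"])

msg = sign_request({"measurement": "ping", "target": "free.fr"}, SHARED_SECRET)
print(verify_request(msg, SHARED_SECRET))  # True
```

Note this only authenticates the transaction; it does not by itself address which measurements a probe should be allowed to run.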
I do not know the list of active measurements an Atlas probe covers.
At the moment, ping, traceroute, tdig (dns), httpget, sslgetcert.
Natively implemented, or run via an external binary and CLI options?

Regards,
-- Jérôme Benoit aka fraggle La Météo du Net - http://grenouille.com OpenPGP Key ID : 9FE9161D Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D
On Mon, 07 May 2012 14:15:26 +0200 Philip Homburg<philip.homburg@ripe.net> wrote:
To list some things that have been mentioned:
* TTM shutdown, Atlas is expected to provide functionality similar (but not identical) to what TTM provides
On 5/7/12 21:44, Jérôme Benoit wrote:

You plan to change the measurement control protocol used in TTM? I do not know which protocol TTM is using, but if it's OWAMP or TWAMP, that will not fit the needs of large-scale measurement campaigns run from the network edge.

TTM boxes have GPS devices for time synchronization. That allows them to perform accurate one-way measurements. This capability will be lost. The TTM network is relatively small and static. Atlas can easily handle that, except you will be limited to two-way measurements.
In what Atlas calls a "probe" (and what I call the measurement agent)?
Yes. Except that an Atlas probe tends to be a physical device as well.
* Roll-out of Atlas Anchor boxes (regular PCs at well-connected locations that can serve as the target of measurements and as a more powerful Atlas probe)

Sounds like a good idea :) You should then add a tag to the measurement result that permits distinguishing the type of box running the measurement agent, like

"generated": atlas-probe
"generated": atlas-box

for example.

We still have to figure out where we want to document metadata. It doesn't make much sense to put all the data about a probe in each and every measurement result.
* Better UDM interface
* UDM for all RIPE members (instead of just probe hosts and sponsors)

Eye candy.

No, it is not eye candy. UDM allows users of the Atlas system to measure their own targets using remote probes.

If you have any document describing the JSON syntax used in Atlas, I can write the code for your measurement agent to de-/serialize the probe definition, to begin with.

Commands for the probes are not in JSON. For output we are still transitioning to JSON. Currently the output is a mix of JSON metadata and free-form ASCII output. In the next firmware upgrade that should become just JSON. For example, for ping:

{
  "id": "1001",
  "fw": 4414,
  "time": 1331729380,
  "name": "193.0.14.129",
  "addr": "193.0.14.129",
  "srcaddr": "193.0.10.135",
  "mode": "ICMP4",
  "ttl": 62,
  "size": 20,
  "result": [
    { "rtt": 49.101000 },
    { "rtt": 6.899000 },
    { "rtt": 4.139000 }
  ]
}

We have this as internal documentation, but it should be published some time.
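As a sketch of what consuming that pre-release ping format could look like (field names are taken from the example above and may change; how lost packets would be encoded is a guess):

```python
import json

# Reconstructed from the ping result example quoted in the mail.
raw = """{"id": "1001", "fw": 4414, "time": 1331729380,
          "name": "193.0.14.129", "addr": "193.0.14.129",
          "srcaddr": "193.0.10.135", "mode": "ICMP4",
          "ttl": 62, "size": 20,
          "result": [{"rtt": 49.101}, {"rtt": 6.899}, {"rtt": 4.139}]}"""

res = json.loads(raw)
# Skip any entries without an "rtt" key (assumed here to mean a lost packet).
rtts = [r["rtt"] for r in res["result"] if "rtt" in r]
print(min(rtts), max(rtts), sum(rtts) / len(rtts))
```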
We do not have security problems on the implementation side, thanks to the choice of OCaml. The security mechanism will probably be the same as the one you can find on most "web services" (a shared secret, salted and hashed) to ensure that a REST transaction is legitimate. I have to think about it some more...
Our security policy goes further than just protecting the probe. We also try to avoid getting the probe hosts in trouble. For example, having a probe visit certain web sites may be a bad idea.
For API and JSON syntax standardisation, the first step is to write down the specifications we (grenouille.com) plan to use and those Atlas uses and plans to use, then discuss and factor out the best of each. We have some writings, but most of them are in French :)
Yes.
I do not know the list of active measurements an Atlas probe covers.
At the moment, ping, traceroute, tdig (dns), httpget, sslgetcert.
Natively implemented, or run via an external binary and CLI options?

Natively implemented. Creating lots of new processes turned out to be a bad idea on a system without an MMU.
On Wed, 09 May 2012 11:20:38 +0200 Philip Homburg <philip.homburg@ripe.net> wrote:
* TTM shutdown; Atlas is expected to provide functionality similar (but not identical) to what TTM provides

You plan to change the measurement control protocol used in TTM? I do not know which protocol TTM is using, but if it's OWAMP or TWAMP, that will not fit the needs of large-scale measurement campaigns run from the network edge.

TTM boxes have GPS devices for time synchronization. That allows them to perform accurate one-way measurements. This capability will be lost.

Maybe not. There are two ways to cope with the loss of the GPS device:

* The smart one: if you use a Linux or FreeBSD kernel, you can use http://www.cubinlab.ee.unimelb.edu.au/radclock/ which is very accurate (less than a microsecond) and cheap.
* The dumb one: the timekeeping is done by the control/configuration server, meaning that no timestamp is sent to the "probe", only the difference between the current time and the measurement timestamp as calculated on the server. It's not very accurate, but it permits doing measurements without needing the "probe" to be synchronized. The main problem here is that the protocol is so dumb that the time the packet spends in flight is never counted anywhere. If one-second accuracy is enough, it's the simplest solution.
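For comparison, the classic four-timestamp exchange used by NTP-style protocols is another well-known way to estimate a probe's clock offset and path delay without GPS; a minimal sketch (the timestamps are invented for illustration):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Classic NTP four-timestamp exchange: t0/t3 taken on the client clock,
    t1/t2 on the server clock. Returns (clock offset, round-trip delay).
    One-way delay can then be approximated as delay/2, assuming a roughly
    symmetric path."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: client clock runs 5 s ahead of the server; ~40 ms each way.
t0 = 100.000  # client sends (client clock)
t1 = 95.040   # server receives (server clock)
t2 = 95.050   # server replies (server clock)
t3 = 100.090  # client receives (client clock)
offset, delay = ntp_offset_and_delay(t0, t1, t2, t3)
print(offset, delay)  # approx -5.0 and 0.08
```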
The TTM network is relatively small and static. Atlas can easily handle that, except you will be limited to two-way measurements.
In what Atlas calls a "probe" (and what I call the measurement agent)?
Yes. Except that an Atlas probe tends to be a physical device as well.
I hope the host functionalities and the measurement functionalities have been designed with some kind of separation :p
* Roll-out of Atlas Anchor boxes (regular PCs at well-connected locations that can serve as the target of measurements and as a more powerful Atlas probe)

Sounds like a good idea :) You should then add a tag to the measurement result that permits distinguishing the type of box running the measurement agent, like

"generated": atlas-probe
"generated": atlas-box

for example.

We still have to figure out where we want to document metadata. It doesn't make much sense to put all the data about a probe in each and every measurement result.
In some kind of "hello" command that describes the probe by itself, only. In your system, the measurement agent will probably expose two orthogonal APIs:

* one to write a measurement;
* one to gather information on the measurement host (OS type, packets in flight, and so on).

I say "probably" because we have also discussed a pipelined architecture of measurements that would not make this kind of distinction:

measurement 1 -OK-> measurement 2 -OK-> measurement 3
                |
                KO -> stop measurement

This permits mimicking a decision tree without much added complexity: measurement 2 waits for a result that might be conditional before being run. The latter is probably the better option.
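The pipelined alternative is straightforward to sketch; the step names and return conventions here are invented for illustration:

```python
def run_pipeline(steps):
    """Run measurements in sequence; each step only runs if the previous one
    succeeded, mimicking the OK/KO decision-tree pipelining described above.
    Each step is (name, callable) where the callable returns (ok, data)."""
    results = []
    for name, fn in steps:
        ok, data = fn()
        results.append((name, ok, data))
        if not ok:
            break  # KO -> stop the pipeline
    return results

# Hypothetical measurement steps with canned outcomes.
steps = [
    ("ping",       lambda: (True, {"rtt": 4.1})),
    ("traceroute", lambda: (False, {"error": "timeout"})),
    ("dns",        lambda: (True, {"rcode": 0})),  # never reached
]
results = run_pipeline(steps)
print([name for name, ok, _ in results])  # ['ping', 'traceroute']
```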
* Better UDM interface
* UDM for all RIPE members (instead of just probe hosts and sponsors)

Eye candy.
No, it is not eye candy. UDM allows users of the Atlas system to measure their own targets using remote probes.
I see. The interface is then the protocol used by users to define their own measurements. The workflow should define, in some way, a "central authority" that permits the measurement to be run. As far as I can see, a UDM is a moderated measurement-campaign generator that can be hosted anywhere, as long as the approved configurations go to the configuration server and are distributed to the "probes".
We have this as internal documentation, but it should be published some time.
Let me know when the dust has settled and RIPE publishes them.
We do not have security problems on the implementation side, thanks to the choice of OCaml. The security mechanism will probably be the same as the one you can find on most "web services" (a shared secret, salted and hashed) to ensure that a REST transaction is legitimate. I have to think about it some more...
Our security policy goes further than just protecting the probe. We also try to avoid getting the probe hosts in trouble. For example, having a probe visit certain web sites may be a bad idea.
Is the policy enforced on the configuration server? (That is what we intend to implement, via some kind of automation where possible.) I should write down the big picture after some talks by the end of May.
For API and JSON syntax standardisation, the first step is to write down the specifications we (grenouille.com) plan to use and those Atlas uses and plans to use, then discuss and factor out the best of each. We have some writings, but most of them are in French :)
Yes.
Great. We're going to translate some of what is already written and document the whole architecture in detail.

Cheers.
-- Jérôme Benoit aka fraggle La Météo du Net - http://grenouille.com OpenPGP Key ID : 9FE9161D Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D
On Wed, 9 May 2012 23:32:25 +0200 Jérôme Benoit <jerome.benoit@grenouille.com> wrote:
We have this as internal documentation, but it should be published some time.
Let me know when the dust has settled and RIPE publishes them.
For API and JSON syntax standardisation, the first step is to write down the specifications we (grenouille.com) plan to use and those Atlas uses and plans to use, then discuss and factor out the best of each. We have some writings, but most of them are in French :)

Yes.
Great. We're going to translate some of what is already written and document the whole architecture in detail.
First draft online : http://doc.grenouille.com/index.en.html I know, there's still a lot of TODO :) Regards, -- Jérôme Benoit aka fraggle La Météo du Net - http://grenouille.com OpenPGP Key ID : 9FE9161D Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D
On 4/27/12 12:20 , Alexander Mayrhofer wrote:
So, instead of using 10 probes to run a continuous performance test (say, a DNS query every 300 seconds), I would rather use a very high number of probes (best case: "all"), but have each probe perform e.g. just one traceroute per day (even once per week would be enough). This would give me an excellent overview of the topology, but is not possible under the current UDM limits.

Yes, I think this would be useful in many cases. Unfortunately, I have no idea when we will get around to actually implementing it.
Well, a suggestion. Suppose that every time a probe does its test, it also reads <something> from RIPE that tells it the address of the target to probe next. This would allow RIPE (passing along your instructions, or executing its own) to implement any of a variety of scenarios, and to change them from time to time.

On Apr 27, 2012, at 3:20 AM, Alexander Mayrhofer wrote:
On 5/5/12 1:47 , Fred Baker wrote:
Well, a suggestion.
Suppose that every time a probe does its test, it also reads <something> from RIPE that tells it the address of the target to probe next. This would allow RIPE (passing along your instructions, or executing its own) to implement any of a variety of scenarios, and to change them from time to time.
That is already there: probes are under full control of the Atlas infrastructure. The problem is more that the Atlas backend is a very complex distributed system, so it takes time before the necessary data structures and database tables are in place to really express the concept of looping over all probes.
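Fred's pull-based idea, the controller handing out the next target whenever a probe reports in, can be sketched as a toy controller (class and method names are illustrative, not an Atlas API):

```python
import itertools

class TargetController:
    """Toy controller: each time a probe finishes a test it asks for the
    next target, and the controller walks a target list round-robin.
    Swapping the cycle for any other policy changes the scenario without
    touching the probes, which is the point of the suggestion."""
    def __init__(self, targets):
        self._cycle = itertools.cycle(targets)

    def next_target(self, probe_id):
        # probe_id is unused here but a real policy could key on it.
        return next(self._cycle)

ctl = TargetController(["anycast-1.example", "anycast-2.example"])
seen = [ctl.next_target(probe_id=7) for _ in range(4)]
print(seen)
```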
participants (5)

- Alexander Mayrhofer
- Fred Baker
- Jérôme Benoit
- Marco Davids (SIDN)
- Philip Homburg