On Thu, 7 Nov 2013, Jen Linkova wrote:
> My concern is as follows: let's say I create a UDM to send packets with some dodgy ext. header combination, run it from a large number of probes, and suddenly trigger a nasty bug that crashes routers across the Internet. First of all, the distributed nature of the measurement might have a much larger impact on Internet stability compared to me running such an experiment from my local machine at home. That's why I believe the number of probes running the test simultaneously should be limited.
Sure, if this is easy to implement, we could do this initially. Otherwise my thinking was to use TCP or the like anyhow, so if there was a lot of packet loss, there wouldn't be many packets either. I mean, we're talking about access to web servers anyhow, right?
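Roughly what I have in mind per probe, just as a sketch (the hostname is a placeholder): a single TCP connect attempt, so even if everything is dropped on the path, the stack only retransmits the SYN a few times and then gives up.

import socket

# Sketch only, placeholder target: one TCP connect attempt per probe.
# If the path black-holes our packets, the stack retransmits the SYN a few
# times and then gives up, so a broken destination costs each probe almost
# no extra traffic.
target = ("www.example.com", 80)   # placeholder web server
try:
    with socket.create_connection(target, timeout=10) as s:
        print("connected from", s.getsockname())
except OSError as exc:
    print("no connection:", exc)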
> Secondly, probe hosts might get into trouble for their probes sending "malicious" packets, so I think hosts should be aware of potentially risky measurements running on their probes. A host might be willing to cooperate, but nobody wants unpleasant surprises.
Let's start with the fragmentation header and see what happens. We could do other tests, such as destination options, at a later date.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se
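A rough idea of what the first fragmentation-header probe could look like (scapy sketch, untested, placeholder address; the later destination options test would just swap the extension header):

from scapy.all import IPv6, IPv6ExtHdrFragment, IPv6ExtHdrDestOpt, TCP, sr1

dst = "2001:db8::80"   # placeholder web server address

# TCP SYN to port 80 carrying a fragmentation header (an "atomic" fragment:
# offset 0, more-fragments flag 0), which some boxes handle badly.
syn = IPv6(dst=dst) / IPv6ExtHdrFragment(offset=0, m=0) / TCP(dport=80, flags="S")

# Later variant: a destination options header instead, e.g.
#   IPv6(dst=dst) / IPv6ExtHdrDestOpt() / TCP(dport=80, flags="S")

reply = sr1(syn, timeout=5)        # needs raw-socket privileges
print("reply received" if reply else "no reply (dropped or filtered)")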