i am not against this in principle. but i want to see a hypothesis and an experimental design which can produce something real.
hypothesis: atlas probes show a bias toward clueful locations. unless there is a calibration set with a different, or better yet known, bias, what useful thing is actually being measured? unless it's a name and shame game. and then you will want to know if things 'improve' over time, which means it is not a one-shot. i do not think RIPE probes are representative of the larger Internet, because they are generally run by more clueful individuals.
my point. i have been calling route views "feeds from the 'clue core'."
In that sense, they represent an "ideal situation".
it may be a bit of a stretch from 'bias' to 'ideal'.
People who know and more often than not care.
some months ago, the ncc said over half the probes were behind nats. i.e. clue core folk took them home, where they are likely behind someone else's broadband network. then again, if you think most of the botnets are behind broadband home networks, it makes an interesting sample. compare spoof density of natted vs un-natted. but then, how you gonna spoof from behind a nat? as i said, a real hypothesis and an experimental design to test it. i guess i am still stuck in middle school science class.
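the natted vs un-natted comparison above could be sketched as a trivial bit of analysis. everything here is invented for illustration: the probe ids, the nat flags, and the spoof outcomes are made-up placeholders, not real Atlas or Spoofer data.

```python
# hypothetical probe results: (probe_id, behind_nat, spoof_escaped)
# all values are invented; a real experiment would collect per-probe
# results from an actual spoof-test client.
results = [
    ("p1", True, False),
    ("p2", True, False),
    ("p3", False, True),
    ("p4", False, False),
    ("p5", True, False),
    ("p6", False, True),
]

def spoof_density(rows):
    """fraction of probes from which a spoofed packet escaped."""
    if not rows:
        return 0.0
    return sum(1 for _, _, escaped in rows if escaped) / len(rows)

natted = [r for r in results if r[1]]
un_natted = [r for r in results if not r[1]]

print("natted spoof density:", spoof_density(natted))
print("un-natted spoof density:", spoof_density(un_natted))
```

with real data, a large gap between the two densities would itself be a finding, though as noted, a probe behind a nat may simply be unable to emit a spoofed packet at all, which confounds the comparison.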
If the results from this ideal situation are that a majority of probes can spoof, we might as well give up on source address validation.
ok, i give. with an already flawed measure, how did you conclude that a majority of probes can spoof? and, if the majority of probes are behind nats, and you can not figure out how to spoof from behind a nat, then you can declare victory, such as it is. as i have said many times, i do not think ranting has done much in recent years. and having some data ain't gonna help.

randy