What about the last mile, was: getting DNSSEC deployed
Kudos to TDC Song, C&W and others. Real nice that they're validating data. So what about Joe End User? Are there initiatives to offer TSIG/SIG(0)/DTLS between user and ISP? Are there initiatives to deploy code at the OS level, similar to what the NLnet Labs and Sparta folk are building for the application level? Are the aforementioned providers deploying either of these two sets of solutions to their end users? Or is it all just security theater?

Bring DNS validation to where DNS requests are initiated and where the data is consumed: at the end user. That part is still vulnerable to spoofing while we're trying to secure the invisible infrastructure. Note that with end user validation, and well-established methods to update the end users' certificate store, we might be well on our way. See also: http://dnss.ec/blog/?p=10

Sure, signing the root is crucial, and I'm not convinced DLV is a viable alternative, but that's all meaningless if layers 6/7 don't get some fondling.

Roy
* Roy Arends wrote:
Note that with end user validation, and well established methods to update the end users' certificate store, we might be well on our way.
See also: http://dnss.ec/blog/?p=10
IBTD. You can run a caching validating resolver on your own system. If you do not want this, you have to use a stub resolver. A stub resolver means that you have an established link to an authenticated resolver. This resolver has to do the DNSSEC validation. If your application wants to validate DNSSEC itself, there exists a request format to get the responses unvalidated. Following this proposal in the blog, DNSSEC is dead.
Lutz Donnerhacke wrote on 02/16/2007 09:24:33 AM:
* Roy Arends wrote:
Note that with end user validation, and well established methods to update the end users' certificate store, we might be well on our way.
See also: http://dnss.ec/blog/?p=10
IBTD.
Please don't beg.
You can run a caching validating resolver on your own system.
Isn't that what I was saying? I just don't want to do all the recursion. My ISP's resolver can do that.
If you do not want this, you have to use a stub resolver. A stub resolver means that you have an established link to an authenticated resolver. This resolver has to do the DNSSEC validation.
Not really. I can also validate on a stub resolver.
If your application wants to validate DNSSEC itself, there exists a request format to get the responses unvalidated.
Yeah, I think I've read that in some internet draft somewhere.
Following this proposal in the blog, DNSSEC is dead.
Tell me, Lutz, how does Joe End User run a full-featured validating resolver daemon, when he barely understands the concept of DNS? If he shouldn't run this, how does he set up "an established link to an authenticated resolver"? You're not really referring to just a bunch of addresses in some resolv.conf or equivalent, since that's hardly an established link. The ISP's resolver hardly knows who's talking to it.

Now, let's assume for a sec we don't run into scaling issues, since the "authenticated resolver" needs to do some crypto for the "established link", while doing some crypto to validate messages. Why should I trust data validated by my ISP? Them ISPs route me to a search page, while I should've received an NXDOMAIN. But, no fear, the 'ad' bit is set, and I can just blindly trust my ISP, while they're cashing (no typo) in on my unfortunate misspellings.

Roy
* Roy Arends wrote:
Lutz Donnerhacke wrote on 02/16/2007 09:24:33 AM:
You can run a caching validating resolver on your own system.
Isn't that what I was saying ? I just don't want to do all the recursion. My ISP's resolver can do that.
So use it for this.
Not really. I can also validate on a stub resolver.
I wouldn't call this "stub". A stub resolver is a protocol translator: it offers a well-known API to a well-known protocol. It does nothing more of the protocol itself.
Following this proposal in the blog, DNSSEC is dead.
Tell me, Lutz, how does Joe End User run a full-featured validating resolver daemon, when he barely understands the concept of DNS?
The end user has a stub resolver pointing to a trustworthy validating one. It's as plain and simple as that. If you want to break this behavior, DNSSEC is dead.
If he shouldn't run this, how does he set up "an established link to an authenticated resolver"? You're not really referring to just a bunch of addresses in some resolv.conf or equivalent, since that's hardly an established link. The ISP's resolver hardly knows who's talking to it.
I'm responsible for DNS at an ISP: the ISP's resolver knows who queries it.
Now, let's assume for a sec we don't run into scaling issues, since the "authenticated resolver" needs to do some crypto for the "established link", while doing some crypto to validate messages.
DNSSEC validation on a larger resolver does scale well, because - and that's the important observation I made - a lot of queries can be answered from cached NSEC records without querying further. The number of NXDOMAINs dropped by about 70% here. Crypto is cheap compared to networking.
Why should I trust data, validated by my ISP?
Because you choose him to do so.
Them ISPs route me to a search page, while I should've received an NXDOMAIN. But, no fear, the 'ad' bit is set, and I can just blindly trust my ISP, while they're cashing (no typo) in on my unfortunate misspellings.
If you do not trust your ISP, you need another one, or you want validating protocols, i.e. a VPN to a trustworthy point. DNSSEC for end users is not a security issue, it's a deployment issue.
On Feb 16, 2007, at 09:20, Lutz Donnerhacke wrote:
DNSSEC validating on a larger resolver does scale well, because - that's the important observation I made - a lot of queries can be answered from cached NSEC records without querying further. The whole bunch of NXDOMAIN dropped by about 70% here.
It would be good to get some real numbers here. And to find out what happens to the already-crypto-validated-and-cached RRSIGs when their TTLs and "best before" dates change. Dropping the NXDOMAINs by 70% seems very strange. If the same number of queries are being made as before, what answers are they getting back instead of NXDOMAIN? Aha! It must be SERVFAIL because DNSSEC validation failed. :-)
Crypto is cheap compared to networking.
Please explain how you arrive at this conclusion. Crypto is never cheap, especially the cost of the human factors in things like key management. Example: adding a host to some network is much less work than configuring SSH on that host and distributing its host key(s). I would like to know how running a cryptosystem is cheaper than moving bits around, all other things being equal.
* Jim Reid wrote:
It would be good to get some real numbers here.
Yep.
Dropping the NXDOMAINs by 70% seems very strange. If the same number of queries are being made as before, what answers are they getting back instead of NXDOMAIN?
*g* The good answers are usually cached on the customer side. Only the bad queries are resent after a short negative caching period. The validating resolver does not itself requery those questions, but responds with NXDOMAIN from a cached and valid NSEC.
Crypto is cheap compared to networking.
Please explain how you arrive at this conclusion.
RRSIG validation occurs on every freshly received record; the result of the validation is then cached. OTOH, resolving a query recursively requires at least one packet exchange with a remote system. This takes time. Comparing timings, I conclude that time_validating = time_queryDNSSEC + time_validation + n*time_lookup and time_recursing = n*time_query are not in a strict order for every n. Speaking for the locally hosted signed zones (~500), I observe a big win. The win would be much bigger if the root were signed (because the resolver would know from cache which TLDs do not exist), so setting up a signed root for ourselves is a probable project in the near future.
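The inequality above can be sketched numerically. The timings below are made-up illustrative values, not measurements from the poster's setup: the point is only that one network round trip plus one validation amortizes quickly over repeated cache hits.

```python
# Toy model of the two cost formulas from the post above: with validation,
# a name is fetched and validated once (time_queryDNSSEC + time_validation)
# and the next n hits are local cache lookups; without DNSSEC-aware caching,
# each of the n repeated (e.g. negative) queries costs a full round trip.
# All timings are illustrative assumptions, in seconds.

def time_validating(n, t_query_dnssec=0.030, t_validation=0.002, t_lookup=0.0001):
    return t_query_dnssec + t_validation + n * t_lookup

def time_recursing(n, t_query=0.030):
    return n * t_query

# For a single query the validating path is slower...
print(time_validating(1) > time_recursing(1))   # True
# ...but repeated queries amortize the crypto almost immediately:
print(time_validating(10) < time_recursing(10)) # True
```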
On Fri, Feb 16, 2007 at 10:20:58AM +0000, Lutz Donnerhacke wrote:
The win would be much bigger if the root were signed (because the resolver would know from cache which TLDs do not exist), so setting up a signed root for ourselves is a probable project in the near future.
If lowering response times for QNAMEs falling into non-existent TLDs (or reducing garbage sent to the root servers) is your goal, why wait for DNSSEC? Just make your recursive server authoritative for the root zone (all caveats apply) and be done. I'm neither questioning nor recommending this approach, but I'm a bit concerned to see side effects (real or perceived) sold as benefits for DNSSEC, where these benefits don't exist. -Peter
* Peter Koch wrote:
I'm a bit concerned to see side effects (real or perceived) sold as benefits for DNSSEC, where these benefits don't exist.
Of course, but the side effects are the cool things which make the pig fly. Examples: SSHFP is great for admins. CERT is great for those guys wanting VeriSign to disappear. Aggressive negative caches are great for DNS admins. Zone enumeration is great for 'some' people. Larger responses are great for qmail haters. Security buzzwords are great for financial institutes suffering phishing. Signing the root is great for fundamentalists. DLVs are great for always-alternative geeks. Did I miss something?
On Fri, Feb 16, 2007 at 09:20:09AM +0000, Lutz Donnerhacke wrote:
DNSSEC validation on a larger resolver does scale well, because - and that's the important observation I made - a lot of queries can be answered from cached NSEC records without querying further. The number of NXDOMAINs dropped by about 70% here. Crypto is cheap compared to networking.
Are you suggesting that a) since most of the queries are repeated ones leading to NXDOMAIN you can take advantage of the response being cached and not in need of re-validation, or b) you have and use an implementation, that -- in violation of the DNSSEC specification -- applies "aggressive negative caching"? In case of (a) I'd not understand the drop rate, for (b) I'd like to read a name. -Peter
* Peter Koch wrote:
b) you have and use an implementation, that -- in violation of the DNSSEC specification -- applies "aggressive negative caching"?
Of course, it's a slightly modified BIND. What's wrong with using the NSEC data for negative caching? Example:

Q: avalon.iks-jena.de. AAAA [query the authoritative]
A: avalon NSEC awstats.iks-jena.de. A MX TXT LOC SSHFP RRSIG NSEC

Q: avalon.iks-jena.de. HINFO
A: avalon NSEC awstats.iks-jena.de. A MX TXT LOC SSHFP RRSIG NSEC

Q: avatar.iks-jena.de. A
A: avalon NSEC awstats.iks-jena.de. A MX TXT LOC SSHFP RRSIG NSEC

I do _not_ extend the lifetime of the NSEC over the TTL, based on the RRSIG end date.
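The denial logic in the example above can be written out as a sketch. This is a deliberate simplification of RFC 4034's canonical ordering (no escaped-octet, label-by-label byte comparison) and ignores wildcards, so it is illustrative only, not production resolver code:

```python
# Sketch of "aggressive negative caching" from a cached NSEC record: an NSEC
# proving a gap in the zone can answer later queries falling into that gap.
# Simplified canonical ordering: lowercase labels compared rightmost-first.

def canonical_key(name: str):
    # DNSSEC canonical ordering sorts by labels, rightmost label first,
    # case-insensitively (simplified from RFC 4034, section 6.1).
    return list(reversed(name.rstrip(".").lower().split(".")))

def nsec_denies(owner: str, next_name: str, qname: str, qtype: str, type_bitmap: set) -> bool:
    """Can a cached NSEC (owner -> next_name) answer qname/qtype negatively?"""
    k_owner, k_next, k_q = canonical_key(owner), canonical_key(next_name), canonical_key(qname)
    if k_q == k_owner:
        # Name exists: NODATA if the type is absent from the NSEC type bitmap.
        return qtype not in type_bitmap
    # Name falls strictly between owner and next: proven NXDOMAIN.
    return k_owner < k_q < k_next

# The cached record from the example above:
# avalon.iks-jena.de. NSEC awstats.iks-jena.de. A MX TXT LOC SSHFP RRSIG NSEC
bitmap = {"A", "MX", "TXT", "LOC", "SSHFP", "RRSIG", "NSEC"}
print(nsec_denies("avalon.iks-jena.de", "awstats.iks-jena.de",
                  "avalon.iks-jena.de", "HINFO", bitmap))  # NODATA: True
print(nsec_denies("avalon.iks-jena.de", "awstats.iks-jena.de",
                  "avatar.iks-jena.de", "A", bitmap))      # NXDOMAIN: True
```

Note this is exactly the behavior RFC 4035 section 4.5 (quoted later in the thread) cautions against; the thread's disagreement is about policy, not mechanics.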
On Fri, Feb 16, 2007 at 10:29:41AM +0000, Lutz Donnerhacke <lutz@iks-jena.de> wrote a message of 20 lines which said:
Of course, it's a slightly modified bind. What's wrong with using the NSEC data for negative caching?
RFC 4035, "4.5. Response Caching" In theory, a resolver could use wildcards or NSEC RRs to generate positive and negative responses (respectively) until the TTL or signatures on the records in question expire. However, it seems prudent for resolvers to avoid blocking new authoritative data or synthesizing new data on their own. Resolvers that follow this recommendation will have a more consistent view of the namespace.
* Stephane Bortzmeyer wrote:
On Fri, Feb 16, 2007 at 10:29:41AM +0000, Lutz Donnerhacke <lutz@iks-jena.de> wrote
Of course, it's a slightly modified bind. What's wrong with using the NSEC data for negative caching?
RFC 4035, "4.5. Response Caching"
In theory, a resolver could use wildcards or NSEC RRs to generate positive and negative responses (respectively) until the TTL or signatures on the records in question expire. However, it seems prudent for resolvers to avoid blocking new authoritative data or synthesizing new data on their own. Resolvers that follow this recommendation will have a more consistent view of the namespace.
I do not block new authoritative data, because I listen for NOTIFY on 232.<crc24-of-canonical-soa's-name> and ff35:8000:<crc32-...>. If the authoritative server sends a NOTIFY to this group, the zone data is pruned on the listening resolver. Originally the multicast hack was done to update (hidden) secondaries behind firewalls. Using multicast, there is no need for punching holes into the firewall.
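The zone-to-group mapping above could look roughly like the following. The exact CRC variants and address layout are the poster's local convention and are not specified in the message, so this sketch assumes the OpenPGP CRC-24 (RFC 4880) and the standard CRC-32, computed over the lowercased canonical SOA name:

```python
# Hypothetical sketch: derive the NOTIFY multicast groups (232.<crc24> in
# the IPv4 SSM range, ff35:8000:<crc32> for IPv6) from a zone name.
# CRC choices and the IPv6 address layout are assumptions, not the
# poster's actual scheme.
import zlib

def crc24(data: bytes) -> int:
    # CRC-24 as used by OpenPGP (RFC 4880): init 0xB704CE, poly 0x1864CFB.
    crc = 0xB704CE
    for byte in data:
        crc ^= byte << 16
        for _ in range(8):
            crc <<= 1
            if crc & 0x1000000:
                crc ^= 0x1864CFB
    return crc & 0xFFFFFF

def notify_groups(zone: str):
    name = zone.rstrip(".").lower().encode() + b"."  # canonical form
    c24 = crc24(name)
    c32 = zlib.crc32(name) & 0xFFFFFFFF
    v4 = "232.%d.%d.%d" % ((c24 >> 16) & 0xFF, (c24 >> 8) & 0xFF, c24 & 0xFF)
    v6 = "ff35:8000::%x:%x" % (c32 >> 16, c32 & 0xFFFF)
    return v4, v6

print(notify_groups("iks-jena.de"))
```

Both sides can compute the group independently, which is the point: secondaries join the group and the primary needs no per-host configuration.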
Lutz Donnerhacke wrote on 02/16/2007 10:20:09 AM:
* Roy Arends wrote:
Lutz Donnerhacke wrote on 02/16/2007 09:24:33 AM:
You can run a caching validating resolver on your own system.
Isn't that what I was saying ? I just don't want to do all the recursion. My ISP's resolver can do that.
So use it for this.
I just explained why I don't want to do that.
Not really. I can also validate on a stub resolver.
I wouldn't call this "stub". A stub resolver is a protocol translator: it offers a well-known API to a well-known protocol. It does nothing more of the protocol itself.
I'd call this a "security-aware stub resolver" (RFC 4033, section 2).
Following this proposal in the blog, DNSSEC is dead.
Tell me, Lutz, how does Joe End User run a full-featured validating resolver daemon, when he barely understands the concept of DNS?
The end user has a stub resolver pointing to a trustworthy validating one. It's as plain and simple as that. If you want to break this behavior, DNSSEC is dead.
explain to me how DNSSEC is dead by doing validation on a stub resolver.
If he shouldn't run this, how does he set up "an established link to an authenticated resolver"? You're not really referring to just a bunch of addresses in some resolv.conf or equivalent, since that's hardly an established link. The ISP's resolver hardly knows who's talking to it.
I'm responsible for DNS at an ISP: the ISP's resolver knows who queries it.
So, what do you offer to your clients? SIG(0), TSIG, DTLS, some VPN method? How many clients have configured that? And with 'who queries it', you probably mean that you have some list in place somewhere that discriminates on IP. Note that I can simply passively query your resolver box. You wouldn't even know it is me.
Now, let's assume for a sec we don't run into scaling issues, since the "authenticated resolver" needs to do some crypto for the "established link", while doing some crypto to validate messages.
DNSSEC validation on a larger resolver does scale well, because - and that's the important observation I made - a lot of queries can be answered from cached NSEC records without querying further. The number of NXDOMAINs dropped by about 70% here. Crypto is cheap compared to networking.
I find those last two statements highly unlikely, but for argument's sake, multiply this by cost(crypto(lastmile))*count(users).
Why should I trust data, validated by my ISP?
Because you choose him to do so.
Eh? No, I rely on it to bring me the data. I'll validate it myself, thank you very much.
Them ISPs route me to a search page, while I should've received an NXDOMAIN. But, no fear, the 'ad' bit is set, and I can just blindly trust my ISP, while they're cashing (no typo) in on my unfortunate misspellings.
If you do not trust your ISP, you need an other one or you won validating protocols i.e. VPN to a trustwothy point.
"trust" is not a binary concept. You need to relate trust to a service, and then still, it comes in degrees. I trust my bank to process payments. I trust my ISP to keep my link alive and to have proper peering in place. I _could_ trust my ISP to serve me the right data, but that would only be the right data in their perspective, wouldn't it, and that might not match mine.
DNSSEC for end users is not a security issue, it's a deployment issue.
Eh? DNSSEC is security retrofitted onto a widely deployed protocol. This has deployment issues in general. Roy
* Roy Arends wrote:
explain to me how DNSSEC is dead by doing validation on a stub resolver.
You can't update the installed base quickly enough to gain the benefits of DNSSEC. If the recursing resolvers do not validate, the whole DNSSEC effect goes to zero. You will find about 100000 validating resolvers at end user sites, and nobody will sign a zone for this group of geeks.
I'm responsible for DNS at an ISP: The ISP's resolver know who queries it.
So, what do you offer to your clients? SIG(0), TSIG, DTLS, some VPN method?
Internet access over our own infrastructure. If you are coming from outside, you have to use TSIG.
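For readers who haven't met TSIG (RFC 2845): the idea is a shared secret between client and resolver, with an HMAC and timestamp attached to each message, so the last mile is authenticated without per-message public-key crypto. The sketch below is a toy illustration of that idea only; the real RFC 2845 digest covers the full DNS wire-format message plus the TSIG RR variables (key name, algorithm, fudge), and classic TSIG uses HMAC-MD5 rather than SHA-256:

```python
# Toy illustration of the TSIG idea: shared-secret HMAC plus a timestamp
# window against replay. NOT the actual RFC 2845 digest computation.
import hmac, hashlib, time, os

KEY = os.urandom(32)  # shared secret, distributed out of band

def sign(message: bytes, when: int) -> bytes:
    # MAC over the message plus a 48-bit timestamp (TSIG's "time signed"
    # field is 48 bits).
    return hmac.new(KEY, message + when.to_bytes(6, "big"), hashlib.sha256).digest()

def verify(message: bytes, when: int, mac: bytes, fudge: int = 300) -> bool:
    fresh = abs(int(time.time()) - when) <= fudge  # replay window, like TSIG "fudge"
    return fresh and hmac.compare_digest(sign(message, when), mac)

now = int(time.time())
msg = b"query: www.example.com. IN A"
mac = sign(msg, now)
print(verify(msg, now, mac))         # authentic: True
print(verify(msg + b"x", now, mac))  # tampered on the last mile: False
```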
How many clients have configured that?
The larger customers. It's about 20000 end users. We do not sell internet access to private users.
And with 'who queries it', you probably mean that you have some list in place somewhere that discriminates on IP. Note that I can simply passively query your resolver box. You wouldn't even know it is me.
I can't see your point here.
I find those last two statements highly unlikely, but for argument's sake, multiply this by cost(crypto(lastmile))*count(users).
I do not see the need for crypto on the last mile.
Why should I trust data, validated by my ISP?
Because you choose him to do so.
Eh? No, I rely on it to bring me the data. I'll validate it myself, thank you very much.
You are a geek. But you spoke about end users. And they trust their ISP for the data they receive from him. You are still free to do the validation yourself.
If you do not trust your ISP, you need another one, or you want validating protocols, i.e. a VPN to a trustworthy point.
"trust" is not a binary concept. You need to relate trust to a service, and then still, it comes in degrees.
IBTD, but this is a useless discussion.
I trust my ISP to keep my link alive and to have proper peering in place. I _could_ trust my ISP to serve me the right data, but that would only be the right data in their perspective, wouldn't it, and that might not match mine.
For Joe End User, there is no difference.
DNSSEC for end users is not a security issue, it's a deployment issue.
Eh?
Exactly. Turn on validation on the recursing servers and you are done.
DNSSEC is security retrofitted onto a widely deployed protocol. This has deployment issues in general.
Pushing deployment (incl. key management) to the end users is the wrong way.
Lutz Donnerhacke wrote on 02/16/2007 11:40:14 AM:
* Roy Arends wrote:
explain to me how DNSSEC is dead by doing validation on a stub resolver.
You can't update the installed base quickly enough to gain the benefits of DNSSEC. If the recursing resolvers do not validate, the whole DNSSEC effect goes to zero. You will find about 100000 validating resolvers at end user sites, and nobody will sign a zone for this group of geeks.
Ah, you're assuming that folk will en-masse sign their zones for the handful of validating resolvers? Meanwhile, my OS X and Windows boxes are configured (by default) to update themselves regularly. Some of my applications do that as well. My browsers have validation integrated. Joe End User would not even see the difference... but he's better off than before.

I don't really expect any demand from end users in general. I have difficulty believing that there will be any effort from big ISPs to do this. It takes a few support calls to have validation switched off at the ISP's site, or the ISP will already see their very thin margin evaporate (sure, sure, you're the exception). That leaves us with pushing code to the end user, in applications and OS, which implies cooperation from, and education of, software developers. Since you don't sell access to private end users, I assume you sell bulk access, which implies that the corps/folks you sell access to have their own resolvers in place. They lose.

And with 'who queries it', you probably mean that you have some list in place somewhere that discriminates on IP. Note that I can simply passively query your resolver box. You wouldn't even know it is me.

I can't see your point here.
ACLs, firewalls, etc., that decide on the source IP address whether it can query your resolver. I can circumvent that.
I find those last two statements highly unlikely, but for argument's sake, multiply this by cost(crypto(lastmile))*count(users).
I do not see the need for crypto on the last mile.
Why should I trust data, validated by my ISP?
Because you choose him to do so.
Eh? No, I rely on it to bring me the data. I'll validate it myself,
That is okay. thank
you very much.
You are a geek. But you spoke about end users. And they trust their ISP for the data they receive from him.
I'd advise Joe End User to validate locally. Just as I'd advise them to validate certificates (which browsers do automagically). Are you saying that end users should blindly trust their HTTP connection, just because it comes via their ISP, or the ISP's proxy?
You are still free to do the validation yourself.
Good. I was concerned for a second. I see no point in discussing this further. You may call me a geek, that's fine; I see it as 'early adopter'. Roy
* Roy Arends wrote:
Lutz Donnerhacke wrote on 02/16/2007 11:40:14 AM:
You can't update the installed base quickly enough to gain the benefits of DNSSEC. If the recursing resolvers do not validate, the whole DNSSEC effect goes to zero. You will find about 100000 validating resolvers at end user sites, and nobody will sign a zone for this group of geeks.
Ah, you're assuming that folk will en-masse sign their zones for the handful of validating resolvers?
Yes. See the reactions to the last announcement of a Swedish ISP.
Meanwhile, my OS X and Windows boxes are configured (by default) to update themselves regularly. Some of my applications do that as well. My browsers have validation integrated. Joe End User would not even see the difference... but he's better off than before.
He will see a difference if some spoofing attacks no longer work.
I don't really expect any demand from end-users in general.
I see a strong demand from commercial banking institutes (not really). Let's assume some major DSL ISPs switch on validation. This results in a trusted DNS for about 60% of their customers (maybe more). Now consider the phishing buzzword. No, it does not help against clicking on every link and attachment in Outlook.
I have difficulty believing that there will be any effort from big ISPs to do this. It takes a few support calls to have validation switched off at the ISP's site, or the ISP will already see their very thin margin evaporate.
Most DSL markets are dead due to dumping. If you really want to keep customers, you have to provide more features. Security is a very valuable feature these days. Adding DNSSEC validation causes a major step up, at least in press release shootouts.
That leaves us with pushing code to the end user, in applications and OS, which implies cooperation from, and education of, software developers.
Taking this road means: redo from start. Never get a reasonable deployment. The root will not be signed, because there are not enough installations. More installations will not come up, because the root is not signed and key maintenance is a mess. Catch-22. I prefer the other way.
Since you don't sell access to private end users, I assume you sell bulk access, which implies that the corps/folks you sell access to have their own resolvers in place. They lose.
What do they lose?
I can't see your point here.
ACLs, firewalls, etc., that decide on the source IP address whether it can query your resolver. I can circumvent that.
How do you want to do this? Please respond by email directly, it's off-topic.
You are a geek. But you spoke about end users. And they trust their ISP for the data they received from him.
I'd advise Joe End User to validate locally.
They are free to do so. They are free to use any nameserver they want. But if they use the ISP's recursive resolver, this will be a validating one.
Just as I'd advise them to validate certificates (which browsers do automagically). Are you saying that end users should blindly trust their HTTP connection, just because it comes via their ISP, or the ISP's proxy?
No, you confuse the source of the data. The ISP can validate the integrity of DNSSEC-signed zones, and it is good to do so. The ISP can't validate the integrity of HTTPS certificates, because the protocol does not show them to him without several crude hacks.
You may call me a geek, that's fine; I see it as 'early adopter'.
We are all 'early adopters', because we need the blood from the edge.
So I lied. It wasn't my last response on the subject. Lutz Donnerhacke wrote on 02/16/2007 01:13:40 PM:
* Roy Arends wrote:
Lutz Donnerhacke wrote on 02/16/2007 11:40:14 AM:
You can't update the installed base quickly enough to gain the benefits of DNSSEC. If the recursing resolvers do not validate, the whole DNSSEC effect goes to zero. You will find about 100000 validating resolvers at end user sites, and nobody will sign a zone for this group of geeks.
Ah, you're assuming that folk will en-masse sign their zones for the handful of validating resolvers?
Yes. See the reactions to the last announcement of a Swedish ISP.
So folks with domains under '.se' are signing their zones en masse? Haven't seen those reactions.
Meanwhile, my OS X and Windows boxes are configured (by default) to update themselves regularly. Some of my applications do that as well. My browsers have validation integrated. Joe End User would not even see the difference... but he's better off than before.
He will see a difference if some spoofing attacks no longer work.
Stub resolvers can be spoofed trivially. Resolver code in DSL routers/cable modems can be spoofed trivially. So, unless the end user has DNSSEC deployed locally, spoofing attacks work.
I don't really expect any demand from end-users in general.
I see a strong demand from commercial banking institutes (not really). Let's assume some major DSL ISPs switch on validation. This results in a trusted DNS for about 60% of their customers (maybe more). Now consider the phishing buzzword.
Security theater. Nothing really changes for the end user. Those who tried to spoof resolvers will now change their focus towards the end user's stub. Arms race.
That leaves us with pushing code to the end user, in applications and OS, which implies coorperation from and education to software developers.
Taking this road means: Redo from start.
It _is_ done from the start. We've put in the current standards. Applications can already use them. My jabber server uses it. Validation on a stub.
Never get a reasonable deployment. The root will not be signed, because there are not enough installations. More installations will not come up, because the root is not signed and key maintenance is a mess. Catch-22.
That is the _current_ status quo. Not enough installations have 'switched on' DNSSEC, so why bother signing? Why bother switching on DNSSEC if not enough domains are signed?
I can't see your point here.
acl's, firewalls, etc, that decide on source ip address if it can query your resolver. I can circumvent that.
How do you want to do this?
I scan a range, and a few boxes will do a reverse lookup. I control the specific reverse address space, hence your resolver is talking to me: window of opportunity. Another way is spraying spam around; antispam code resolves whatever I tell it to resolve. This reminds me to finish my article about intrusion detection detection (sic) methods, and to develop some anti intrusion detection detection (sic) methods.
Please respond by email directly, it's off-topic.
Well. Others might be interested in this as well, so, there.
You are a geek. But you spoke about end users. And they trust their ISP for the data they receive from him.
I'd advise Joe End User to validate locally.
They are free to do so. They are free to use any nameserver they want. But if they use the ISP's recursive resolver, this will be a validating one.
Just as I'd advise them to validate certificates (which browsers do automagically). Are you saying that end users should blindly trust their HTTP connection, just because it comes via their ISP, or the ISP's proxy?
What is the use of seeing a bit set in the response that claims that the response is validated, when I can't trust the link!
No, you confuse the source of the data.
Eh?
The ISP can validate the integrity of DNSSEC-signed zones, and it is good to do so.
The ISP can validate the integrity, sure. To me that would be another middlebox fondling with the data.
The ISP can't validate the integrity of HTTPS certificates, because the protocol does not show them to him without several crude hacks.
So, basically, if the HTTPS protocol would allow it, ISPs would like to validate the integrity of certificates as well... so end users don't have to? Roy
* Roy Arends wrote:
So folks with domains under '.se' are signing their zones en masse?
No, because it was free. Now you can _buy_ security from SE-DNSSEC and the run starts. :-/
Stub resolvers can be spoofed trivially. Resolver code in DSL routers/cable modems can be spoofed trivially.
It's hard to spoof in our own infrastructure, but YMMV.
security theater. Nothing really changes for the end user. Those who tried to spoof resolvers will now change their focus towards the end-users stub. Arms race.
End user stubs aren't reachable from outside, due to firewalls etc. But YMMV.
Taking this road means: Redo from start.
It _is_ done from the start. We've put in the current standards. Applications can already use them. My jabber server uses it. Validation on a stub.
Fine! I'm happy with it. But I'm not happy with urging such an installation on every end user host by prohibiting verifying recursive resolvers.
Never get a reasonable deployment. The root will not be signed, because there are not enough installations. More installations will not come up, because the root is not signed and key maintenance is a mess. Catch-22.
That is the _current_ status quo. Not enough installations have 'switched on' DNSSEC, so why bother signing? Why bother switching on DNSSEC if not enough domains are signed?
Therefore every validating recursive resolver is a big win. Urging people to shut validation down on those resolvers is the wrong road. But YMMV.
acl's, firewalls, etc, that decide on source ip address if it can query your resolver. I can circumvent that.
How do you want to do this?
I scan a range, and a few boxes will do a reverse lookup. I control the specific reverse address space, hence your resolver is talking to me: window of opportunity. Another way is spraying spam around; antispam code resolves whatever I tell it to resolve.
So you do not query my resolver, but my resolver queries you. Let's keep such discussions off the list.
They are free to do so. They are free to use any nameserver they want. But if they use the ISP's recursive resolver, this will be a validating one.
What is the use of seeing a bit set in the response that claims that the response is validated, when I can't trust the link!
Oh my goodness. Paranoia is off-topic, too. You have far bigger problems than DNS if the link between your system and your ISP's resolver is not trustworthy.
The ISP can validate the integrity of DNSSEC-signed zones, and it is good to do so.
The ISP can validate the integrity, sure. To me that would be another middlebox fondling with the data.
Paranoia again. So do not use the resolver of your ISP!
So, basically, if the HTTPS protocol would allow it, ISPs would like to validate the integrity of certificates as well... so end users don't have to?
Of course. There are proxies which break the end-to-end security of HTTPS in order to achieve this. But I consider this a hack. BTW: ISPs do validate a lot more internet data while it flies through. You do not see a lot of crap out there. And you are happy with it.
Lutz, let's recap. I'm urging folk to not forget about the last mile. I'm fine with caching resolvers validating DNS data. Roy
Lutz Donnerhacke wrote:
Most DSL markets are dead due to dumping. If you really want to keep customers, you have to provide more features. Security is a very valuable feature these days. Adding DNSSEC validation causes a major step up, at least in press release shootouts.
Yes! Certainly ;) It is cheap to implement (just activate some options), but loud to shout about ;)
Lutz Donnerhacke wrote:
I find those last two statements highly unlikely, but for argument sake, multiply this by cost(crypto(lastmile))*count(users).
I do not see the need for crypto on the last mile.
And what to do with spoofed answers? -- WBR, Max Tulyev (MT6561-RIPE, 2:463/253@FIDO)
----- Original Message ----- From: "Max Tulyev" <president@ukraine.su>
Lutz Donnerhacke wrote:
I find those last two statements highly unlikely, but for argument sake, multiply this by cost(crypto(lastmile))*count(users).
I do not see the need for crypto on the last mile.
And what to do with spoofed answers?
256bit nonce/id field instead of 16.
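A back-of-the-envelope sketch (my illustration, not part of the thread) of why widening the ID field defeats blind spoofing: an off-path attacker must guess the transaction identifier before the genuine answer arrives, and the success probability collapses as the ID space grows.

```python
import math

# Illustration only: chance that an off-path attacker blindly guesses the
# transaction identifier, for the classic 16-bit DNS ID versus the
# suggested 256-bit nonce.  `forged` is the number of forged responses
# the attacker can inject into the response window.
def spoof_success_probability(id_bits: int, forged: int) -> float:
    """P(at least one of `forged` packets carries the right identifier).
    Uses log1p/expm1 so the 256-bit case does not underflow to zero."""
    return -math.expm1(forged * math.log1p(-1.0 / 2 ** id_bits))

# 16-bit ID with 10,000 forged packets: roughly a 14% hit rate.
print(f"16-bit ID:  {spoof_success_probability(16, 10_000):.3f}")
# 256-bit nonce: astronomically small.
print(f"256-bit ID: {spoof_success_probability(256, 10_000):.3e}")
```

This is why the 256-bit proposal removes the need for last-mile crypto against *blind* spoofers only; an on-path attacker who sees the query still wins.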
Lutz Donnerhacke wrote:
* Max Tulyev wrote:
Lutz Donnerhacke wrote:
I do not see the need for crypto on the last mile. And what to do with spoofed answers?
Spoofing the ISPs resolver on the last mile is not possible due to ingress filtering.
...if it is switched on, if it is at all routers, if the spoofer is not in the same ethernet (or other L2 broadcast) segment, if..., if..., if... :) -- WBR, Max Tulyev (MT6561-RIPE, 2:463/253@FIDO)
* Max Tulyev wrote:
Lutz Donnerhacke wrote:
* Max Tulyev wrote:
Lutz Donnerhacke wrote:
I do not see the need for crypto on the last mile.
...if it is switched on, if it is at all routers, if the spoofer is not in the same ethernet (or other L2 broadcast) segment, if..., if..., if... :)
<repeat>I'm the ISP.</repeat>
Lutz Donnerhacke wrote:
* Max Tulyev wrote:
Lutz Donnerhacke wrote:
I do not see the need for crypto on the last mile.
...if it is switched on, if it is at all routers, if the spoofer is not in the same ethernet (or other L2 broadcast) segment, if..., if..., if... :)
<repeat>I'm the ISP.</repeat>
Me too. But what is the difference? -- WBR, Max Tulyev (MT6561-RIPE, 2:463/253@FIDO)
-----Original Message----- From: dns-wg-admin@ripe.net [mailto:dns-wg-admin@ripe.net] On Behalf Of Lutz Donnerhacke
* Max Tulyev wrote:
Lutz Donnerhacke wrote:
<repeat>I'm the ISP.</repeat>
Me too. But what is the difference?
You can't spoof my nameserver in my network. YNMV.
It is not a requirement to be in your network in order to spoof it.
Lutz Donnerhacke wrote:
* Max Tulyev wrote:
Lutz Donnerhacke wrote:
<repeat>I'm the ISP.</repeat> Me too. But what is the difference?
You can't spoof my nameserver in my network. YNMV.
Even being in the same L2 broadcast segment as you? (assuming your workstation doesn't have a running DNS resolver with DNSSEC enabled) -- WBR, Max Tulyev (MT6561-RIPE, 2:463/253@FIDO)
I think this will not be too much off-topic. What do DNSSEC and Techsec-WG people think about the recently revealed pharming attack technique that is based on altering the end user's DNS?

NEW ATTACK TECHNIQUE THREATENS BROADBAND USERS
Millions of high-speed Internet users across the globe are threatened by a new attack technique called drive-by pharming, Symantec and Indiana University researchers warned Thursday. http://go.techtarget.com/r/1004039/1401570

On a practical note, I need to say something definite to my students. What do DNS experts think? Thanks. Yuri

Roy Arends wrote:
Lutz Donnerhacke wrote on 02/16/2007 10:20:09 AM:
* Roy Arends wrote:
Lutz Donnerhacke wrote on 02/16/2007 09:24:33 AM:
You can run a caching, validating resolver on your own system. Isn't that what I was saying? I just don't want to do all the recursion. My ISP's resolver can do that. So use it for this.
I just explained why I don't want to do that.
Not really. I can also validate on a stub resolver. I wouldn't call this "stub". A stub resolver is a protocol translator: it offers a well-known API to a well-known protocol. It does nothing more of the protocol itself.
I'd call this a "security aware stub resolver" (rfc 4033, section 2).
Following this proposal in the blog, DNSSEC is dead. Tell me Lutz, how does joe end user run a full-featured validating resolver daemon, when he barely understands the concept of DNS? The end user has a stub resolver pointing to a trustworthy validating one. It's that plain and simple. If you want to break this behavior, DNSSEC is dead.
explain to me how DNSSEC is dead by doing validation on a stub resolver.
If he shouldn't run this, how does he set up "an established link to an authenticated resolver"? You're not really referring to just a bunch of addresses in some resolv.conf or equivalent, since that's hardly an established link. The ISP's resolver hardly knows who's talking to it. I'm responsible for DNS at an ISP: the ISP's resolver knows who queries it.
So, what do you offer your clients? SIG(0), TSIG, DTLS, some VPN method? How many clients have configured that? And with 'who queries it', you probably mean that you have some list in place somewhere that discriminates on IP. Note that I can simply passively query your resolver box. You wouldn't even know it is me.
Now, let's assume for a sec we don't run into scaling issues, since the "authenticated resolver" needs to do some crypto for the "established link", while doing some crypto to validate messages. DNSSEC validation on a larger resolver does scale well, because (that's the important observation I made) a lot of queries can be answered from cached NSEC records without querying further. The volume of NXDOMAIN lookups sent upstream dropped by about 70% here. Crypto is cheap compared to networking.
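Lutz's observation can be sketched as follows (my illustration, not his code): an NSEC record proves that nothing exists between its owner name and its "next" field, so a resolver that has cached the span can synthesize NXDOMAIN locally. Real resolvers use the canonical ordering of RFC 4034 section 6.1; the plain lowercase string comparison below is a simplification that happens to hold for these single-label examples.

```python
# Sketch: answering NXDOMAIN from cached NSEC spans without an upstream
# query.  An NSEC span (owner, next) proves no name exists strictly
# between the two.  Simplified ordering; see RFC 4034 sec. 6.1 for the
# real canonical order.
def covered_by_nsec(qname: str, nsec_owner: str, nsec_next: str) -> bool:
    """True if qname is proven nonexistent by the cached NSEC span."""
    return nsec_owner < qname.lower() < nsec_next

# One cached span: an NSEC record "alpha.example." -> "delta.example."
cache = [("alpha.example.", "delta.example.")]

def answer_locally(qname: str) -> bool:
    """Can the resolver synthesize NXDOMAIN from cache alone?"""
    return any(covered_by_nsec(qname, o, n) for o, n in cache)

print(answer_locally("bravo.example."))  # inside the span: NXDOMAIN from cache
print(answer_locally("zulu.example."))   # outside: must query upstream
```

This is the mechanism behind the claimed 70% drop: misspellings and junk queries cluster in spans the cache has already proven empty.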
I find those last two statements highly unlikely, but for argument sake, multiply this by cost(crypto(lastmile))*count(users).
Why should I trust data validated by my ISP? Because you chose him to do so.
Eh ? No, I rely on it to bring me the data. I'll validate it myself, thank you very much.
Them ISPs route me to a search page, while I should've received an NXDOMAIN. But, no fear, the 'ad' bit is set, and I can just blindly trust my ISP, while they're cashing (no typo) in on my unfortunate misspellings. If you do not trust your ISP, you need another one, or you want validating protocols, i.e. a VPN to a trustworthy point.
"trust" is not a binary concept. You need to relate trust to a service, and then still, it comes in degrees. I trust my bank to process payments. I trust my ISP to keep my link alive and to have proper peering in place. I _could_ trust my ISP to serve me the right data, but that would only be the right data in their perspective, wouldn't it, and that might not match mine.
DNSSEC for end users is not a security issue, it's a deployment issue.
Eh ?
DNSSEC is security backfitted on a widely deployed protocol. This has deployment issues in general.
Roy
* Yuri Demchenko wrote:
What does DNSSEC and Techsec-WG people think about recently revealed pharming attack technique that is based on the end user DNS altering?
If you use a validating resolver on your end side, DNSSEC detects and prevents this attack.
On Fri, Feb 16, 2007 at 12:14:26PM +0000, Lutz Donnerhacke <lutz@iks-jena.de> wrote a message of 6 lines which said:
If you use a validating resolver on your end side,
You forget "AND if the zone you query is signed".
On Feb 16, 2007, at 12:54, Stephane Bortzmeyer wrote:
If you use a validating resolver on your end side,
You forget "AND if the zone you query is signed".
and you forget "AND you have a valid, properly configured trust anchor for the key that ultimately signs that zone". :-) Cue the Monty Python joke about our children's children's children... :-)
Lutz Donnerhacke wrote:
* Yuri Demchenko wrote:
What does DNSSEC and Techsec-WG people think about recently revealed pharming attack technique that is based on the end user DNS altering?
If you use a validating resolver on your end side, DNSSEC detects and prevents this attack.
How? As I understood, the idea of the attack is to change DNS, not to poison it.
NEW ATTACK TECHNIQUE THREATENS BROADBAND USERS Millions of high-speed Internet users across the globe are threatened by a new attack technique called drive-by pharming, Symantec and Indiana University researchers warned Thursday. http://go.techtarget.com/r/1004039/1401570

This method is a variation on the theme of using the user's equipment as a coprocessor. In this case, a Java applet tries to guess the password of the user's ADSL router and then reconfigures the DNS server in the box. Most users have not changed the original password (if any) or use an easy-to-guess one. More details got discussed on nanog.

On the practical note, I need to say something definite to my students what DNS experts think? Tell them to switch off Java etc. :-). As noted, DNSSEC can protect against spoofed DNS info. jaap
NEW ATTACK TECHNIQUE THREATENS BROADBAND USERS
...
As noted, dnssec can protect against spoofed dns info.
Except DNSSEC wouldn't really be applicable. The attack (as I understand it) provides a new IP address (that of an attacker-owned caching resolver) to clients on a LAN attached to the broadband router, with the attacker-owned caching resolver returning answers to stub resolver queries. Since validation is done at the caching resolver, DNSSEC wouldn't apply. Rgds, -drc
David Conrad wrote:
NEW ATTACK TECHNIQUE THREATENS BROADBAND USERS
...
As noted, dnssec can protect against spoofed dns info.
Except DNSSEC wouldn't really be applicable.
The attack (as I understand it) provides a new IP address (that of an attacker-owned caching resolver) to clients on a LAN attached to the broadband router, with the attacker-owned caching resolver returning answers to stub resolver queries. Since validation is done at the caching resolver, DNSSEC wouldn't apply.
It would apply in the (theoretical) subset of applications that are configured to rely on signed and validated responses, like hopefully windows/osx/mozilla/other software updaters could be configured to do. It could also apply to an even more theoretical future browser feature that uses a mechanism similar to the shiny gold SSL padlock icon to indicate a signed and validated response, but the value of that would be limited to the subset of users who wouldn't just click "go to the site anyway" like they do with SSL warnings now. Doug -- If you're never wrong, you're not trying hard enough
On Feb 16, 2007, at 12:50 PM, Doug Barton wrote:
David Conrad wrote:
NEW ATTACK TECHNIQUE THREATENS BROADBAND USERS
...
As noted, dnssec can protect against spoofed dns info. Except DNSSEC wouldn't really be applicable. It would apply in the (theoretical) subset of applications that are configured to rely on signed and validated responses, like hopefully windows/osx/mozilla/other software updaters could be configured to do.
The question is how do they get the information that the data has been signed and the signatures validated. Since with this attack they'd be going through a compromised server, they lose. The only way out of that hole is if you run a local validating caching server and have appropriate (out-of-band validated) trust anchors configured and if you're running a local caching server, you're already not susceptible to the attack. Rgds, -drc
David Conrad wrote:
On Feb 16, 2007, at 12:50 PM, Doug Barton wrote:
David Conrad wrote:
NEW ATTACK TECHNIQUE THREATENS BROADBAND USERS
...
As noted, dnssec can protect against spoofed dns info. Except DNSSEC wouldn't really be applicable. It would apply in the (theoretical) subset of applications that are configured to rely on signed and validated responses, like hopefully windows/osx/mozilla/other software updaters could be configured to do.
The question is how do they get the information that the data has been signed and the signatures validated. Since with this attack they'd be going through a compromised server, they lose. The only way out of that hole is if you run a local validating caching server and have appropriate (out-of-band validated) trust anchors configured and if you're running a local caching server, you're already not susceptible to the attack.
I was thinking along similar lines: home routers could have a default configuration to use DNSSEC responses, and maybe in the future only DNSSEC. As for trust anchors, in other security applications we are now looking closely at the TPM (Trusted Platform Module)/TCG technology, which provides hardware-bound and hardware-protected trust anchors. Yuri
Yuri, On Feb 16, 2007, at 2:37 PM, Yuri Demchenko wrote:
The question is how do they get the information that the data has been signed and the signatures validated. Since with this attack they'd be going through a compromised server, they lose. The only way out of that hole is if you run a local validating caching server and have appropriate (out-of-band validated) trust anchors configured and if you're running a local caching server, you're already not susceptible to the attack. I was thinking similar like home routers could have default configuration to use DNSSEC responses, and maybe in the future only DNSSEC.
Wouldn't this mean the nasty javascript merely updates the trust anchors to point to the bad guy's DNS server? That is, if the home user doesn't configure the password and the trust anchors are configurable on the router, the bad guys win. As was mentioned previously, this really isn't a DNS attack. It is an "exploitation of a default password" attack, so DNSSEC doesn't help.
As about trust anchors, in other security applications we looking now closely at the TPM (Trusted Platform Module)/TCG technology that provides hardware bound and hardware protected trust anchors.
This might work if it requires no end user interaction. Rgds, -drc
David Conrad wrote:
NEW ATTACK TECHNIQUE THREATENS BROADBAND USERS
...
As noted, dnssec can protect against spoofed dns info.
Except DNSSEC wouldn't really be applicable.
I know, it would be sloppy use of terms, but when I read the thread I "included" TSIG under the DNSSEC item. That could help, unless the shared secret gets easily compromised, too, and it probably would, assuming that java* or active* is enabled ;-)
The attack (as I understand it) provides a new IP address (that of an attacker-owned caching resolver) to clients on a LAN attached to the broadband router, with the attacker-owned caching resolver returning answers to stub resolver queries. Since validation is done at the caching resolver, DNSSEC wouldn't apply.
Rgds, -drc
Wilfried.
On Fri, Feb 16, 2007 at 12:07:45PM +0100, Yuri Demchenko <demch@chello.nl> wrote a message of 110 lines which said:
What does DNSSEC and Techsec-WG people think about recently revealed pharming attack technique that is based on the end user DNS altering?
I really wonder why all the newspapers present it as a DNS attack. It is an attack against a server (the home router). Once you control it, you can do many things besides changing the DNS configuration (such as setting up a tunnel and diverting all IP data through it, so DNSSEC would be screwed anyway).
On the practical note, I need to say something definite to my students what DNS experts think?
As I said, the DNS is not involved at all in this attack.
On Feb 16, 2007, at 12:52 AM, Roy Arends wrote:
Tell me Lutz, how does joe end user run a full featured validating resolver daemon, when he barely understand the concept of DNS.
End users run stuff significantly more complicated than a caching resolver without understanding the concepts. Running a caching resolver daemon should not require any end-user configuration. Rgds, -drc
* David Conrad wrote:
Running a caching resolver daemon should not require any end-user configuration.
Key management is not free in the first step.
On Feb 16, 2007, at 9:00 AM, Lutz Donnerhacke wrote:
* David Conrad wrote:
Running a caching resolver daemon should not require any end-user configuration. Key management is not free in the first step.
Right. You have to run software update and click OK when new trust anchors need to be installed. End-users don't run caching servers for historical reasons having to do with CPU cycles and available RAM (and perhaps poor choices regarding configuration files in particular DNS software implementations). Those reasons don't apply anymore. Trusting the infrastructure between you and your ISP is merely creating a new target for attack. Or perhaps highlighting an existing target for attack. It also means you trust your ISP. As more and more ISPs see "sitefinder"-like functionality as a way of making more money faster, that trust is less and less tenable. Rgds, -drc
* David Conrad wrote:
Trusting the infrastructure between you and your ISP is merely creating a new target for attack. Or perhaps highlighting an existing target for attack. It also means you trust your ISP.
Because I am an ISP, my view is slightly biased.
* Roy Arends:
So what about joe end user? Are there initiatives to offer tsig/sig0/dtls between user and ISP? Are there initiatives to deploy code at the OS level, similar to what the NLNetLabs and Sparta folk are building for the application level? Are aforementioned providers deploying either of these two sets of solutions to their end users? Or is it all just security theater?
I've been thinking about this recently (for a fairly tech-savvy user base, but also including end users). My hope was some design that would enable us to add a simple "enable DNSSEC with DLV" switch to the operating system. The basic idea is like this:

- Install a local BIND 9 resolver in forward-only mode. Enable DLV (using ISC's zone if allowed).
- Modify software which updates /etc/resolv.conf to tweak the BIND configuration instead. In this configuration, /etc/resolv.conf will always point to localhost.
- Perhaps modify the libc stub resolver to return better error codes, and update some interactive applications to make use of them.
- If necessary, tweak BIND so that it exposes a DNSSEC-less view to applications, and does not request validation from the forwarders. (For instance, we must not exceed the 512 byte size limit on the application interface when the original configuration doesn't.)
- Get some banks or other high-profile sites to participate in the DLV project, so that all this actually makes sense.

However, I fear that many users are located on networks which do not offer a transparent DNS transport: the forwarders they use are not capable of handling requests for DNSSEC-related RRs for some reason. Perhaps they discard resource records they deem strange, or they have got problems with large responses. A typical setup might look like this:

    +--------------+
    | ISP resolver |    similar configuration
    +--------------+    at a different ISP
           |                   :
    /---------------\          :
    | Access Router |          :
    \---------------/          :
           |                   :
        /-----\                :
        | CPE |................:
        \-----/    (actually, the CPE is a cheap NAT device)
           |
       +------+
       | Host |    (running a DNSSEC-aware resolver locally)
       +------+

The CPE typically runs some kind of DNS forwarder (can be a simple destination NAT, but might be an application proxy), and advertises itself as caching resolver to the host via DHCP. The access router might inspect DNS traffic and transparently proxy it.
It's not too uncommon that the "ISP" the end user subscribes to switches their subcontractor, so that you can get hooked to a completely different infrastructure over night. The subcontractor might use anycast or load-balancing across different implementations to provide the caching service. If the host is a real mobile device, the picture is much more complicated.

There are a couple of things that can go wrong here: ISP resolver, access router (if proxying) and the CPE must be transparent for DNSSEC traffic. It's not sufficient to check this at installation time. It's hard to cache this information, too (thanks to load-balancing). Perhaps these fears are unwarranted; fairly distributed testing would provide us with some assurance that this might actually work.

Unfortunately, the real showstopper I see is that you cannot tell an attack from an infrastructure change that happened to break DNSSEC. But we need to provide some kind of fallback in case DNSSEC breaks, because we absolutely must ensure that we match plain DNS in terms of availability. (And I don't think yet another security indicator visible to the end user is the answer.)

Running name resolution over 443/TCP to some central resolver infrastructure suddenly seems much more attractive, doesn't it? However, I don't like the way this facilitates large-scale interception of DNS traffic (with end user addresses intact, so that you won't call me a hypocrite 8-).
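For concreteness, the forward-only/DLV step above might look roughly like this in named.conf. A sketch only: the forwarder addresses are placeholders, the option names match BIND 9.4-era syntax, and the DLV zone's key would still need to be configured and updated out of band.

```
options {
    forward only;
    forwarders { 192.0.2.1; 192.0.2.2; };   // placeholder ISP resolvers

    // Validate locally; do not rely on the forwarders' AD bit.
    dnssec-enable yes;
    dnssec-validation yes;

    // Lookaside validation via ISC's DLV registry for everything.
    dnssec-lookaside "." trust-anchor dlv.isc.org;
};

// The dlv.isc.org public key itself goes in a trusted-keys statement,
// installed and refreshed by the OS's software-update mechanism.
```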
On Sat, 24 Feb 2007 16:18:43 +0100 Florian wrote:

FW> Unfortunately, the real showstopper I see is that you cannot tell an
FW> attack from an infrastructure change that happened to break DNSSEC.
FW> But we need to provide some kind of fallback in case DNSSEC breaks
FW> because we absolutely must ensure that we match plain DNS in terms of
FW> availability. (And I don't think yet another security indicator
FW> visible to the end user is the answer.)

Well, you've got yourself painted into a corner here. I don't think you can have a fallback, or you haven't added any security. The only way to get an ISP to sit up and take notice will be the flood of support calls when they do something that breaks DNS, just as it is now. (Of course, this is also probably one of the reasons they are wary of deploying DNSSEC in the first place.)

FW> Running name resolution over 443/TCP to some central resolver
FW> infrastructure suddenly seems much more attractive, doesn't it?

Not particularly. Either way, you've got to get the ISPs to buy into a new way of thinking about DNS. Besides, I haven't seen any real detail on how this 443/tcp idea would work. I'm sure that if it got as much scrutiny as DNSSEC has had, it would turn out to not be as simple as its proponents might think it is. -- Robert Story SPARTA
* Robert Story:
On Sat, 24 Feb 2007 16:18:43 +0100 Florian wrote:

FW> Unfortunately, the real showstopper I see is that you cannot tell an
FW> attack from an infrastructure change that happened to break DNSSEC.
FW> But we need to provide some kind of fallback in case DNSSEC breaks
FW> because we absolutely must ensure that we match plain DNS in terms of
FW> availability. (And I don't think yet another security indicator
FW> visible to the end user is the answer.)
Well, you've got yourself painted into a corner here.
Probably true.
I don't think you can have a fallback, or you haven't added any security.
I'm concerned that I'm *reducing* security (regarding availability as a part of security). I also don't want to create a situation where organizations fear to DLV-enable their zones because a part of the client population is no longer able to access them. To some extent, this has already happened to AAAA records. Of course, this is motivated more by the categorical imperative, and not by actual market share. But you never know. 8-)
FW> Running name resolution over 443/TCP to some central resolver FW> infrastructure suddenly seems much more attractive, doesn't it?
Not particularly. Either way, you've got to get the ISPs to buy into a new way of thinking about DNS.
I think the idea (at least my version of it) is that you use a 443/TCP TLS connection to a resolver to bypass the ISP. The on-the-wire protocol would still be DNS with DNSSEC. The assumption is that the ISP can't do transparent rewriting of TLS connections and will leave the application traffic alone (which is no longer a safe assumption for 53/UDP or 53/TCP -- or 25/TCP for that matter).
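A sketch of the framing involved (my illustration; the resolver name is a placeholder): running DNS over a TLS stream on 443 needs no new wire protocol at all, since DNS over TCP already prefixes each message with a two-byte length (RFC 1035, section 4.2.2), and the same framing works unchanged inside TLS.

```python
import struct

def frame_dns_query(raw_query: bytes) -> bytes:
    """Prefix a wire-format DNS message with its 16-bit length,
    as DNS over TCP (and hence over a TLS stream) requires."""
    return struct.pack("!H", len(raw_query)) + raw_query

# A minimal hand-built query for "example." IN A (fabricated ID 0x1234,
# RD flag set), purely for illustration.
query = (struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
         + b"\x07example\x00"
         + struct.pack("!HH", 1, 1))
framed = frame_dns_query(query)

# Sending it would be ordinary TLS client code (not executed here;
# "resolver.invalid" is a placeholder name):
#   import socket, ssl
#   ctx = ssl.create_default_context()
#   with ctx.wrap_socket(socket.create_connection(("resolver.invalid", 443)),
#                        server_hostname="resolver.invalid") as tls:
#       tls.sendall(framed)

print(len(query), len(framed))
```

The TLS layer is what buys the bypass: middleboxes that rewrite 53/UDP can't transparently rewrite an authenticated TLS stream without breaking the handshake.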
participants (14)
- David Conrad
- Doug Barton
- Florian Weimer
- Jaap Akkerhuis
- Jim Reid
- Jørgen Hovland
- Lutz Donnerhacke
- Max Tulyev
- Peter Koch
- Robert Story
- Roy Arends
- Stephane Bortzmeyer
- Wilfried Woeber, UniVie/ACOnet
- Yuri Demchenko