My personal opinion (after far too much wine) is that the routers will probably be able to cope with anything that is thrown at them (again, I don't have the facts to back this statement up nor the knowledge to work it out - so don't ask for them Randy; other people know far more about what's on the horizon than I could ever hope to).
I happen to think that you are wrong. There are real technical issues with routers that are hard to solve. The first and most obvious is that as the size of the table gets larger, it requires more and more computing resources in the router and more bandwidth to announce/withdraw routes. It's not just a question of RAM sizes but also CPU power, circuit capacity, and the time required to process the updates. As we converge more time-sensitive applications onto the network, the time delays introduced by huge global routing tables are less acceptable than they were in 1995.

The second issue is that all of us have lived through the time when Moore's law, and related laws, have caused electronic devices to get better, cheaper and faster every year. But we are now getting to the point where real physical limits are being reached, i.e. you cannot make circuits thinner than one molecule. Nobody knows the exact implications of this, but we can be sure that sometime in this century there will no longer be any increase in memory capacities, processor speeds or circuit capacities. It is wrong to make policies based on the assumption that any problems with routers will be solvable just as they were in the 80's and 90's.
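The scaling concern above can be sketched with back-of-envelope arithmetic. The per-route byte figure below is a purely hypothetical assumption (real routers vary widely by vendor and FIB design); the point is only that memory grows linearly with the table while it is the table, not the hardware budget, that the operator community controls:

```python
# Back-of-envelope sketch of routing table memory growth.
# bytes_per_route is an ASSUMED illustrative figure, not a measurement.
def table_memory_mb(num_routes, bytes_per_route=256):
    """Rough RIB/FIB footprint in megabytes for a given table size."""
    return num_routes * bytes_per_route / (1024 * 1024)

if __name__ == "__main__":
    for routes in (50_000, 150_000, 500_000, 1_000_000):
        print(f"{routes:>9} routes -> ~{table_memory_mb(routes):.0f} MB")
```

Under that assumption a jump from 50,000 to 1,000,000 routes is a twenty-fold increase in memory, before counting the CPU cost of churning announce/withdraw messages across the whole table.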
Throwing out conservation just because the v6 address range is so vast is plain stupidity - the past, in my mind, exists for one reason: so that we can learn from it (I might not, but that's a personal problem, or at least SWMBO tells me so). ISPs might need a /32 (doubtful in most cases) but enterprises DO NOT. Yes, enterprises need to multihome (same as ISPs do) but they do not need, nor will they ever need, a /32 (or even a /48 for that matter). How many enterprises currently use a class A in current v4 space - and I mean really use it? I think that question answers itself.
Again, learning from history does not mean doing the same thing as was done before. An IPv6 /32 address block is vastly smaller than an IPv4 class A block because it represents a much smaller percentage of the total address space. Therefore, we have learned from history and are not making the same mistake.

The other historical lesson that keeps being brought up is "the swamp", although the people who mention this almost never define what the swamp is or why it was bad. In my opinion, the swamp was a set of PI allocations of varying sizes all mixed together in such a way that it was not possible to aggregate them in routing announcements. It was bad because it meant that companies would have to announce many small disjoint blocks, thus consuming global routing table space. In IPv6 this has been fixed. Everybody gets a single /32 PI block and 99% of those LIRs will never in a hundred years grow beyond that allocation. Except for those IPv6 /48 microallocations. Are they really such a good idea? Didn't we learn from the swamp?

--Michael Dillon
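The percentage argument above is easy to check with simple arithmetic - a quick sketch, using nothing from the post beyond the prefix lengths themselves (/8 for a class A, /32 for an LIR allocation):

```python
# A /n block covers 1/2^n of its address space, regardless of
# whether that space is 32-bit IPv4 or 128-bit IPv6.
def fraction_of_space(prefix_len):
    """Fraction of the whole address space covered by one /prefix_len block."""
    return 1.0 / (2 ** prefix_len)

class_a = fraction_of_space(8)    # IPv4 class A = /8
v6_lir  = fraction_of_space(32)   # IPv6 LIR allocation = /32

print(f"IPv4 /8  covers 1/{2**8} of IPv4 space")
print(f"IPv6 /32 covers 1/{2**32} of IPv6 space")
print(f"a class A slice is {class_a / v6_lir:.0f} times larger, proportionally")
```

So a class A holder got 1/256 of the IPv4 space, while a /32 holder gets roughly one four-billionth of the IPv6 space - proportionally about 16 million times less of the total.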