Are We Spending Too Much on the Network? It’s a reasonable question. Many networking textbooks cover aggregating routing information, optimizing link-state designs for more efficient flooding and shorter SPF runs, and other techniques that were mandatory when router CPU and memory were scarce. Moore’s Law continued to deliver, and now we live in a world of abundant compute. I get the impression that we sometimes use this abundance of horsepower to let network control planes run wild: little or no aggregation or filtering, no dampening, and who even thinks about link-state SPF run times anymore? (Hyperscalers notwithstanding.)
I feel like many of the offerings from incumbent vendors ship with more CAM and TCAM space than required, and sometimes with too little of the thing that really matters: packets per second. I hope nobody is building networks where a 48x1G access-layer switch needs MAC table space for 60k+ entries and TCAM for several thousand routes. “Enterprise”-focused edge routers do not need enough memory to hold 6 million routes.
If cheaper (slower CPU, less memory) equipment were available, if we were willing to live with the tradeoffs that route aggregation and filtering bring, and if we were willing to invest in the engineering hours, we could revive some old design practices and build performant networks for less money. Bring back scoped flooding domains, totally stubby areas, Level 1 areas, and BGP route dampening. There are a lot of edge AS routers out there receiving full tables whose CPUs grind through best path all day, because the internet never converges. They’ve already paid for a CPU capable of the job; might as well use it.
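To make the aggregation point concrete, here’s a minimal sketch using Python’s standard ipaddress module. The prefixes are made up, and in a real network you’d do this with summary or aggregate statements on the routers themselves, but the arithmetic is the same: contiguous more-specific routes collapse into one covering prefix, which is one entry instead of many for every downstream table.

```python
import ipaddress

# Four contiguous /24s, the kind of thing an access or aggregation
# layer might otherwise advertise upstream as individual routes.
# (Hypothetical prefixes, for illustration only.)
prefixes = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# collapse_addresses() merges contiguous and overlapping networks into
# the minimal covering set -- here, a single 10.1.0.0/22 aggregate.
summary = list(ipaddress.collapse_addresses(prefixes))

print(summary)                                    # [IPv4Network('10.1.0.0/22')]
print(len(prefixes), "routes in,", len(summary), "route out")
```

One /22 in the core instead of four /24s doesn’t sound like much, but repeat it across every closet and every site and the FIB, TCAM, and SPF/best-path workload shrink accordingly; that’s the capacity you stop paying for.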
Thank you for coming to my thread talk.