It is always tempting and interesting to see how the big players build their networks. We saw some pieces from Google, Facebook, Amazon, LinkedIn, and now Twitter joins in. The Big 5 set is complete!
Last week the engineering force at Twitter released an article titled "The Infrastructure Behind Twitter: Scale". The article starts off with networking in focus and covers both the DC and backbone challenges Twitter has faced over time.
The Data Center story is no surprise at all: the IGP started to give more trouble than benefit and was swapped for a "BGP as IGP" solution. A classic of modern DC design. By the way, there is a bonus track in the DC section of the article: a nice slide deck explaining how Twitter did this migration on a live network.
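The core idea behind "BGP as IGP" is simple: give each rack its own ASN, run eBGP on every fabric link, and let shortest-AS-path selection do the routing that an IGP would otherwise do. Here is a minimal sketch of that path selection over a toy 3-stage Clos; the topology and names are purely illustrative, not Twitter's:

```python
from collections import deque

# Hypothetical Clos fabric: each leaf (ToR) would get its own private ASN
# and each edge below is an eBGP session. Best path then reduces to the
# shortest AS path, i.e. fewest eBGP hops -- the essence of BGP-as-IGP.
FABRIC = {
    "leaf1": ["spine1", "spine2"],
    "leaf2": ["spine1", "spine2"],
    "leaf3": ["spine1", "spine2"],
    "spine1": ["leaf1", "leaf2", "leaf3"],
    "spine2": ["leaf1", "leaf2", "leaf3"],
}

def as_path(src, dst):
    """BFS over eBGP sessions: returns the shortest AS path src -> dst."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in FABRIC[path[-1]]:
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None

print(as_path("leaf1", "leaf3"))  # -> ['leaf1', 'spine1', 'leaf3']
```

Note that equal-length paths via spine1 and spine2 exist; in a real fabric those become ECMP next hops (multipath), which the sketch ignores.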
On the backbone side, Twitter surprisingly had no TE at some point (o.O). Now they most certainly do, and they have also adopted TE++ from Juniper, which helps them overcome the bin-packing problem (a common problem for the Big 5; Facebook had it as well). After all, RSVP-TE auto-bandwidth has some serious players that have implemented it.
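The bin-packing problem in one picture: an RSVP-TE LSP reserves its full bandwidth along a single path, so one fat LSP can fit nowhere even when the aggregate spare capacity is plentiful. TE++ sidesteps this by splitting the demand across parallel sub-LSPs of a container LSP. A toy illustration with made-up numbers (not Twitter's):

```python
def first_fit(links, demand):
    """First-fit: reserve the demand on the first link with enough headroom.
    Returns the chosen link index, or None if the demand fits nowhere."""
    for i, free in enumerate(links):
        if free >= demand:
            links[i] -= demand
            return i
    return None

# Three parallel 10G links, each with 6G already reserved -> 4G free each.
links = [4, 4, 4]

# A single 9G LSP fits on no link, even though 12G is free in aggregate:
print(first_fit(links[:], 9))  # -> None

# TE++-style splitting: the same 9G demand as three 3G sub-LSPs all fit.
placed = [first_fit(links, 3) for _ in range(3)]
print(placed)  # -> [0, 1, 2]
```

Real TE++ also resizes and merges sub-LSPs automatically as auto-bandwidth measurements change, which this sketch does not attempt to model.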
When it comes to the Edge, problems like PoP design and steering traffic to PoPs start to pop up. Serving a customer from the closest point available (lower RTT means less delay and more goodput) is a cornerstone for worldwide-scale players. For the latter, Twitter ditched geo-DNS in favor of BGP anycast. We saw this done by LinkedIn and Facebook.
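With anycast, every PoP originates the same prefix and plain BGP best-path selection steers each client toward its nearest PoP, instead of DNS guessing location from the resolver's IP. The sketch below reduces best-path selection to shortest AS path over a hypothetical topology; all ASNs and PoP names are made up for illustration:

```python
# Hypothetical AS paths from two clients to two PoPs that both announce
# the same anycast prefix; the shorter AS path wins (a simplification
# of full BGP best-path selection).
AS_PATHS = {
    "client_eu": {"pop_ams": ["AS1299", "AS_TWTR"],
                  "pop_sjc": ["AS1299", "AS3356", "AS_TWTR"]},
    "client_us": {"pop_ams": ["AS174", "AS1299", "AS_TWTR"],
                  "pop_sjc": ["AS174", "AS_TWTR"]},
}

def nearest_pop(client):
    """Pick the PoP whose announcement carries the shortest AS path."""
    paths = AS_PATHS[client]
    return min(paths, key=lambda pop: len(paths[pop]))

print(nearest_pop("client_eu"))  # -> pop_ams
print(nearest_pop("client_us"))  # -> pop_sjc
```

The trade-off geo-DNS never had: anycast routing can shift mid-connection if paths change, so PoPs must cope with connections landing at a "wrong" site.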
Though from a bird's-eye view all the problems are the same, we should thank Twitter for sharing! Maybe we'll see more goodness from their Engineering dept. soon.
#Article #Datacenter #RSVPTE #BGP