Shannon Weyrick is Director of Engineering for NS1.
Long gone are the days of loading one floppy disk after another into a drive to get the latest software – and good riddance. Many forces are at work in the market today to bring about a foundational change in how we receive applications. Deployment automation, globally distributed cloud computing providers, and Infrastructure-as-a-Service (IaaS) are among the tools that have led to applications now being distributed by default.
New Needs for an Old Technology
Distributed applications are possible due to technological advancements, but the tools website operators have at their disposal to effectively route traffic to their newly distributed applications haven’t kept pace. Your app is distributed, but how do you get your users to the right points of presence (POPs)?
Current methods of traffic management include networking techniques like BGP anycast, capex-heavy hardware appliances with global load balancing add-ons, and third-party managed DNS platforms. These options are expensive and hamper productivity.
Since DNS is the entry point for practically every website and application, it is a great place to enact traffic management policies. However, the capabilities of most managed DNS platforms are severely limited because they were not designed with today’s applications in mind. For instance, most managed DNS platforms are built using off-the-shelf software like BIND or PowerDNS, onto which features like monitoring and geo-IP databases are grafted.
DNS platforms serve an important function, but because they have offered so little for so long, expectations have been low as well. A best-in-class platform has been expected to do just two things with regard to traffic management: first, not send users to a server that is down, and second, try to return the IP address of the server closest to the end user making the request.
Though the DNS platform performs these duties as required, more is needed today. A real-world analogy would be using an early-model GPS unit to get to a gas station: it can give you the location of one that’s close by and may be open according to its Yellow Pages listing, but that’s about it. Maybe there is roadwork or congestion on the one route you can take to get there. Maybe the gas station is out of diesel, or perhaps it’s open but backed up with lines stretching down the block. Perhaps a gas station that’s a bit farther away would have been a better choice?
Today’s high-performing Internet properties go far beyond proximity and a binary notion of “up/down.” Does the data center have excess capacity? What is the network path like: is there a fiber cut, or congestion at a particular ISP, that we should route around? Are there any data privacy or protection requirements we need to take into account?
DNS for Today
Clearly, modern infrastructure, capabilities and even regulations call for a new breed of DNS traffic management. Next-gen DNS platforms have been built from the ground up with traffic management at their core, bringing to market exciting capabilities and innovative new tools that allow businesses to enact traffic management in ways that were previously impossible.
A DNS platform built to meet current expectations will have these five elements:
- Proper routing: Regulations for data flow vary around the world. Geofencing can ensure users in the EU are only serviced by EU data centers, for instance, while ASN fencing can make sure all users on China Telecom are served by ChinaCache. Using IP fencing will make sure local-printer.company.com automatically returns the IP of your local printer, regardless of which office an employee is visiting. Look for solutions that route users based on their ISP, ASN, IP prefix or geographical location.
- Endpoint monitoring: Can the platform constantly monitor endpoints from the vantage point of the end user? Make sure it can, and that it can then send those coming from each network to the endpoint that will service them best.
- Flexibility: Can the solution use scalable infrastructure to handle planned or unplanned traffic spikes? If your primary colocation environment is becoming overloaded, make sure you are able to dynamically send new traffic to another environment according to your business rules, whether it’s AWS, the next nearest facility or a DR/failover site.
- Filtering: Consider platforms that apply filters with weights, priorities and even stickiness, enacting business rules to meet your applications’ needs. Distribute traffic in accordance with commits and capacity. Combine weighted load balancing with sticky sessions (i.e. session affinity) to adjust the ratio of traffic distributed among a group of servers while ensuring that returning users continue to be directed to the same endpoint.
- Overload prevention: Automatically adjusting the flow of traffic to network endpoints in real time, based on telemetry from endpoints or applications, can prevent a data center from being overloaded without taking it offline entirely, seamlessly routing excess users to the next nearest data center with spare capacity.
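One way to picture how these elements combine is as an ordered chain of filters: each filter narrows or reorders a pool of candidate answers before the DNS response is assembled. The sketch below, in Python, is purely illustrative – the endpoint fields (region, health, load, capacity) and filter names are assumptions for the sake of the example, not any particular platform’s API:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    ip: str
    region: str      # data center location
    up: bool         # latest health-check result
    load: float      # current requests/sec, from telemetry
    capacity: float  # provisioned requests/sec

def geofence(pool, user_region):
    """Keep only endpoints in the user's region (e.g. EU users -> EU
    data centers); fall back to the full pool if there is no local POP."""
    fenced = [e for e in pool if e.region == user_region]
    return fenced or pool

def drop_down(pool):
    """Remove endpoints that failed their last health check."""
    return [e for e in pool if e.up]

def shed_overloaded(pool):
    """Remove endpoints running at or above capacity, so new traffic
    spills to the next-best choice instead of piling on."""
    shed = [e for e in pool if e.load < e.capacity]
    return shed or pool  # if everything is hot, still answer something

def resolve(pool, user_region):
    """Run the filter chain, then return the least-loaded survivor."""
    for f in (lambda p: geofence(p, user_region), drop_down, shed_overloaded):
        pool = f(pool)
    return min(pool, key=lambda e: e.load / e.capacity).ip

pool = [
    Endpoint("192.0.2.10", "EU", up=True, load=800, capacity=1000),
    Endpoint("192.0.2.20", "EU", up=True, load=990, capacity=1000),
    Endpoint("198.51.100.5", "US", up=True, load=100, capacity=1000),
]
print(resolve(pool, "EU"))  # 192.0.2.10: in-region, healthy, least loaded
```

The ordering matters: fencing runs first so that regulatory constraints are never traded away for performance, while load shedding runs last so it only arbitrates among endpoints that are already compliant and healthy.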
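Weighted load balancing and session affinity can also be combined in a single selection step. A common trick is to hash a stable client identifier onto the cumulative weight ranges: new clients land roughly in proportion to the weights, while a returning client maps to the same point, and therefore the same server, as long as the server list and weights are unchanged. The function and hostnames below are hypothetical, shown only to make the idea concrete:

```python
import hashlib
import random

def weighted_sticky_pick(servers, weights, client_id=None):
    """Pick a server in proportion to the given weights. If client_id is
    supplied, derive the pick deterministically from a hash of that id,
    so the same client keeps getting the same answer (session affinity)."""
    total = sum(weights)
    if client_id is not None:
        # Map the client id to a stable point in [0, total).
        digest = hashlib.sha256(client_id.encode()).digest()
        point = int.from_bytes(digest[:8], "big") % total
    else:
        point = random.uniform(0, total)
    # Walk the cumulative weight ranges to find the chosen server.
    upto = 0
    for server, weight in zip(servers, weights):
        upto += weight
        if point < upto:
            return server
    return servers[-1]

servers = ["a.example.net", "b.example.net"]  # hypothetical endpoints
weights = [3, 1]                              # roughly a 3:1 traffic split
pick = weighted_sticky_pick(servers, weights, client_id="203.0.113.7")
```

Note the trade-off in this simple form: changing the weights or the server list remaps some existing clients. Platforms that need stickiness to survive pool changes typically reach for consistent hashing instead.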
DNS has been a reliable Internet workhorse for about 30 years, and not much was expected of it until recently. When the digital ecosystem began to radically shift in light of the IoT, mobility and transformational new capabilities, it became apparent that the old way of managing DNS wasn’t going to work. Fortunately, next-gen DNS platforms are able to offer reliability and performance at Internet scale for high-volume, mission-critical applications. Look for the five elements listed above to ensure fast, reliable application delivery.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.