Load balancer features
Hardware and software load balancers may have a variety of special features. The fundamental feature of a load balancer is to distribute incoming requests over a number of backend servers in the cluster according to a scheduling algorithm. Most of the following features are vendor specific:
Asymmetric load: A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers having more capacity than others and may not always work as desired.
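To make the two points above concrete, here is a minimal Go sketch of weighted round-robin selection, one common scheduling algorithm. The backend addresses and weights are hypothetical, and a real balancer would also need locking for concurrent use.

    package main

    import "fmt"

    // backend is one server in the pool; weight expresses its share of traffic.
    type backend struct {
        addr   string
        weight int
    }

    // weightedRoundRobin expands the pool into a schedule in which each backend
    // appears as many times as its weight, then cycles through that schedule.
    // Not safe for concurrent use; a real balancer would add a mutex.
    type weightedRoundRobin struct {
        schedule []string
        next     int
    }

    func newWeightedRoundRobin(pool []backend) *weightedRoundRobin {
        w := &weightedRoundRobin{}
        for _, b := range pool {
            for i := 0; i < b.weight; i++ {
                w.schedule = append(w.schedule, b.addr)
            }
        }
        return w
    }

    // Pick returns the backend that should receive the next request.
    func (w *weightedRoundRobin) Pick() string {
        addr := w.schedule[w.next]
        w.next = (w.next + 1) % len(w.schedule)
        return addr
    }

    func main() {
        // Hypothetical pool: the second server gets twice the share of the first.
        lb := newWeightedRoundRobin([]backend{
            {addr: "10.0.0.1:8080", weight: 1},
            {addr: "10.0.0.2:8080", weight: 2},
        })
        for i := 0; i < 6; i++ {
            fmt.Println(lb.Pick())
        }
    }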
Priority activation: When the number of available servers drops below a certain number, or load gets too high, standby servers can be brought online.
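A rough sketch of that decision in Go, under the assumption that "available" simply means passing a health check and that the threshold is configured by hand:

    package main

    import "fmt"

    // activePool returns the set of servers to balance over: the primary pool
    // when enough of its members are healthy, otherwise primaries plus standbys.
    func activePool(primary, standby []string, healthy func(string) bool, minHealthy int) []string {
        var up []string
        for _, s := range primary {
            if healthy(s) {
                up = append(up, s)
            }
        }
        if len(up) >= minHealthy {
            return up
        }
        // Too few primaries left: bring the standby servers online as well.
        return append(up, standby...)
    }

    func main() {
        // Pretend one primary server is currently down.
        healthy := func(s string) bool { return s != "10.0.0.2:8080" }
        pool := activePool(
            []string{"10.0.0.1:8080", "10.0.0.2:8080"},
            []string{"10.0.0.9:8080"},
            healthy, 2)
        fmt.Println(pool)
    }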
SSL Offload and Acceleration: Depending on the workload, processing the encryption and authentication requirements of an SSL request can become a major part of the demand on the Web server's CPU; as the demand increases, users will see slower response times, as the SSL overhead is distributed among Web servers. To remove this demand on Web servers, a balancer can terminate SSL connections, passing HTTPS requests as HTTP requests to the Web servers. If the balancer itself is not overloaded, this does not noticeably degrade the performance perceived by end users. The downside of this approach is that all of the SSL processing is concentrated on a single device (the balancer), which can become a new bottleneck. Some load balancer appliances include specialized hardware to process SSL. Because the load balancer is expensive dedicated hardware, it may be cheaper to forgo SSL offload and add a few Web servers instead of upgrading it. Also, some server vendors such as Oracle/Sun incorporate cryptographic acceleration hardware into their CPUs, as in the T2000. F5 Networks incorporates a dedicated SSL acceleration hardware card in their Local Traffic Manager (LTM), which is used for encrypting and decrypting SSL traffic. One clear benefit to SSL offloading in the balancer is that it enables the balancer to do balancing or content switching based on data in the HTTPS request.
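As a sketch of what SSL (TLS) termination looks like in practice, the following Go program accepts HTTPS on the front side and forwards plain HTTP to a single backend. The certificate paths and backend address are placeholders, and a production balancer would of course spread requests over many backends.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Plain-HTTP backend behind the balancer (address is hypothetical).
        backend, err := url.Parse("http://10.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // The balancer terminates TLS here; the backend only ever sees HTTP.
        // Certificate and key paths are placeholders.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
    }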
Distributed Denial of Service (DDoS) attack protection: load balancers can provide features such as SYN cookies and delayed binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
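SYN cookies live in the kernel or appliance firmware and cannot be shown meaningfully here, but delayed binding can be sketched: the toy Go proxy below finishes the client's handshake and reads a complete request before it opens a connection to a (hypothetical) backend, so half-open client connections never consume backend resources.

    package main

    import (
        "bufio"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(client net.Conn) {
                defer client.Close()
                // Bind to a backend only after a complete request has arrived.
                req, err := http.ReadRequest(bufio.NewReader(client))
                if err != nil {
                    return
                }
                server, err := net.Dial("tcp", "10.0.0.1:8080")
                if err != nil {
                    return
                }
                defer server.Close()
                req.Write(server)       // forward the buffered request
                io.Copy(client, server) // relay the backend's response
            }(client)
        }
    }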
HTTP compression: reduces the amount of data to be transferred for HTTP objects by utilizing gzip compression, which is available in all modern web browsers. The larger the response and the further away the client is, the more this feature can improve response times. The tradeoff is that this feature puts additional CPU demand on the load balancer and could be done by Web servers instead.
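A minimal sketch of the idea in Go: a handler wrapper that gzips the response whenever the client advertises support for it. The backend handler and payload are invented for the example.

    package main

    import (
        "compress/gzip"
        "net/http"
        "strings"
    )

    // gzipResponseWriter compresses everything written to the response.
    type gzipResponseWriter struct {
        http.ResponseWriter
        gz *gzip.Writer
    }

    func (w gzipResponseWriter) Write(p []byte) (int, error) { return w.gz.Write(p) }

    // withCompression gzips responses for clients that accept gzip encoding.
    func withCompression(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
                next.ServeHTTP(w, r)
                return
            }
            w.Header().Set("Content-Encoding", "gzip")
            gz := gzip.NewWriter(w)
            defer gz.Close()
            next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, gz: gz}, r)
        })
    }

    func main() {
        hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(strings.Repeat("hello ", 1000))) // compressible payload
        })
        http.ListenAndServe(":8080", withCompression(hello))
    }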
TCP offload: different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilizes HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
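There is no single standard mechanism behind this, but the effect can be approximated with HTTP/1.1 keep-alive connection pooling: in the Go sketch below, requests arriving on many client connections are forwarded over a small, reused pool of persistent connections to a hypothetical backend, rather than one backend connection per client request.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    func main() {
        backend, err := url.Parse("http://10.0.0.1:8080") // hypothetical backend
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // A shared transport keeps a small pool of idle HTTP/1.1 (keep-alive)
        // connections to the backend, so requests from many client connections
        // are funneled over a few reused server-side sockets.
        proxy.Transport = &http.Transport{
            MaxIdleConns:        10,
            MaxIdleConnsPerHost: 10,
            IdleConnTimeout:     90 * time.Second,
        }

        log.Fatal(http.ListenAndServe(":8080", proxy))
    }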
TCP buffering: the load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the web server to free a thread for other tasks faster than it would if it had to send the entire response to the client directly.
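A sketch of response buffering using Go's reverse proxy: the balancer reads the entire backend response into memory first, so the backend connection is released immediately even if the client drains the data slowly. The backend address is hypothetical, and real products bound the amount of memory used, which is omitted here.

    package main

    import (
        "bytes"
        "io"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        backend, err := url.Parse("http://10.0.0.1:8080") // hypothetical backend
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // Read the whole response from the backend up front, then serve it to
        // the client from memory; the backend is freed even if the client is slow.
        proxy.ModifyResponse = func(resp *http.Response) error {
            body, err := io.ReadAll(resp.Body)
            if err != nil {
                return err
            }
            resp.Body.Close()
            resp.Body = io.NopCloser(bytes.NewReader(body))
            return nil
        }

        log.Fatal(http.ListenAndServe(":8080", proxy))
    }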
Direct Server Return: an option for asymmetrical load distribution, where request and reply have different network paths.
Health checking: the balancer polls servers for application layer health and removes failed servers from the pool.
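A simple illustration in Go: poll each backend's health endpoint on a fixed interval and expose only the servers that passed their last check. The /healthz path, the timeout, and the interval are assumptions for the sketch.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    // healthChecker polls each backend and tracks which ones are usable.
    type healthChecker struct {
        mu      sync.Mutex
        healthy map[string]bool
    }

    func (h *healthChecker) run(backends []string) {
        client := &http.Client{Timeout: 2 * time.Second}
        for {
            for _, addr := range backends {
                resp, err := client.Get("http://" + addr + "/healthz")
                ok := err == nil && resp.StatusCode == http.StatusOK
                if resp != nil {
                    resp.Body.Close()
                }
                h.mu.Lock()
                h.healthy[addr] = ok
                h.mu.Unlock()
            }
            time.Sleep(5 * time.Second)
        }
    }

    // pool returns only the backends that passed their last health check.
    func (h *healthChecker) pool() []string {
        h.mu.Lock()
        defer h.mu.Unlock()
        var up []string
        for addr, ok := range h.healthy {
            if ok {
                up = append(up, addr)
            }
        }
        return up
    }

    func main() {
        hc := &healthChecker{healthy: map[string]bool{}}
        go hc.run([]string{"10.0.0.1:8080", "10.0.0.2:8080"}) // hypothetical pool
        time.Sleep(6 * time.Second)
        fmt.Println("usable backends:", hc.pool())
    }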
HTTP caching: the balancer stores static content so that some requests can be handled without contacting the servers.
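A toy Go version of the idea: cache the body of each static object the first time it is fetched from the (hypothetical) backend and serve later requests from memory. Expiry, status codes, and response headers are ignored to keep the sketch short.

    package main

    import (
        "io"
        "log"
        "net/http"
        "sync"
    )

    // cachingProxy serves previously seen responses from memory and only
    // contacts the backend on a cache miss.
    type cachingProxy struct {
        backend string
        mu      sync.Mutex
        cache   map[string][]byte
    }

    func (c *cachingProxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        c.mu.Lock()
        body, hit := c.cache[r.URL.Path]
        c.mu.Unlock()

        if !hit {
            resp, err := http.Get(c.backend + r.URL.Path)
            if err != nil {
                http.Error(w, "backend unavailable", http.StatusBadGateway)
                return
            }
            body, _ = io.ReadAll(resp.Body)
            resp.Body.Close()
            c.mu.Lock()
            c.cache[r.URL.Path] = body
            c.mu.Unlock()
        }
        w.Write(body)
    }

    func main() {
        proxy := &cachingProxy{backend: "http://10.0.0.1:8080", cache: map[string][]byte{}}
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }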
Content filtering: some balancers can arbitrarily modify traffic on the way through.
HTTP security: some balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so that end users cannot manipulate them.
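One of these behaviors, removing server identification headers, is easy to sketch with Go's reverse proxy; error-page hiding and cookie encryption are omitted. The backend address and header names are assumptions.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        backend, err := url.Parse("http://10.0.0.1:8080") // hypothetical backend
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // Strip headers that identify the backend software before the
        // response reaches the client.
        proxy.ModifyResponse = func(resp *http.Response) error {
            resp.Header.Del("Server")
            resp.Header.Del("X-Powered-By")
            return nil
        }

        log.Fatal(http.ListenAndServe(":8080", proxy))
    }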
Priority queuing: also known as rate shaping, the ability to give different priority to different traffic.
Content-aware switching: most load balancers can send requests to different servers based on the URL being requested, assuming the request is not encrypted (HTTP) or, if it is encrypted (via HTTPS), that the HTTPS request is terminated (decrypted) at the load balancer.
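A minimal Go sketch of URL-based switching: requests under a given path prefix go to one (hypothetical) pool of servers and everything else goes to another.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "strings"
    )

    func mustParse(raw string) *url.URL {
        u, err := url.Parse(raw)
        if err != nil {
            log.Fatal(err)
        }
        return u
    }

    func main() {
        // Hypothetical specialised pools: one serves images, one everything else.
        images := httputil.NewSingleHostReverseProxy(mustParse("http://10.0.1.1:8080"))
        app := httputil.NewSingleHostReverseProxy(mustParse("http://10.0.2.1:8080"))

        // The balancer inspects the requested URL (possible because the request
        // is plain HTTP or TLS has already been terminated) and switches pools.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            if strings.HasPrefix(r.URL.Path, "/images/") {
                images.ServeHTTP(w, r)
                return
            }
            app.ServeHTTP(w, r)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }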
Client authentication: authenticate users against a variety of authentication sources before allowing them access to a website.
Programmatic traffic manipulation: at least one balancer allows the use of a scripting language to allow custom balancing methods, arbitrary traffic manipulations, and more.
Firewall: direct connections to backend servers are prevented for network security reasons. A firewall is a set of rules that decides whether traffic may pass through an interface or not.
Intrusion prevention system: offers application-layer security in addition to the network/transport-layer security offered by a firewall.