Improve your Windows server’s performance: 10 tips
Did you know that a few simple tricks can improve server performance? Some of the tips below can yield a considerable increase in performance at minimal expense. Here are the top three factors that help improve server performance.
Correct server configuration: It is very important to verify BIOS settings after powering up a new server. Server administrators need to familiarize themselves with the underlying technology so they can identify and make the best use of undervalued server features.
Optimizing and monitoring host server performance: This is a very important aspect to consider for virtualization setups. The fundamental rule is that the host operating system must be placed on a dedicated disk drive to avoid disk resource contention.
Use of performance monitoring tools: This is one thing that simply cannot be skipped in a data center environment. Nagios is one of the best tools available, and it is free and open source. There are also tools like Cacti that poll system statistics and represent them graphically. These tools help the system administrator monitor areas such as CPU load, disk and network I/O bottlenecks, and RAM and swap utilization, and they save money in the long term. Investing in them, and in training the support staff to understand them, is one of the best things you can do to improve server performance.
Improving Performance from the Server Side
Make sure the web server is not doing reverse DNS lookups.
Web servers are often set by default to take the IP address of the client and do a reverse DNS lookup on it (finding the name associated with the IP address) in order to pass the name to the logging facility or to fill in the REMOTE_HOST CGI environment variable. This is time consuming and not necessary, since a log parsing program can do all the lookups when parsing your log file later. You might be tempted to turn off logging altogether, but that would not be wise. You really need those logs to show how much bandwidth you’re using, whether it’s increasing, and lots of other valuable performance information. CGIs can also do the reverse lookup themselves if they need it.
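In Apache, for example, reverse lookups are controlled by a single directive (shown as a sketch; the configuration file location varies by installation):

```apache
# httpd.conf: log raw client IP addresses instead of resolving each one.
# A log-analysis tool can resolve names offline, outside the request path.
HostnameLookups Off
```

Recent Apache releases default this directive to Off, but it is worth confirming on older or inherited configurations.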
Increase the TCP retransmit timeout.
TCP will assume a segment has been lost if it has not been acknowledged within a certain amount of time, typically 200 milliseconds. For some slow Internet connections, this is not long enough. TCP segments may be arriving safely at the browser, only to be counted as lost by the server, which then retransmits them. Turning up the TCP retransmit timeout will fix this problem, but it will reduce performance for fast but lossy connections, where the reliability is poor even if the speed is good.
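On recent Windows Server releases the initial retransmission timeout is adjustable through the NetTCPIP PowerShell module; treat the snippet below as a sketch, since cmdlet availability and the permitted range vary by Windows version:

```powershell
# Inspect the current initial retransmission timeout (milliseconds)
Get-NetTCPSetting | Select-Object SettingName, InitialRtoMs

# Raise the initial RTO for one settings template, e.g. to 3 seconds, so
# segments to clients on very slow links are not retransmitted prematurely
Set-NetTCPSetting -SettingName InternetCustom -InitialRtoMs 3000
```

Remember the trade-off described above: a larger timeout slows loss recovery on fast but lossy connections.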
Locate near your users.
Internet Protocol data packets must go through a number of forks in the road on the way from the server to the client. Dedicated computers called routers make the decision about which fork to take for every packet. That decision, called a router “hop,” takes some small but measurable amount of time. Servers should be located as few router hops away from the audience as possible; this is what I mean by “near.” ISPs usually have their own high-speed network connecting all of their dial-in points of presence (POPs). A web surfer on a particular ISP will probably see better network performance from web servers on that same ISP than from web servers located elsewhere, partly because there are fewer routers between the surfer and the server.
National ISPs are near a lot of people. If you know most of your users are on AOL, for example, get one of your servers located inside AOL. The worst situation is to try to serve a population far away, forcing packets to travel long distances and through many routers. A single HTTP transfer from New York to Sydney can be painfully slow to start and simply creep along once it does start, or just stall. The same is true for transfers that cross small distances but too many routers.
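You can count the router hops between a given client and your server with traceroute (tracert on Windows); each numbered line of output is one hop:

```
# Run from the client's network; fewer hops generally means lower latency
traceroute www.example.com
```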
Buy a bigger pipe.
The most effective blunt instrument for servers and users alike is a better network connection, with the caveat that it’s rather dangerous to spend money on it without doing any analysis. For example, a better network connection won’t help an overloaded server in need of a faster disk or more RAM. In fact, it may crash the server because of the additional load from the network.
Buy a better server.
While server hardware is rarely the bottleneck for serving static HTML, a powerful server is a big help if you are generating a lot of dynamic content or making a lot of database queries. Upgrade from PC hardware to workstation hardware from DEC, HP, Sun, or SGI. They have much better I/O subsystems and scalability.
Buy more RAM and increase cache sizes.
RAM accesses data thousands of times faster than any disk. Really. So getting more data from RAM rather than from disk can have a huge positive impact on performance. All free memory will automatically be used as filesystem cache in most versions of Unix and in NT, so your machine will perform repetitive file serving faster if you have more RAM. Web servers themselves can make use of available memory for caches. More RAM also gives you more room for network buffers and more room for concurrent CGIs to execute.
But memory doesn’t solve all problems. If the network is the bottleneck, then more RAM may not help at all. More RAM also won’t help if all of your content is already in memory, because you’ve already eliminated the disk as a potential bottleneck. Another way to boost performance is with level 2 cache RAM, also known as SRAM, which is several times as fast as ordinary DRAM but also several times more expensive. More cache RAM will help performance by keeping frequently used code close to the CPU.
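The effect of caching is easy to sketch: repeated requests for the same file should touch the disk once and RAM ever after. Real operating systems do this transparently in the filesystem cache; the toy read-through cache below just makes the disk-access count visible:

```python
import os
import tempfile

# Minimal sketch of why serving from memory wins: a tiny read-through
# cache that counts how often it actually touches the disk.
disk_reads = 0
cache = {}

def serve(path):
    global disk_reads
    if path not in cache:          # cache miss: pay the disk access
        disk_reads += 1
        with open(path, "rb") as f:
            cache[path] = f.read()
    return cache[path]             # cache hit: served straight from RAM

# A throwaway file stands in for a popular static page
fd, page = tempfile.mkstemp()
os.write(fd, b"<html>hello</html>")
os.close(fd)

for _ in range(1000):              # 1000 "requests" for the same page
    serve(page)

print(disk_reads)                  # -> 1: the disk was touched only once
```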
Buy better disks.
Get the disks with the lowest seek time, because disks spend most of their time seeking (moving the arm to the correct track) in the kind of random access typical of web serving. A collection of small disks is often better than a single large disk. 10000 rpm is better than 7200 rpm. Bigger disk controller caches are better. SCSI is better than IDE or EIDE.
Set up mirror servers.
Use multiple mirrored servers of the same capacity and balance the load between them. See Chapter 2 for information on load balancing. Your load will naturally be balanced to some degree if you are running a web site with an audience scattered across time zones or around the world, such as a web site for a multinational corporation.
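The simplest way to balance load between mirrors is round-robin DNS: publish one name with one A record per mirror, and most resolvers will rotate through the answers. A BIND zone-file fragment (with placeholder documentation addresses) looks like this:

```
; Three mirrored servers behind a single name
www    IN  A   192.0.2.10
www    IN  A   192.0.2.11
www    IN  A   192.0.2.12
```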
Get the latest software.
Software generally gets faster and better with each revision. At least that’s how things are supposed to work. Use the latest version of the operating system and web server and apply all of the non-beta patches, especially the networking- and performance-related patches.
Dedicate your web server.
Don’t run anything unnecessary for web service on your web server. In particular, your web server should not be an NFS server, an NNTP server, a mail server, or a DNS server. Find other homes for those services. Kill all unnecessary daemons, such as lpd. Don’t even run a windowing system on your web server. You don’t really need it, and it takes up a lot of RAM. Terminal mode is sufficient for administering your web server.
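On a modern Linux host this housekeeping might look like the commands below; the service names are examples only and vary by distribution:

```
# See what is running besides the web server
systemctl list-units --type=service --state=running

# Retire services the web server does not need
systemctl disable --now cups.service nfs-server.service

# Boot to a text console instead of a windowing system
systemctl set-default multi-user.target
```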
Use server APIs rather than CGIs or SSIs.
While CGI is easy to program and universally understood by servers, it relies on forking additional processes for every CGI-generated page view. This is a big performance penalty. Server-side includes, also known as server-parsed HTML, often fork new processes as well and suffer the additional penalty of the parsing, which is compute-intensive. If you need to insert dynamic content such as the current date or hit count in the HTML you return, your performance will be much better if you use your server’s API rather than CGI or SSI, even though the API code won’t be portable across web servers. Other options that have better performance than CGIs are FastCGI, Java servlets, and Apache’s Perl module, but these are not quite as fast as server API programs. If you do use CGI, compile the CGI rather than using an interpreted language. See Chapter 15, for additional tips.
By the way, SSI and CGI also have associated security hazards. For example, if FTP upload space is shared with the web server, then it may be possible for a user to upload a page including an SSI directive and to execute whatever he or she likes. With CGI, users may write naive scripts which inadvertently accept commands from anyone viewing the web page.
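The fork-per-request cost is easy to see in miniature. Below, the same page fragment is produced in-process (API-style) and by spawning a fresh interpreter for every request (CGI-style); this is a sketch, not a real server:

```python
import subprocess
import sys
import timeit

def render_inproc():
    # "Server API" style: generate the page fragment in-process
    return "<p>hits: 42</p>"

def render_cgi():
    # "CGI" style: fork and exec a new interpreter for every request
    out = subprocess.run(
        [sys.executable, "-c", "print('<p>hits: 42</p>', end='')"],
        capture_output=True, text=True, check=True)
    return out.stdout

# Both paths produce the same fragment...
assert render_inproc() == render_cgi()

# ...but the CGI path pays a process-creation penalty on every call
inproc_t = timeit.timeit(render_inproc, number=20)
cgi_t = timeit.timeit(render_cgi, number=20)
print(f"in-process: {inproc_t:.4f}s  per-request fork: {cgi_t:.4f}s")
```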
Examine custom applications closely.
Any custom applications used by the web server should be profiled and carefully examined for performance. It is easy to write an application with correct functionality but poor performance.
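A functionally correct but slow routine is easy to catch with a profiler. Here is a minimal cProfile session wrapped around a hypothetical template helper:

```python
import cProfile
import io
import pstats

def slow_tag(items):
    # Correct output, but builds the string by repeated concatenation
    out = ""
    for it in items:
        out += f"<li>{it}</li>"
    return out

profiler = cProfile.Profile()
profiler.enable()
slow_tag(range(10000))
profiler.disable()

# Print the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```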
Preprocess all of your dynamic content.
Unless you are generating content that depends on a huge number of possible inputs from the user, you can trade storage for performance by using all possible inputs to create static HTML. Serving static HTML is much faster than creating any kind of dynamic content.
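A sketch of the trade: if a page depends only on a small, enumerable input (a hypothetical product category here), render every variant once and let the server treat the results as plain static files:

```python
import pathlib
import tempfile

# Hypothetical page generator: output depends only on the category name
def render(category):
    return f"<html><body><h1>{category.title()} specials</h1></body></html>"

outdir = pathlib.Path(tempfile.mkdtemp())
for category in ["books", "music", "video"]:   # every possible input
    (outdir / f"{category}.html").write_text(render(category))

# The web server now serves these as static files, with no per-request work
print(sorted(p.name for p in outdir.iterdir()))
```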
Let a professional host your server.
Use one of the web hosting services like Genuity, Globalcenter, or Exodus to host your web site at multiple locations close to the Network Access Point (NAP). It’s expensive, but they do all the hosting work for you, leaving you free to work on your content.
As you can tell from the list above, data center management is not an easy job; it takes professionals to manage, monitor, and administer. For some of the best folks in the business in Indianapolis, get in touch with https://www.dlines.co.in/ and schedule a free consultation today.