By Elmalla A. on January 10, 2017
Originally written for i-Awcs.
Full Stack Web Performance is written for anyone grappling with the challenges of performance in a DevOps environment. Whether you’re a web developer, a DevOps engineer, an engineering manager or an architect, we think you’ll glean something useful from this practical how-to by Tom Barker.
We’re in the midst of a giant leap forward in software engineering and IT. Cross-functional DevOps teams are the order of the day, and Full Stack Web Performance addresses how web performance fits into this ever-changing environment. Topics in our book are organized into three high-level areas of focus in a product development group:
Client-side – the user-facing piece of the application that generally runs on the user's hardware
Infrastructure – the facilitating pieces of our application, commonly the CDN and cloud services
Operations – the practices we put in place to monitor and alert on the health of our applications
Full Stack Web Performance also presents ways to leverage existing tools and libraries for huge payoffs. The recommendations and solutions outlined in our book can be measured in days and weeks rather than months and years.
In Chapter 1 we discuss client-side issues. Browser makers are implementing their own performance improvements, and these frequent, incremental changes create new complexity for developers and operations teams. One way to keep up with the changes is to run synthetic performance testing, such as speed tests.
These testing tools load a site and run a battery of tests against it, using a dictionary of performance best practices as the criteria. There are many quality performance-testing tools on the market. We look at WebPageTest as an example and give you a step-by-step tutorial on how it works.
Performance-testing tools are necessary, but running them ad hoc is not a sustainable solution. We recommend working them into your existing continuous integration environment, and we show you a variety of ways it can be done.
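To make the CI idea concrete, here is a minimal sketch of a performance-budget gate. The metric names, budget values, and sample results below are hypothetical placeholders; in a real pipeline they would come from a synthetic testing tool such as WebPageTest.

```python
# Hypothetical performance budget: metric names and limits are illustrative.
PERFORMANCE_BUDGET = {
    "first_byte_ms": 500,     # time to first byte
    "start_render_ms": 1500,  # first visual change
    "fully_loaded_ms": 5000,  # page fully loaded
}

def check_budget(results: dict, budget: dict) -> list:
    """Return (metric, actual, limit) tuples for every metric over budget."""
    return [
        (metric, results[metric], limit)
        for metric, limit in budget.items()
        if results.get(metric, float("inf")) > limit
    ]

# Sample results, as they might be parsed from one synthetic test run.
results = {"first_byte_ms": 420, "start_render_ms": 1800, "fully_loaded_ms": 4700}
violations = check_budget(results, PERFORMANCE_BUDGET)
for metric, actual, limit in violations:
    print(f"FAIL {metric}: {actual}ms exceeds {limit}ms budget")
```

A CI job would exit nonzero when `violations` is non-empty, failing the build before a regression reaches production.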
In Chapter 2, we look at infrastructure performance optimizations worth implementing. We firmly believe there are significant wins you can achieve by simply leveraging your existing architecture. A content delivery network (CDN) in particular can show immediate and significant performance improvements.
A CDN is a globally distributed network used for hosting and serving data. Our book specifically discusses two commercial options available via a CDN: edge caching and global traffic management.
Latency issues are a big concern for all websites. To avoid delays, many companies deploy multiple data centers across the country to keep things running smoothly. Proximity of your end users to the machines serving your application is key. With edge caching you can serve content from the same state or even the same city.
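The core of edge caching is a freshness check: the edge node serves a stored copy as long as it has not outlived the lifetime the origin assigned to it. A minimal sketch, assuming the origin sets a `Cache-Control: max-age` header (real CDNs layer many more rules, such as `s-maxage` and purge APIs, on top):

```python
import time

def is_fresh(cached_at, max_age_seconds, now=None):
    """Return True if a cached object can still be served from the edge.

    cached_at: Unix timestamp when the object was stored at the edge.
    max_age_seconds: lifetime from the origin's Cache-Control header.
    """
    now = time.time() if now is None else now
    return (now - cached_at) < max_age_seconds

# An object cached 120 seconds ago with max-age=300 is still fresh;
# on a miss or an expired entry, the edge fetches from origin and re-caches.
print(is_fresh(cached_at=1000.0, max_age_seconds=300, now=1120.0))
```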
Global traffic management (GTM) is another CDN feature that helps balance traffic between data centers. The system automatically routes requests based on criteria such as availability, proximity, and performance. In this way data reaches users at an optimal rate.
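A GTM routing decision can be pictured as a filter-then-rank step over those criteria. The sketch below is illustrative only: the data-center records, field names, and the weighting that combines proximity with measured latency are all hypothetical.

```python
def pick_data_center(data_centers):
    """Filter on availability, then rank by a combined proximity/performance
    score (the 10x latency weight here is an arbitrary illustrative choice)."""
    available = [dc for dc in data_centers if dc["available"]]
    if not available:
        raise RuntimeError("no available data centers")
    return min(available, key=lambda dc: dc["distance_km"] + dc["latency_ms"] * 10)

# Hypothetical data centers as a GTM system might track them.
data_centers = [
    {"name": "us-east", "available": True,  "distance_km": 300,  "latency_ms": 20},
    {"name": "us-west", "available": True,  "distance_km": 4000, "latency_ms": 80},
    {"name": "eu-west", "available": False, "distance_km": 6000, "latency_ms": 120},
]
print(pick_data_center(data_centers)["name"])  # the nearby, fast, available one
```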
In addition, using a cloud service provider to create an infrastructure that scales to accommodate heavy traffic helps avoid performance-killing bottlenecks. The basic architecture of a website running on a cloud platform looks very similar to traditional architecture: there are application nodes (connection points) running in availability zones, and a load balancer out front routing incoming traffic.
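The load balancer's simplest routing policy is round-robin: each incoming request goes to the next application node in rotation, spreading load across availability zones. A minimal sketch with hypothetical node names:

```python
import itertools

class RoundRobinBalancer:
    """Rotate incoming requests across a fixed pool of application nodes."""

    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def route(self, request):
        """Assign the request to the next node in rotation."""
        return next(self._cycle)

nodes = ["node-a (zone-1)", "node-b (zone-1)", "node-c (zone-2)"]
lb = RoundRobinBalancer(nodes)
for req_id in range(4):
    print(f"request {req_id} -> {lb.route(req_id)}")
# The fourth request wraps back to node-a, so no single node is overloaded.
```

Real cloud load balancers add health checks and weighted or least-connections policies, but the rotation above is the core idea.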
Even cloud providers go offline occasionally. We look at options to keep downtime to a minimum and not rely solely on one tool.