Unpacking Cedexis: Creating Multi-CDN and Multi-Cloud Applications


Technology is incredibly complex and, at the same time, incredibly unreliable. As a result, we build backup measures into the DNA of everything around us. Our laptops switch to battery when the power goes out. Our cellphones switch to cellular data when they lose connection to WiFi.

At the heart of this resilience, technology is constantly choosing between the available providers of a resource with the idea that if one provider becomes unavailable, another provider will take its place to give you what you want.

When the consumers are tightly coupled with the providers, for example, a laptop consuming power at Layer 1 or a server consuming LAN connectivity at Layer 2, the choices are limited and the objective, when choosing a provider, is primarily one of availability.

Multiple providers for more than availability.

As multi-provider solutions make their way up the stack, however, additional data and time to make decisions enable choosing a provider based on other objectives like performance. Routing protocols, such as BGP, operate at Layer 3. They use path selection logic, not only to work around broken WAN connections but also to prefer paths with higher stability and lower latency.

As pervasive and successful as the multi-provider pattern is, many services fail to adopt a full stack multi-provider strategy. Cedexis is an amazing service which has come to change that by making it trivial to bring the power of intelligent, real-time provider selection to your application.

I first implemented Multi-CDN using Cedexis about 2 years ago. It was a no-brainer to go from Multi-CDN to Multi-Cloud. The additional performance, availability, and flexibility for the business became more and more obvious over time. Having a good multi-provider solution is key in cloud-based architectures and so I set out to write up a quick how-to on setting up a Multi-Cloud solution with Cedexis; but first you need to understand a bit about how Cedexis works.

Cedexis Unboxed

Cedexis has a number of components:

  1. Radar
  2. OpenMix
  3. Sonar
  4. Fusion

OpenMix

OpenMix is the brain of Cedexis. It looks like a DNS server to your users but, in fact, it is a multi-provider logic controller. In order to set up multi-provider solutions for our sites, we build OpenMix applications. Cedexis comes with the most common applications pre-built, but the possibilities are pretty endless if you want something custom. As long as you can get the data you want into OpenMix, you can make your decisions based on that data in real time.
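
As a rough illustration only (real OpenMix applications are written against Cedexis's own application API, not this), here is a Python sketch of the kind of decision an OpenMix app makes: take live per-provider metrics and answer the DNS query with the best candidate. The provider names, thresholds, and CNAME targets are made up.

```python
# Illustrative sketch only -- this is NOT the OpenMix API, just the shape of
# the decision: given live per-provider metrics, answer with the best CNAME.

PROVIDERS = {
    # hypothetical aliases -> CNAME targets
    "cdn_a": "www.example.com.cdn-a.net",
    "cdn_b": "www.example.com.cdn-b.net",
}

def choose_provider(metrics, availability_threshold=90.0):
    """Pick the available provider with the lowest measured RTT.

    metrics: {"cdn_a": {"avail": 99.2, "rtt_ms": 38}, ...}
    """
    candidates = {
        name: data for name, data in metrics.items()
        if name in PROVIDERS and data["avail"] >= availability_threshold
    }
    if not candidates:
        # Everything looks down: fall back to a default rather than failing the lookup.
        return PROVIDERS["cdn_a"]
    best = min(candidates, key=lambda name: candidates[name]["rtt_ms"])
    return PROVIDERS[best]

# Example decision for one resolver's view of the world:
print(choose_provider({
    "cdn_a": {"avail": 99.2, "rtt_ms": 38},
    "cdn_b": {"avail": 97.5, "rtt_ms": 29},
}))  # -> www.example.com.cdn-b.net
```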

Radar

Radar is where Cedexis really turned the industry on its head. Radar uses a JavaScript tag to crowdsource billions of Real User Monitoring (RUM) metrics in real time. Each time a user visits a page with the Radar tag, they take a small number of random performance measurements and send the data back to Cedexis for processing.

The measurements are non-intrusive. They only happen several seconds after your page has loaded and you can control various aspects of what and how much gets tested by configuring the JS tag in the portal.
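
The Radar tag itself is JavaScript running in the visitor's browser, but conceptually each probe looks something like the following hedged Python sketch: fetch a small test object from a few providers, time it, and report the samples back for aggregation. The test URLs here are placeholders.

```python
# Conceptual illustration of a Radar-style probe (the real tag is JavaScript
# in the browser). Times a small test object per provider; URLs are made up.
import time
import urllib.request

TEST_OBJECTS = {
    "cdn_a": "https://test-objects.cdn-a.example/r20.png",
    "cdn_b": "https://test-objects.cdn-b.example/r20.png",
}

def measure(url, timeout=5):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
        return {"ok": True, "ms": (time.monotonic() - start) * 1000}
    except OSError:
        return {"ok": False, "ms": None}

samples = {name: measure(url) for name, url in TEST_OBJECTS.items()}
print(samples)  # in the real system these samples are reported back for aggregation
```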

It’s important to note that Radar has two types of measurements:

  1. Community
  2. Private

Community Radar

Community measurements are made against shared endpoints within each service provider. All Cedexis users that implement the Radar tag and allow community measurements get free access to the community Radar statistics. The community includes statistics for the major Cloud compute, Cloud storage, and CDNs making Radar the first place I go to research developments and trends in the Cloud/CDN markets.

Community Radar is the fastest and easiest way to use Cedexis out of the box and the community measurements also have the most volume so they are very accurate all the way up to the “door” of your service provider. They do have some disadvantages though.

The community data doesn’t account for performance changes specific to each of the provider’s tenants. For example, community Radar for Amazon S3 will gather RUM data for accessing a test bucket in the specified region. This data assumes that within the S3 region all the buckets perform equally.

Additionally, there are providers which may opt out of community measurements, so you might not have community data for some providers at all. In that case, I suggest you get your account managers talking and push to have them included. Sometimes it is just a question of demand.

Private Radar

Cedexis has the ability to configure custom Radar measurements as well. These measurements will only be taken by your users, the ones using your JS tag.

Private Radar lets you monitor dedicated and other platforms which aren’t included in the community metrics. If you have enough traffic, private Radar measurements have the added bonus of being specific to your user base and of measuring your specific application so the data can be even more accurate than the community data.

The major disadvantage of private Radar is that low volume metrics may not produce the best decisions. With that in mind, you will want to supplement your data with other data sources. I’ll show you how to set that up.

Situational Awareness

More than just a research tool, Radar makes all of these live metrics available for decision-making inside OpenMix. That means we can make much more intelligent choices than we could with less precise technologies like Geo-targeting and Anycast.

Most people using Geo-targeting assume that a destination which is geographically close is also close from a networking point of view. In reality, network latency depends on many factors like available bandwidth, number of hops, etc. Anycast can pick a destination with lower latency, but it's stuck way down in Layer 3 of the stack with no idea about application performance or availability.

With Radar, you get real-time performance comparisons of the providers you use, from your users' perspective. You know that people on ISP Alice are getting better performance from the East coast DC while people on ISP Bob are getting better performance from the Midwest DC, even if both these ISPs serve the same geography.

Sonar

Whether you are using community or low volume private Radar measurements, you ideally want to try to get more application-specific data into OpenMix. One way to do this is with Sonar.

Sonar is a synthetic polling tool which will poll any URL you give it from multiple locations and store the results as availability data for your platforms. For the simplest implementation, you need only an address that responds with an OK if everything is working properly.

If you want to get more bang for your buck, you can make that URL an intelligent endpoint so that if your platform is nearing capacity, you can pretend to be unavailable for a short time to throttle traffic away before your location really has issues.

You can also use the Sonar endpoints as a convenient way to automate diverting traffic for maintenance windows. No splash pages required.
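
A minimal sketch of what such an endpoint could look like, assuming a /health path, a load-average heuristic, and a maintenance flag file; swap in whatever capacity signal makes sense for your platform:

```python
# Minimal sketch of a health endpoint Sonar could poll. The /health path,
# load heuristic, flag file, and thresholds are assumptions for illustration.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

MAINTENANCE_FLAG = "/etc/myapp/maintenance"   # touch this file to drain traffic
LOAD_THRESHOLD = 0.85                         # pretend to be unavailable above this

def current_load():
    # 1-minute load average normalized by CPU count; replace with a real capacity metric.
    return os.getloadavg()[0] / (os.cpu_count() or 1)

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        draining = os.path.exists(MAINTENANCE_FLAG) or current_load() > LOAD_THRESHOLD
        body = b"DRAINING" if draining else b"OK"
        self.send_response(503 if draining else 200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Health).serve_forever()
```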

Fusion

Fusion is really another amazing piece of work from Cedexis. As you might guess from its name, Fusion is where Cedexis glues services together and it comes in two flavors:

  1. Global Purge
  2. Data Feeds

Global Purge

By nature, one of the most apropos uses of Cedexis is to marry multiple CDN providers for better performance and stability. Every CDN has countries where it performs better and countries where it performs worse. In addition, maintenance windows at a CDN provider can be devastating for performance even though they usually won't cause downtime.

The downside of a Multi-CDN approach is the overhead involved in managing each of the CDNs and most often that means purging content from the cache. Fusion allows you to connect to multiple supported CDN providers (a lot of them) and purge content from all of them from one interface inside Cedexis.

While this is a great feature, I have to add that you shouldn't be using it. Purging content from a CDN is very Y2K and you should be using versioned resources with far-future expiry headers to get the best performance out of your sites and so you never have to purge content from a CDN ever again.

Data Feeds

This is the really great part. Fusion lets you import data from basically anywhere to use in your OpenMix decision making process. Built in, you will find connections to various CDN and monitoring services, but you can also work with Cedexis to setup custom Fusion integrations so the sky’s the limit.

With Radar and Sonar, we have very good data on performance and availability (time and quality) both from the real user perspective and a supplemental synthetic perspective. To really optimize our traffic we need to account for all three corners of the Time, Cost, Quality triangle.

With Fusion, we can introduce cost as a factor in our decisions. Consider a company using multiple CDN providers, each with a minimum monthly commitment of traffic. If we directed traffic based on performance alone, we might not meet the monthly commitment on one provider and still have to pay for traffic we didn't actually send. Fusion provides usage statistics for each CDN and allows OpenMix to divert traffic so that we optimize our spending.
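
To make the cost idea concrete, here is an illustrative sketch (not Cedexis code) that biases traffic toward whichever provider is furthest behind a pro-rated share of its monthly commit, using the kind of usage numbers a Fusion feed supplies:

```python
# Illustrative only: bias traffic toward the provider furthest behind its
# monthly commit, using the sort of usage numbers a Fusion feed provides.
def commit_weights(usage_tb, commit_tb, month_fraction_elapsed):
    """Return per-provider weights proportional to how far each one lags
    the pro-rated share of its commit (1.0 = neutral)."""
    weights = {}
    for provider, commit in commit_tb.items():
        expected = commit * month_fraction_elapsed            # where usage "should" be by now
        shortfall = max(expected - usage_tb.get(provider, 0.0), 0.0)
        weights[provider] = 1.0 + (shortfall / commit if commit else 0.0)
    return weights

print(commit_weights(
    usage_tb={"cdn_a": 120.0, "cdn_b": 40.0},
    commit_tb={"cdn_a": 200.0, "cdn_b": 200.0},
    month_fraction_elapsed=0.5,
))  # cdn_b is lagging its commit, so it gets the higher weight
```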

Looking Forward

With all the logic we can build into our infrastructure using Cedexis, it could almost be a fire and forget solution. That would, however, be a huge waste. The Internet is always evolving. Providers come and go. Bandwidth changes hands.

Cedexis reports provide operational intelligence on alternative providers without any of the hassle involved in a POC. Just plot the performance of the provider you're interested in against the performance of your current providers and make an informed decision to further improve your services. When something better comes along, you'll know it.

The Nitty Gritty

Keep an eye out for the next article where I’ll do a step by step walk-through on setting up a Multi-Cloud solution using Cedexis. I’ll cover almost everything mentioned here, including Private and Community Radar, Sonar, Standard and Custom OpenMix Applications, and Cedexis Reports.

Virtual Block Storage Crashed Your Cloud Again :(


You know it’s bad when you start writing an incident report with the words “The first 12 hours.” You know you need a stiff drink, possibly a career change, when you follow that up with phrases like “this was going to be a lengthy outage…”, “the next 48 hours…”, and “as much as 3 days”.

That's what happened to huge companies like NetFlix, Heroku, Reddit, Hootsuite, Foursquare, Quora, and Imgur the week of April 21, 2011. Amazon AWS went down for over 80 hours, leaving them and others up a creek without a paddle. The root cause of this cloud-tastrify echoed loud and clear.

Heroku said:

The biggest problem was our use of EBS drives, AWS’s persistent block storage solution… Block storage is not a cloud-friendly technology. EC2, S3, and other AWS services have grown much more stable, reliable, and performant over the four years we’ve been using them. EBS, unfortunately, has not improved much, and in fact has possibly gotten worse. Amazon employs some of the best infrastructure engineers in the world: if they can’t make it work, then probably no one can.

Reddit said:

Amazon had a failure of their EBS system, which is a data storage product they offer, at around 1:15am PDT. This may sound familiar, because it was the same type of failure that took us down a month ago. This time however the failure was more widespread and affected a much larger portion of our servers

While most companies made heartfelt resolutions to get off of EBS, NetFlix was clear to point out that they never trusted EBS to begin with:

When we re-designed for the cloud this Amazon failure was exactly the sort of issue that we wanted to be resilient to. Our architecture avoids using EBS as our main data storage service…

Fool me once…

As Reddit mentioned in their postmortem, AWS had similar EBS problems twice before on a smaller scale in March. After an additional 80+ hours of downtime, you would expect companies to learn their lesson, but the fact is that these same outages continue to plague clouds using various types of virtual block storage.

In July 2012, AWS experienced a power failure which resulted in a huge number of possibly inconsistent EBS volumes and an overloaded control plane. Some customers experienced almost 24 hours of downtime.

Heroku, under the gun again, said:

Approximately 30% of our EC2 instances, which were responsible for running applications, databases and supporting infrastructure (including some components specific to the Bamboo stack), went offline…
A large number of EBS volumes, which stored data for Heroku Postgres services, went offline and their data was potentially corrupted…
20% of production databases experienced up to 7 hours of downtime. A further 8% experienced an additional 10 hours of downtime (up to 17 hours total). Some Beta and shared databases were offline for a further 6 hours (up to 23 hours total).

AppHarbor had similar problems:

EC2 instances and EBS volumes were unavailable and some EBS volumes became corrupted…
Unfortunately, many instances were restored without associated EBS volumes required for correct operation. When volumes did become available they would often take a long time to properly attach to EC2 instances or refuse to attach altogether. Other EBS volumes became available in a corrupted state and had to be checked for errors before they could be used.
…a software bug prevented this fail-over from happening for a small subset of multi-az RDS instances. Some AppHarbor MySQL databases were located on an RDS instance affected by this problem.

The saga continued for AWS, which had more problems with EBS later in 2012. They detail, ad nauseam, how a small DNS misconfiguration triggered a memory leak which caused a silent cascading failure of all the EBS servers. As usual, the EBS failures impacted API access and RDS services. Yet again, Multi-AZ RDS instances didn't fail over automatically.

Who’s using Virtual Block Storage?

Amazon EBS is just one very common example of Virtual Block Storage and by no means the only one to fail miserably.

Azure stores the block devices for all their compute nodes as Blobs in their premium or standard storage services. Back in November, a bad update to the storage service sent some of their storage endpoints into infinite loops, denying access to many of these virtual hard disks. The bad software was deployed globally and caused more than 10 hours of downtime across 12 data centers. According to the post, some customers were still being affected as much as three days later.

HP Cloud provides virtual block storage based on OpenStack Cinder. See related incident reports here, here, here, here, here. I could keep going back in time, but I think you get the point.

Also based on Cinder, Rackspace offers their Cloud Block Storage product. Their solution has some proprietary component they call Lunr, as detailed in this Hacker News thread, so you can hope that Lunr is more reliable than other implementations. Still, Rackspace had major capacity issues spanning over two weeks back in May of last year and I shudder to think what would have happened if anything had gone wrong while capacity was low.

Storage issues are so common and take so long to recover from in OpenStack deployments, that companies are deploying alternate cloud platforms as a workaround while their OpenStack clouds are down.

What clouds won’t ruin your SLA?

Rackspace doesn’t force you to use their Cloud Block Storage, at least not yet, so unless they are drinking their own kool-aid in ways they shouldn’t be, you are hopefully safe there.

Digital Ocean also relies on local block storage by design. They are apparently considering other options but want to avoid an EBS-like solution for the reasons I've mentioned. While their local storage isn't putting you at risk of a cascading failure, they have been reported to leak your data to other customers if you don't destroy your machines carefully. They also have other fairly frequent issues which take them out of the running for me.

The winning horse

As usual, Joyent shines through on this. For many reasons, the SmartDataCenter platform, behind both their public cloud and open source private cloud solutions, supports only local block storage. For centralized storage, you can use NFS or CIFS if you really need to but you will not find virtual block storage or even SAN support.

Joyent gets some flack for this opinionated architecture, occasionally even from me, but they don’t corrupt my data or crash my servers because some virtual hard disk has gone away or some software upgrade has been foolishly deployed.

With their recently released Docker and Linux Binary support, Joyent is really leading the pack with on-metal performance and availability. I definitely recommend hitching your wagon to their horse.

The Nooooooooooooooo! button

If it’s too late and you’re only finding this article post cloud-tastrify, I refer you to the ever amusing Nooooooooooooooo! button for some comic relief.

Ynet on AWS. Let’s hope we don’t have to test their limits.


In Israel, more than in most places, no news is good news. Ynet, one of the largest news sites in Israel, recently posted a case study (at the bottom of this article) on handling large loads by moving their notification services to AWS.

“We used EC2, Elastic Load Balancers, and EBS… Us as an enterprise, we need something stable…”

They are contradicting themselves in my opinion. EBS and Elastic Load Balancers (ELB) are the two AWS services which fail the most and fail hardest with multiple downtimes spanning multiple days each.

EBS: Conceptually flawed, prone to cascading failures

EBS, a virtual block storage service, is conceptually flawed and prone to severe cascading failures. In recent years, Amazon has improved reliability somewhat, mainly by providing such a low level of service on standard EBS that customers default to paying extra for provisioned IOPS and SSD-backed EBS volumes.

Many cloud providers avoid the problematic nature of virtual block storage entirely, preferring compute nodes based on local, direct attached storage.

ELB: Too slow to adapt, silently drops your traffic

In my experience, ELBs are too slow to adapt to spikes in traffic. About a year ago, I was called to investigate availability issues with one of our advertising services. The problems were intermittent and extremely hard to pin down. Luckily, as a B2B service, our partners noticed the problems. Our customers would have happily ignored the blank advertising space.

Suspecting some sort of capacity problem, I ran some synthetic load tests and compared the results with logs on our servers. Multiple iterations of these tests with and without ELB in the path confirmed a gruesome and silent loss of 40% of our requests when traffic via Elastic Load Balancers grew suddenly.

The Elastic Load Balancers gave us no indication that they were dropping requests and, although they would theoretically support the load once Amazon’s algorithms picked up on the new traffic, they just didn’t scale up fast enough. We wasted tons of money in bought media that couldn’t show our ads.

Amazon will prepare your ELBs for more traffic if you give them two weeks notice and they’re in a good mood but who has the luxury of knowing when a spike in traffic will come?

Recommendations

I recommend staying away from EC2, EBS, and ELB if you care about performance and availability. There are better, more reliable providers like Joyent. Rackspace without using their cloud block storage (basically the same as EBS with the same flaws) would be my second choice.

If you must use EC2, try to use load balancing AMIs from companies like Riverbed or F5 instead of ELB.

If you must use ELB, make sure you run synthetic load tests at random intervals and make sure that Amazon isn’t dropping your traffic.
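
A crude version of such a test might look like the following sketch, which fires a burst of requests through the load balancer and directly at a backend and compares success rates; the hostnames, path, and request counts are placeholders.

```python
# Sketch of a crude spot check: send a burst of requests through the load
# balancer and directly to a backend, then compare success rates.
# Hostnames, paths, and counts are placeholders.
import concurrent.futures
import urllib.request

def hit(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

def success_rate(url, n=500, concurrency=50):
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: hit(url), range(n)))
    return sum(results) / n

print("via load balancer:", success_rate("https://lb.example.com/healthcheck"))
print("direct to backend:", success_rate("https://backend-1.example.com/healthcheck"))
```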

Conclusion

In conclusion, let us hope that we have no reasons to test the limits of Ynet’s new services, and if we do, may it only be good news.

Linux and Solaris are Converging but Not the Way You Imagined


In case you haven’t been paying attention, Linux is in a mad dash to copy everything that made Solaris 10 amazing when it launched in 2005. Everyone has recognized the power of Zones, ZFS and DTrace but licensing issues and the sheer effort required to implement the technologies has made it a long process.

ZFS

ZFS is, most probably, the most advanced file system in the world. The creators of ZFS realized, before anyone else, that file systems weren’t built to handle the amounts of data that the future would bring.

Work to port ZFS to Linux began in 2008 and a stable port of ZFS from Illumos was announced in 2013. That said, even 2 years later, the latest release still hasn’t reached feature parity with ZFS on Illumos. With developers preferring to develop OpenZFS on Illumos and the licensing issues preventing OpenZFS from being distributed as part of the Linux Kernel, it seems like ZFS on Linux (ZOL) may be doomed to playing second fiddle.

DTrace

DTrace is the most advanced tool in the world for debugging and monitoring live systems. Originally designed to help troubleshoot performance and other bugs in a live Solaris kernel, it quickly became extremely useful in debugging userland programs and run times.

Oracle has been porting DTrace to Linux since at least 2011, and even though they own the original and have prioritized the most widely used features, they still haven't caught up to it.

Zones

Solaris Zones are Operating System level virtual machines. They are completely isolated from each other but all running on the same kernel so there is only one operating system in memory. Zones have great integration with ZFS, DTrace, and all the standard system monitoring tools which makes it very easy to support and manage servers with hundreds of Zones running on them. Zones also natively support a mechanism called branding which allows the kernel to provide different interfaces to the guest zone. In Oracle Solaris, this is used to support running zones from older versions of Solaris on a machine running a newer OS.

Linux containers of some type or another have been around for a while, but haven’t gotten nearly as mature as Zones. Recently, the continued failure of traditional hypervisors to provide bare metal performance in the cloud, coupled with the uptake of Docker, has finally gotten the world to realize the tremendous benefits of container based virtualization like Zones.

The current state of containers in Linux is extremely fractured with at least 5 competing projects that I know of. LXC, initially released in 2008, seems to be the favorite. Historically it had serious privilege separation issues, though it has gotten a little better if you can meet all the system requirements.

Joyent has been waiting at the finish line.

While Linux users wait and wait for mature container solutions, full OS and application visibility, and a reliable and high performance file system, Joyent has been waiting to make things a whole lot easier.

About a year ago, David Mackay showed some interest in Linux Branded Zones, work which had been abandoned in Illumos. In the spring of 2014, Joyent started work on resurrecting lx-zones and in September, they presented their work. They already have working support for 32 bit and some 64 bit Linux binaries in Linux branded SmartOS Zones. As part of the process, they are porting some of the main Linux libraries and facilities to native SmartOS which will make porting Linux code to SmartOS much easier.

The upshot of it is that you can already get ZFS, DTrace, and Linux apps inside a fully isolated, high-performance SmartOS zone. With only 9 months or so of work behind it, there are still some missing pieces to the Linux support, but, considering how long Linux has been waiting, I'm pretty sure SmartOS will reach feature parity with Linux a lot faster than Linux will reach feature parity with SmartOS.

Your Graphic Designer is Tanking Your Site!


Your graphic designer is an artist, a trained expert in aesthetics, a master at conveying messages via images and feelings via fonts. He may also be slowing your site down so much that nobody is seeing it.

Artists tend to be heavy on the quality and lighter on the practicality of what they deliver. It’s not entirely their fault. Even the most conscientious and experienced designer needs to sell his work and quality sells. Do marketing departments want to see their company advertised in 4K glory or on their mom’s 19″ LCD?

Quality isn’t worth the cost.

The reality of the Internet is that too much quality costs more than it’s worth. It costs bandwidth and it costs customers who aren’t willing to wait for heavy sites to load.

Is there a subliminal value to high quality graphics? The answer is yes but only if someone sees them.

How much do the images make a difference?

Here is a quick experiment you can do to visualize some of the improvement you could see. Go into your browser settings and disable all images. Then go back and visit your website to see how it loads without the bloat.

I just tried this on the homepage of a major news network. With images, the site was over 4MB. You don't even see most of the pictures that were loaded. Without images, the site was under 2MB (still very high, to be honest). That basically means that, according to the laws of physics and in the best case scenario, the site with images will take at least twice as long to load.
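
If you'd rather put numbers on it than eyeball it in the browser, here is a rough sketch that fetches a page, pulls out its <img> tags, and tallies their weight. It ignores images loaded via CSS or JavaScript, so treat the result as a lower bound; the URL is a placeholder.

```python
# Rough tally of how much of a page's weight comes from <img> tags.
# Ignores images pulled in by CSS/JS, so the real number is usually higher.
import re
import urllib.request
from urllib.parse import urljoin

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def image_weight(page_url):
    html = fetch(page_url)
    srcs = re.findall(rb'<img[^>]+src=["\']([^"\']+)', html)
    total = 0
    for src in srcs:
        try:
            total += len(fetch(urljoin(page_url, src.decode())))
        except OSError:
            pass  # skip broken or blocked image URLs
    return len(html), total

html_bytes, img_bytes = image_weight("https://www.example.com/")
print(f"HTML: {html_bytes/1024:.0f} KB, images: {img_bytes/1024:.0f} KB")
```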

You might say they are a huge media site. They know what they’re doing. They need the images. The sad truth is that they are wasting your bandwidth for no good reason as I’ll demonstrate shortly.

How to fix it?

I won’t tell you to get rid of all the images on your site, though if that can fit with your needs, less is always faster. You do, however, need to optimize your images for the web.

As a performance engineer, I have examined the performance of literally hundreds of websites, and the number one problem is always images that haven't been optimized. Just optimizing your images properly can cut the download time of a website in half, possibly more.

As a continuation of our experiment above I took this picture from their page and tried optimizing it to see what kind of savings I could get. Here is the original 41KB version:

Here is the optimized 15KB version:

The result is almost 1/3 the size and I don’t think you can tell the difference. Would you notice any difference if this was just another image on a website? You too could get a 50-60% performance boost, just by optimizing your images.

If you are using some type of automated deployment for your sites, there are tools which will optimize your images automatically. They are OK for basic optimizations.

To really optimize your images, you need a person with a realistic eye (graphic designers are always biased towards heavier, higher-quality images) and you need to follow these basic rules:

Use the right images for the job.

In general, properly compressed, Progressive JPEG files are the best to use but they don’t support transparency. After weighing carefully if you need a transparent image or not, use Progressive JPEG files for any image that has more colors than you can count on your hand and doesn’t require transparency. Otherwise use a PNG file.

Optimize the images.

Optimizing JPEG files

First, manually reduce the file’s quality level until you reach the setting with acceptable levels of pixelation in the sharp edges and gradient surfaces of the image.

Note that you should always start optimizing from the original image at its best quality. Repeatedly editing a JPEG file will degrade the quality of the image with each generation.

After you have reached the optimal quality level, run them through a tool like ImageOptim which will remove any thumbnails or metadata adding weight to the image.
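
If you prefer to script that step, here is a minimal sketch using Pillow (an assumption; any decent image library will do): re-save the original as a progressive JPEG at a chosen quality and simply don't carry the metadata across. The quality value still deserves a human eye per image.

```python
# Minimal sketch using Pillow (pip install Pillow): re-save the *original*
# image as a progressive JPEG at a chosen quality; metadata is simply not
# copied across. File names and the quality value are examples.
from PIL import Image

def save_progressive_jpeg(src_path, dst_path, quality=70):
    img = Image.open(src_path)
    if img.mode != "RGB":                 # JPEG has no alpha channel
        img = img.convert("RGB")
    img.save(dst_path, "JPEG",
             quality=quality,             # tune per image, starting from the original
             progressive=True,
             optimize=True)               # extra entropy-coding pass

save_progressive_jpeg("hero-original.png", "hero.jpg", quality=70)
```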

Optimizing PNG files

Optimize PNG files first by running them through a tool like ImageAlpha or TinyPNG. These tools use lossy compression techniques to reduce the number of colors in your image, fitting it into a smaller palette and resulting in better compression than the PNG would normally achieve.

Note: ImageAlpha gives you more control over the process letting you decide how many colors should be used in the resulting image. This is very useful for transparent PNGs with very few colors.

Then run the images through ImageOptim or a similar tool to squeeze some extra space out of them.
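
Roughly the same color-reduction step can be sketched with Pillow's quantizer, though the dedicated tools generally do a better job:

```python
# Sketch of the PNG step with Pillow: quantize to a small palette (ImageAlpha/
# TinyPNG do this with smarter algorithms), then save with optimization.
# File names and the color count are examples.
from PIL import Image

def quantize_png(src_path, dst_path, colors=64):
    img = Image.open(src_path).convert("RGBA")   # keep transparency
    # Recent Pillow versions automatically pick an RGBA-capable quantizer here.
    reduced = img.quantize(colors=colors)        # lossy: fewer colors, smaller palette
    reduced.save(dst_path, "PNG", optimize=True)

quantize_png("logo-original.png", "logo.png", colors=64)
```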

Once you have mastered the above, your site should be much lighter. If you want to optimize further (and you should), look into the following techniques:

Don’t use images where you don’t have to.

Many of the most common icons used on websites have been implemented as fonts. Once a font is loaded, each icon is a single character and making the icon larger or smaller is as simple as using a larger font size.

Note: There is some overhead in loading and using the fonts (additional CSS) so use wisely.

Combine small images.

Similar to the idea of using fonts is the CSS sprite. These combine multiple small images into a single transparent PNG file and then show only one image at a time on your site using CSS tricks.

The advantage of this technique is that it saves requests to the web server. Each request to a web server has latency, sometimes as much as 200ms, which can’t be eliminated and for small images, this request latency can sometimes account for more time than downloading the image itself. By combining the smaller images in your site, you save that overhead.

There are tools which will generate the combined images and the corresponding CSS for you.
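
As a rough idea of what those tools do, here is a hedged Pillow sketch that stacks a few icons vertically into one PNG and prints matching CSS rules; the file names and class names are placeholders.

```python
# Rough sketch of sprite generation with Pillow: stack icons vertically into
# one PNG and emit matching CSS. File names and class names are placeholders.
from PIL import Image

def build_sprite(icon_paths, sprite_path="sprite.png"):
    icons = [Image.open(p).convert("RGBA") for p in icon_paths]
    width = max(i.width for i in icons)
    height = sum(i.height for i in icons)
    sheet = Image.new("RGBA", (width, height), (0, 0, 0, 0))

    css, y = [], 0
    for path, icon in zip(icon_paths, icons):
        sheet.paste(icon, (0, y))                        # place icon at current offset
        name = path.rsplit("/", 1)[-1].split(".")[0]
        css.append(
            f".icon-{name} {{ background: url({sprite_path}) 0 -{y}px no-repeat;"
            f" width: {icon.width}px; height: {icon.height}px; }}"
        )
        y += icon.height
    sheet.save(sprite_path, "PNG", optimize=True)
    return "\n".join(css)

print(build_sprite(["icons/search.png", "icons/cart.png", "icons/user.png"]))
```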

Summary

I’ve used these image optimization techniques in hundreds of websites resulting in significant savings of bandwidth and increased performance. Most importantly, though, these techniques result in better visitor engagement.

If you’re interested in optimizing your site for best cost/performance, feel free to contact me via LinkedIn or via https://donatemyfee.org/.