
Live by the cloud – possible service outages call for a different approach to managing IT risk

Broadband ubiquity has enabled organizations to move more and more services off to the cloud - but there are risks.
By Rob Williamson
Marketing Manager


Broadband ubiquity has enabled organizations to move more and more services to the cloud. It may be tempting to think the main benefit has been lower costs for organizational IT infrastructure, but for many the real benefit is agility. To deliver enterprise-class services to tens of thousands of users, big companies didn't need to get out the bulldozers and build new datacenters. What's more, they can benefit from best-in-class infrastructure run by dedicated vendors AND service level agreements that they likely could not meet on their own.

Moreover, turning on new online services for everything from a pilot project through to a full-scale deployment can happen with largely variable costs. Companies love variable costs because they are easy to attach to revenue, easy to turn on and off as demand warrants, and don't require expensive technology staff to operate. These benefits extend right down to the smallest businesses, which can now get high-quality software that was previously unavailable to them.

This explosion in cloud-based software has created whole new companies (salesforce.com) and entirely new players. For instance, who would have guessed that a little online bookstore (Amazon) would leverage its investment in global infrastructure and connectivity to become one of the most formidable IT companies in the world? While it was happening it seemed weird. Strange. But so easy. And cheap.

Today everything from the smallest flower shop to the largest bank relies on cloud vendors for its online presence. This ranges from "infrastructure" like processing, storage, and broadband through to applications for website building, event management, CRM and, honestly, everything else you can imagine under the sun. If you can think of something you want to do, somebody probably has a software service for it.

The problem is that this creates concentration, and when a cloud service fails it can impact tens of thousands of customers and hundreds of millions of users. It is safe to assume that technical failures at individual cloud vendors happen far less often than they do with in-house developed applications, but their impact reaches much further. Moreover, this concentration has created high-value targets for hackers.

Enterprises and small businesses alike have built a house of cards when it comes to vendors. It is no more complicated than the house of cards they had before, and almost certainly much more stable…but it is different. And because it is different, it requires different strategies to mitigate outages.

Let’s look at a few recent large-impact outages:

  • April 2016 – Google Compute Engine went down when the removal of an unused IP block triggered a bug in its network configuration
  • May 2016 – Salesforce suffered an outage when a circuit breaker failure cascaded into other problems
  • October 2016 – Dyn (DynDNS) was hit with a massive DDoS attack that knocked out DNS resolution for large parts of the Internet
  • February 2017 – Amazon Web Services had an outage in its S3 service that took many customers offline

And these are just the big, newsworthy headlines from the world's most sophisticated vendors. DDoS attacks, errors and equipment failures happen across the IT landscape. Moreover, as smaller cloud software vendors rely on larger ones, you carry risks you may not even know you have.

The lesson in all of this is that as business departments rely more and more on cloud services, they need to realize it is no longer okay to rely on an SLA with an individual vendor – especially when most organizations, and smaller ones in particular, don't even read their SLAs.

What do you do if you are, for instance, a small online retailer? Make sure you have all the code for your web properties and are ready to move your store at a moment's notice. If you are a mid-to-large organization, make sure you understand the capabilities of both internally and externally run services. Read your SLA and know what it means. Consider not keeping all services with one vendor, build redundant systems where possible, and keep backups where that isn't possible.
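For the smallest shops, being "ready to move" can start with something as simple as a scripted, timestamped snapshot of the store's code and content kept outside the primary vendor. Below is a minimal Python sketch of that idea; the directory names are placeholders, and it assumes your site's code and exported content live in a local folder that you can archive and then copy to a second provider or offline storage.

```python
import tarfile
from datetime import datetime
from pathlib import Path

# Hypothetical paths: adjust to wherever your site code and content exports live.
SITE_DIR = Path("site")        # your web property's code and content
BACKUP_DIR = Path("backups")   # ideally synced to a second provider or offline media

def snapshot(site_dir: Path, backup_dir: Path) -> Path:
    """Create a timestamped archive of the site so it can be redeployed elsewhere."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = backup_dir / f"site-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(site_dir), arcname=site_dir.name)
    return archive

if __name__ == "__main__":
    print(f"Wrote {snapshot(SITE_DIR, BACKUP_DIR)}")
```

Run on a schedule, a snapshot like this gives even a one-person shop something concrete to restore from when a vendor goes dark.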

If you are a cloud vendor, then you need to think like a truly enterprise-class vendor. At CIRA, one of our core services is making sure that "our" portion of the DNS for all .CA websites can resolve – that is, that the .CA zone can always point resolvers to the DNS servers that are authoritative for your domain. As a result, our datacenters and systems are built with redundancy on top of redundancy and, where it makes sense, we include multiple vendors – especially for the DNS. This redundancy means we never need to take DNS servers offline for maintenance, so they never appear down to the outside world. Is this necessary? Not really. But as an enterprise-grade service, with our customers' businesses relying on us, we need to be as close to perfect as we can.
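One lightweight way to see whether your own domain carries DNS concentration risk is to check whether all of its delegated nameservers sit with a single provider. The short Python sketch below uses the third-party dnspython package and a crude last-two-labels heuristic to group nameserver hosts by provider; the domain name is a placeholder and the heuristic is only illustrative, not a definitive test.

```python
import dns.resolver  # third-party: pip install dnspython (2.x)

def ns_providers(domain: str):
    """Return the domain's NS hosts and a rough set of the providers behind them."""
    answer = dns.resolver.resolve(domain, "NS")
    hosts = sorted(str(r.target).rstrip(".") for r in answer)
    # Crude heuristic: treat the last two labels of each NS host as its provider.
    providers = {".".join(h.split(".")[-2:]) for h in hosts}
    return hosts, providers

if __name__ == "__main__":
    hosts, providers = ns_providers("example.com")  # placeholder domain
    print("NS hosts:", hosts)
    if len(providers) < 2:
        print("All nameservers appear to come from a single provider - a single point of failure.")
    else:
        print("Nameservers span multiple providers:", sorted(providers))
```

If everything resolves back to one provider, adding a secondary DNS service from a second vendor is one of the cheaper forms of redundancy available.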

Related Products: D-Zone Anycast DNS – Secondary DNS service to help keep your most important web properties and services online.  

About the author
Rob Williamson

Rob brings over 20 years of experience in the technology industry, writing, presenting and blogging on subjects as varied as software development tools, silicon reverse engineering, cybersecurity and the DNS. He is an avid product marketer who takes the time to give IT professionals the information and details they need for their jobs.
