Almost exactly a year ago today, DreamHost experienced its last unplanned power outage.
Last ever so far! Who knows what the future holds? (Besides me.)
Who here is glad DreamHost is in sunny, safe, earthquake-, mudslide-, forest fire-, riot-, and tsunami-free Los Angeles now? And who here is publicly enjoying that 365 Main is not?
Here’s a big hint: he’s really good looking and wrote this post.
Of course, the real reason we had no problems is not because our data center is finally super reliable, or that Los Angeles itself never has so much as a cloudy day, or even that we’re just lucky.
Of course, that’s not really true either. I’m not in Chicago; as everyone knows, I’m a compulsive liar. In fact, this statement is a lie.
But, even if I were at hosting con (and everybody knows we don’t go to hosting conventions), my ability to break DreamHost systems knows no boundary of time or space, and strikes at any time, usually without warning and definitely without mercy.
Why were we spared this time?
The honest truth is that any data center can, at any time and for any reason, no matter what precautions they take, have an outage! You’d think making a reliable data center would be a lot easier than making a reliable software service, seeing as how it’s all just power cables, air conditioning, and gasoline.
And yet somehow, it seems like all even the best and most expensive data centers can do is make the outages a little less frequent.
What IS a poor host to do?
I mean, the only way you can really achieve “five nines” uptime is by having an entire architecture designed around the assumption that ANYTHING can fail… and at the worst possible time. Duh.
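For reference, here’s what those “nines” actually buy you. This is just a sketch of the arithmetic, not anybody’s SLA: each extra nine cuts the allowed downtime per year by a factor of ten.

```python
# Annual downtime budget at each uptime level ("the nines").
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for uptime in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = (1 - uptime) * MINUTES_PER_YEAR
    print(f"{uptime:.3%} uptime -> {downtime_minutes:,.1f} minutes of downtime/year")
```

So “five nines” works out to about 5.3 minutes of downtime per year, while plain old “two nines” allows roughly three and a half days.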
However, like most Las Vegas escorts, that sort of redundancy does not come easily. Or cheap. And the truth of the matter is unless you’re Google, most likely an entire day of downtime once a year is not going to cost you as much as it would to truly prevent it.
In fact, I wish there were some low-reliability data centers out there! I bet if somebody made an ultra low-cost data center, one that provided “adequate” cooling, network, and power capacity, but no UPS, fire-suppression, generators, crazy physical security, or extra earthquake protection, they would clean UP.
They could probably charge about half what any data center I’ve ever seen charges, and I bet with only twice the downtime… and that would be very appealing.
I mean, think about it… how many of you could deal with an extra day of downtime per year for half the price? Heck, you’d probably be fine with FOUR days of downtime a year if it meant 75% off… but would you pay double to save 12 hours of downtime a year? Would you pay FOUR times as much to save 18? Eight times as much to save 21?
That’s pretty much how it works, and I’m guessing not a lot of you would.
Of course, maybe I’m over-estimating the cost savings of skimping on redundancy in a data center a little, and maybe I’m under-estimating the reliability hit a tiny bit. On the other hand, my blog posts have never been wrong before.
AND, if somebody did come out with a “Crap-of-the-Art” data center, it’d make it a lot more feasible for those who really need reliability to get two; thereby keeping all their company’s eggs out of one risqué basket.
In fact, what we’ve been doing over the last year is breaking our system down into smaller and smaller isolated “clusters,” and distributing them between three data centers (all in LA). The idea being: data centers will go down… let’s at least try to keep the eggs in our other baskets un-scrambled. And since we’re not really counting on much reliability from them anyway, it sure would be nice if those data centers all charged a lot less!
Of course our network still has a single (though redundant) point of failure, but we are working towards eventually making each data center a complete stand-alone “node”… some day.
This day, however, I think I’ll just go to bed… while taking pleasure in the fact that it was somebody else this time!