Google Goes Out With “A Glitch”

Your Google “power” went out yesterday. I didn’t even notice, since I took a mid-morning siesta with my staff and watched Jack Ryan: Shadow Recruit (highly recommended). But even if you were stuck for only a matter of minutes without Google’s services, just be glad you’re not this guy. And apparently he wasn’t the only one.

Google explained the outage:

[A]n internal system that generates configurations—essentially, information that tells other systems how to behave—encountered a software bug and generated an incorrect configuration. The incorrect configuration was sent to live services over the next 15 minutes, caused users’ requests for their data to be ignored, and those services, in turn, generated errors.

Ho-ly . . . I think we all know where this is going:

Fortunately, the glitch only lasted about half an hour for most users (closer to an hour for some). Obviously, Google has a much better fix-it time than the US government.

If you’re wondering how significant the outage was, the last time this happened, global web traffic shrank by 40%.

Maybe it’s time for those tin foil hats . . .

Google Too Big to Fail

Here’s a repost of Google’s full explanation:

Earlier today, most Google users who use logged-in services like Gmail, Google+, Calendar and Documents found they were unable to access those services for approximately 25 minutes. For about 10 percent of users, the problem persisted for as much as 30 minutes longer. Whether the effect was brief or lasted the better part of an hour, please accept our apologies—we strive to make all of Google’s services available and fast for you, all the time, and we missed the mark today.

The issue has been resolved, and we’re now focused on correcting the bug that caused the outage, as well as putting more checks and monitors in place to ensure that this kind of problem doesn’t happen again. If you’re interested in the technical explanation for what occurred and how it was fixed, read on.

At 10:55 a.m. PST this morning, an internal system that generates configurations—essentially, information that tells other systems how to behave—encountered a software bug and generated an incorrect configuration. The incorrect configuration was sent to live services over the next 15 minutes, caused users’ requests for their data to be ignored, and those services, in turn, generated errors. Users began seeing these errors on affected services at 11:02 a.m., and at that time our internal monitoring alerted Google’s Site Reliability Team. Engineers were still debugging 12 minutes later when the same system, having automatically cleared the original error, generated a new correct configuration at 11:14 a.m. and began sending it; errors subsided rapidly starting at this time. By 11:30 a.m. the correct configuration was live everywhere and almost all users’ service was restored.

With services once again working normally, our work is now focused on (a) removing the source of failure that caused today’s outage, and (b) speeding up recovery when a problem does occur. We’ll be taking the following steps in the next few days:

  1. Correcting the bug in the configuration generator to prevent recurrence, and auditing all other critical configuration generation systems to ensure they do not contain a similar bug.
  2. Adding additional input validation checks for configurations, so that a bad configuration generated in the future will not result in service disruption.
  3. Adding additional targeted monitoring to more quickly detect and diagnose the cause of service failure.

Posted by Ben Treynor, VP Engineering
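For the curious non-engineers out there, here’s a rough sketch of what Google’s promised “input validation checks for configurations” could look like. This is my own illustration, not Google’s actual code, and the config keys (request_timeout_ms, serve_user_data) and the validate/push functions are made up for the example. The idea is simple: sanity-check a generated configuration before it ever reaches live services, and fall back to the last known-good config if it looks wrong.

```python
# A minimal sketch (my illustration, not Google's system) of validating a
# generated configuration before pushing it to live services.

# The last configuration that was validated and served successfully.
LAST_KNOWN_GOOD = {"request_timeout_ms": 2000, "serve_user_data": True}

# Keys every generated config must contain, and the type each must have.
REQUIRED_KEYS = {
    "request_timeout_ms": int,   # must be a positive integer
    "serve_user_data": bool,     # must be a boolean
}

def validate_config(config: dict) -> bool:
    """Return True only if the generated config looks sane."""
    for key, expected_type in REQUIRED_KEYS.items():
        if key not in config or not isinstance(config[key], expected_type):
            return False
    if config["request_timeout_ms"] <= 0:
        return False
    return True

def push_config(config: dict) -> dict:
    """Push a validated config, or keep serving the last known-good one."""
    if validate_config(config):
        return config            # safe to send to live services
    return LAST_KNOWN_GOOD       # reject the bad config instead of causing an outage

# Example: the generator hits a bug and emits a config that would tell
# services to ignore users' data requests -- validation catches it.
bad_config = {"request_timeout_ms": 2000, "serve_user_data": None}
print(push_config(bad_config))   # falls back to the last known-good config
```

In other words, the fix isn’t just patching the bug in the generator; it’s making sure that the next bad configuration, whatever causes it, gets rejected before users ever notice.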

