
Amazon blames human error for Tuesday's massive AWS outage


Amazon offered up an explanation for this week's major outage in its AWS service, blaming human error and promising to change its procedures to avoid a similar incident.

The outage, Amazon said, occurred while a team was trying to remove a small number of servers. An employee entered a command incorrectly, taking far more servers offline than intended and forcing a full restart of the affected systems.


Amazon said it is making a number of changes, including safeguards that should prevent too many servers from being taken offline.

Here are the key parts of Amazon's statement:

The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected. At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended. The servers that were inadvertently removed supported two other S3 subsystems. ...
While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level. This will prevent an incorrect input from triggering a similar event in the future. We are also auditing our other operational tools to ensure we have similar safety checks. We will also make changes to improve the recovery time of key S3 subsystems. ...
Finally, we want to apologize for the impact this event caused for our customers. While we are proud of our long track record of availability with Amazon S3, we know how critical this service is to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further. ...
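The safeguard Amazon describes, refusing to remove capacity when doing so would take a subsystem below its minimum required level, can be sketched roughly as follows. This is an illustrative example only; the function and parameter names are hypothetical and do not reflect Amazon's actual tooling.

```python
def safe_remove(active_servers: int, to_remove: int, min_capacity: int) -> int:
    """Return how many servers may actually be removed.

    Caps the requested removal so the subsystem never drops below
    its minimum required capacity (hypothetical sketch).
    """
    if to_remove < 0:
        raise ValueError("cannot remove a negative number of servers")
    # Only the servers above the minimum-capacity floor are removable.
    allowed = max(0, active_servers - min_capacity)
    return min(to_remove, allowed)
```

With a check like this, a mistyped input that requests too large a removal is clamped to the safe maximum rather than taking the subsystem offline.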