At AWS Summit in New York today, Amazon announced that its S3 storage service now holds more than 2 trillion objects. That's up from 1 trillion last June and 1.3 trillion in November, when the company last updated these numbers at its re:Invent conference. As Amazon's Chief Evangelist for AWS Jeff Barr notes in a blog post today, it took Amazon six years to reach one trillion stored objects, "and less than a year to double that number." S3, Barr also writes, now regularly sees peaks of over 1.1 million requests per second.
To put this into context, Amazon notes that with about 400 billion stars in our galaxy, there are now five objects in S3 for every star in the galaxy. It's worth noting that an S3 object is defined as a single blob of data, and its size can be anywhere between 1 byte and 5 terabytes. Sadly, Amazon did not reveal any information about the size of the average S3 object.
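The objects-per-star comparison is easy to sanity-check with a quick back-of-the-envelope calculation (a Python sketch; the constants are the figures quoted in the article, and the variable names are mine):

```python
# Figures from the announcement: 2 trillion S3 objects,
# roughly 400 billion stars in the Milky Way.
objects_in_s3 = 2_000_000_000_000
stars_in_galaxy = 400_000_000_000

# Objects stored in S3 for every star in the galaxy.
objects_per_star = objects_in_s3 / stars_in_galaxy
print(objects_per_star)  # → 5.0

# The stated size range for a single S3 object: 1 byte to 5 terabytes.
min_object_bytes = 1
max_object_bytes = 5 * 10**12
```

Since Amazon didn't disclose the average object size, the total volume of data behind those 2 trillion objects can't be estimated from these numbers alone.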
Amazon once again lowered its S3 pricing earlier this month, cutting prices for API requests in half. Pricing, though, is probably only one reason for Amazon's success with S3. Despite its occasional outages, AWS has long been the go-to service for companies that want to host large amounts of data in the cloud. Its large and ever-expanding feature set has drawn in small startups as well as large and rapidly growing services like Pinterest and Dropbox. According to some analysts, AWS could become a $20 billion business by the end of the decade.
Despite efforts by other players, including Microsoft with its Azure platform, nobody has yet managed to challenge Amazon’s dominance in this space. Microsoft even recently said that it would match every AWS price drop. Microsoft says Azure currently has about 200,000 customers and is signing up about 1,000 new ones per day. (from www.techcrunch.com)