Racks inside a Google data center. (Courtesy of Google)

(Forbes) – The name Bigtable is not new. Google began developing its internal data storage system back in 2004 and described it in a 2006 research paper. Google’s ideas influenced a generation of developers, all searching for cheap, distributed ways to store and query data at massive scale. The United States’ National Security Agency (NSA) built Accumulo, in part using Bigtable as an inspiration. Facebook built Cassandra, Powerset built HBase, LinkedIn built Voldemort, and a range of other big names in technology developed their own solutions to problems much like the ones Bigtable was designed to address. Over the years, all of these have grown beyond the companies that developed them, becoming commercial products, open source projects, or both. All except Bigtable. Until today.

Now Google is launching the public beta for a hosted version of Bigtable, running in its cloud, backed by its engineering talent, and available to all comers: meet Google Cloud Bigtable. A decade after its ideas were fresh and new, years after some of Google’s biggest competitors launched equivalent services of their own, can Google and Bigtable still compete? Or are they too late?

Google’s new service takes the Bigtable capabilities that already underpin internal applications like Gmail, and makes them available to developers who need to run NoSQL workloads at low latency and massive scale. According to Google, the new service is fast enough to serve web-scale applications directly. Competing services don’t always offer the consistent performance to act in this way, forcing developers to insert caches or additional infrastructure to bridge the gap between operational data and back-end processing or analytics.
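To make the access model concrete, here is a minimal sketch of the kind of row-oriented read and write a Bigtable-style store exposes, written against the open-source Apache HBase 1.x client API, which Cloud Bigtable is designed to be compatible with. The table, row key, and column names are illustrative assumptions, not details from Google's announcement.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BigtableSketch {
    public static void main(String[] args) throws Exception {
        // Configuration would point at a Cloud Bigtable cluster in practice;
        // the table name "user-events" is a hypothetical example.
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("user-events"))) {

            // Write: each row is keyed by an arbitrary byte string, and values
            // live in column-family:qualifier cells.
            Put put = new Put(Bytes.toBytes("user#1234"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("last_login"),
                          Bytes.toBytes("2015-05-06T10:00:00Z"));
            table.put(put);

            // Read: single-row lookups like this are the low-latency path that
            // lets a store of this kind serve application requests directly,
            // without an intervening cache layer.
            Get get = new Get(Bytes.toBytes("user#1234"));
            Result result = table.get(get);
            byte[] value = result.getValue(Bytes.toBytes("cf"),
                                           Bytes.toBytes("last_login"));
            System.out.println(Bytes.toString(value));
        }
    }
}
```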
