From bringing people together at the World Cup, to improving the way employees talk to each other, Google Cloud Platform Services Partners help customers unlock the full potential of our products.
To help our partners focus more on their customers’ experiences, we are pleased to announce that we’re now accepting applications for a reselling option from eligible, existing Google Cloud Platform services partners. We anticipate expanding to new partner program applicants in early fall.

As a reseller of Cloud Platform, partners will be able to provision and manage their customers via the new Cloud Platform reseller console. Google Cloud Platform resellers will:
  • Fully manage their customers’ Google Cloud Platform experience, from onboarding through implementation
  • Provide the first line of support and be responsible for customer problem resolution
  • Provide customers with a billing service that matches their specific requirements and in local currency

The ability to resell will be especially beneficial to partners aiming to bundle multiple Cloud Platform services and present one consolidated bill to their customers.

“The reseller console showcases deep insights into our customers' engagement with the platform, allowing us to make informed recommendations in terms of best practices and opportunities available to our customers. As a trusted solutions partner, it's paramount for us to provide white glove services to make their transition to the cloud as seamless as possible.”

-- Tony Safoian, Sada Systems CEO

If you’re an existing services partner and want to learn more about your organization's eligibility for reselling, visit our application page on Google for Work Connect. And if you’re new to Google Cloud Platform and interested in becoming a services partner, visit our site.

- Posted by Adam Massey - Director, Global Partner Business

While containers make packaging apps easier, a powerful cluster manager and orchestration system is necessary to bring your workloads to production.  Today, Google Container Engine is generally available and production ready, backed by Google’s 99.5% service level agreement.  Container Engine makes it easy for you to set up a container cluster and manage your application, without sacrificing infrastructure flexibility.  Try it today.

Set Up a Managed Container Cluster in a Few Clicks
With Container Engine, you can create a managed cluster that’s ready for container deployment, in just a few clicks. Container Engine is fully managed by Google reliability engineers, so you don’t have to worry about cluster availability or software updates.

Container Engine also makes application management easier.  Your cluster is equipped with common capabilities, such as logging and container health checking, to give you insight into how your application is running.  And, as your application’s needs change, resizing your cluster with more CPU or memory is easy.

“We chose Kubernetes to get the most out of our application infrastructure, and we chose to move to Google Container Engine from another cloud provider to get the most out of Kubernetes. Our infrastructure on Container Engine runs at about 40% of the cost of its original deployment on the other cloud provider, and Google’s sustained use discounts and per minute pricing have led to further cost savings.”

-- Jay Allen, Porch CTO

Declarative Container Scheduling and Management
Many applications take advantage of multiple containers; for example, a web application might have separate containers for the webserver, cache, and database.  Container Engine is powered by Kubernetes, the open source orchestration system, making it easy for your containers to work together as a single system.

Container Engine schedules your containers into your cluster and manages them automatically, based on requirements that you declare.  Simply define your containers’ needs, such as the amount of CPU/memory to reserve, number of replicas, and keepalive policy, and Container Engine will actively ensure requirements are met.
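
As a concrete sketch of that declarative model, a Kubernetes manifest of the kind Container Engine consumes declares the replica count, the resources to reserve, and a health-check policy, and the system works to keep that state true. All names and values below are illustrative, not from the post:

```yaml
# Illustrative Kubernetes v1 manifest; names and values are made up.
apiVersion: v1
kind: ReplicationController
metadata:
  name: webserver
spec:
  replicas: 3                     # desired number of replicas
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx
        resources:
          requests:
            cpu: 100m             # CPU to reserve
            memory: 128Mi         # memory to reserve
        livenessProbe:            # health/keepalive policy
          httpGet:
            path: /
            port: 80
```

If a container fails its liveness probe or a node goes away, the scheduler replaces the replica to restore the declared state.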

“The declarative nature of Kubernetes has proven to be very powerful in streamlining and simplifying spinning up application components, which include Django, Geoserver, ELK stack, Redis, PostGIS, and then GeoMesa interfacing with our Google Cloud Bigtable instance.”

-- Tim Kelton, co-founder, Descartes Labs

Cloud Flexibility with Kubernetes
Most customers live in a multi-cloud world, using both on-premises and public cloud infrastructures to host their applications.  With Red Hat, Microsoft, IBM, Mirantis OpenStack, and VMware -- and the list keeps growing -- integrating Kubernetes into their platforms, you’ll be able to move workloads, or take advantage of multiple cloud providers, more easily.  Container Engine and Kubernetes provide you with flexibility, whether you use on-premises, hybrid, or public cloud infrastructure.

“When we implemented our new, microservice-based container architecture, we chose Kubernetes because we needed a single, simple, standardized runtime platform that we could easily and quickly deploy across multiple environments. We use Container Engine in conjunction with our own infrastructure (powered by Mirantis OpenStack) and other public clouds to diversify our infrastructure risk.”

-- Lachlan Evenson, Lithium Cloud Platform Engineering

Ready for Production
Everything at Google, from Search to Gmail, is packaged and run in a Linux container. Each week we launch more than 2 billion container instances across our global data centers.  Container Engine represents the best of our experience with containers and we’re excited for you to give it a spin.  Get started with Container Engine today.

As a very small token of thanks for your support, we’re giving away 1,000 Container Engine t-shirts.  Simply be one of the first 1,000 people to tweet @googlecloud with the hashtag #imakubernaut about why you love Container Engine or Kubernetes and get a t-shirt (some conditions apply).

- Posted by Craig McLuckie, Product Manager

We recently announced that Google Cloud Dataflow and Google Cloud Pub/Sub graduated to general availability. You can now pair these easy-to-use, inexpensive, fully managed, large-scale big data services with Google BigQuery to find valuable business information and insights.

BigQuery is a No-Ops analytics database that seamlessly scales in seconds, requires no instance or cluster management, offers unbeatable performance out-of-the-box, and lets you pay only for what you consume. Today, we’re releasing a new version of BigQuery that is easier to use, more powerful and more open.

With new features such as User-Defined Functions (UDFs) and an improved user interface (UI), BigQuery is now simpler and easier to use.
  • User-Defined Functions (UDFs). Expressed in JavaScript, UDFs allow you to extend SQL and execute arbitrary code within BigQuery. For example, you can now easily express complex conditional logic in your queries, with far more flexibility than regular expressions provide. Head over to our documentation to learn more.
  • Query files in Google Cloud Storage from BigQuery. It is now possible to run queries without loading files into BigQuery first. This functionality also simplifies data import into BigQuery.  In addition to the existing straight “import” mechanism, you can now write queries which read from Cloud Storage files and write the results to BigQuery tables. Federated query documentation offers more details.
  • Increased query limits. You will now be able to run 50 simultaneous queries, and 100,000 queries per day (up from 20 and 20,000). In addition, there will no longer be limits on “maximum simultaneous bytes processed” and “maximum simultaneous large queries”. These changes give you more freedom within the BigQuery ecosystem.
  • UI Improvements. We’ve added several new features, including a new “Format Query” button, automatic organization of date-sharded tables, and the ability to download query results in JSON.
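
As a sketch of the UDF feature in the first bullet above: BigQuery's JavaScript UDF model passes each input row to a function that emits zero or more output rows, so conditional logic that is awkward in SQL or regular expressions becomes plain JavaScript. The function name, fields, and thresholds here are invented for illustration:

```javascript
// Hypothetical UDF body; classifyLatency, latency_ms, and tier are made up.
// BigQuery calls the function once per row; results are returned via emit().
function classifyLatency(row, emit) {
  var tier;
  if (row.latency_ms < 100) {
    tier = 'fast';
  } else if (row.latency_ms < 500) {
    tier = 'ok';
  } else {
    tier = 'slow';
  }
  emit({url: row.url, tier: tier});
}
```

The transform itself is ordinary JavaScript, so it can be unit-tested outside BigQuery before being registered with a query.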

We also wanted to make BigQuery more powerful and performant to help you save time and increase productivity.
  • Dynamic query optimization. Improves reliability and performance for complex queries such as large JOIN or GROUP BY operations.  You can expect to see your project activated in the coming weeks.  Users will no longer need to specify the EACH keyword, which greatly simplifies the writing of queries, particularly for applications that programmatically generate SQL such as visualization tools and dashboards.
  • Enhancements to the query execution engine will result in increased performance and scale of queries that use lots of resources, such as large JOINs, analytic functions, and high-cardinality aggregations.

And lastly, we added new features to BigQuery to be more open.
  • BigQuery Slots. One unique feature of BigQuery is the ability to dip into the vast shared pool of resources to scale into thousands of cores for a query. BigQuery Slots offer customers the ability to expand and allot the resources available to them, regardless of system load. Use cases include latency-sensitive SaaS, ETL, and business reporting workloads.
  • High-Compute Pricing Tiers. With release of UDFs, Dynamic query optimization, and execution engine improvements, BigQuery now supports queries that consume large amounts of compute resources relative to “bytes scanned”. To enable this higher resource consumption, we are introducing High Compute Pricing Tiers. For more information, head over to our pricing page.

BigQuery is fully managed by Google, so customers automatically get all these benefits right away. Make use of the better UI, better performance, and additional functionality - no action needed, and no downtime. Solve your big data problems the way we solve ours!

To learn how BigQuery can help you, take a look at the documentation and try it out! The first terabyte processed is on us!

- Posted by Tino Tereshko, Technical Program Manager

Do you have backup tapes sitting in a local closet or a third-party storage facility? If so, I have some good news: that data no longer has to sit on a shelf and collect dust.

To help you stay competitive and make it easy to import your old data backups to the cloud, we’re introducing Offline Media Import/Export. This is a solution that allows you to load data into any Google Cloud Storage class (Standard, DRA and Nearline) by sending your physical media -- such as hard disk drives (HDDs), tapes, and USB flash drives -- to a third party service provider who uploads data on your behalf. Offline Media Import/Export is helpful if you’re limited to a slow, unreliable, or expensive Internet connection. It’s also a great complement to the newly released Google Cloud Storage Nearline, a simple, low-cost, fast-response storage service with quick data backup, retrieval and access.

Offline Media Import/Export is fast, simple and can include a chain-of-custody process.
It’s faster than doing it yourself: Popular business DSL plans feature download speeds that exceed 10Mbps (megabits per second). However, upload speeds generally top out at 1Mbps, with most plans providing just 768kbps (kilobits per second) for upload. This means that uploading a single terabyte (TB) of data will take more than 100 days! This also assumes that no one else is using the same network connection. With Offline Media Import/Export, this process can now be completed in days instead of months.
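
A quick back-of-envelope check of that arithmetic, assuming an ideal, uncontended link with no protocol overhead:

```javascript
// Uploading 1 TB over a 768 kbps uplink, ideal conditions.
const bytes = 1e12;            // 1 TB (decimal)
const bits = bytes * 8;
const uploadBps = 768e3;       // 768 kbps
const seconds = bits / uploadBps;
const days = seconds / 86400;  // roughly 120 days, i.e. "more than 100 days"
```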

It’s simple: Save and encrypt your data to the media of your choice (hard drives, tapes, etc.) and ship them to the third party service provider through your preferred courier service.

It’s protected: The encrypted data will be uploaded to Google Cloud Storage using high-speed infrastructure. Third party service providers like Iron Mountain can offer a chain-of-custody process for your data. Once data upload is complete, Iron Mountain can send the hard drive back to you, store it within their vault, or destroy it.

Get Started!
More information can be found on the “Offline Media Import / Export” webpage.

- Posted by Ben Chong, Product Manager

Networking. It’s one of the most critical elements of a datacenter, connecting machines, applications and locations to one another to transfer information, data and documents. It’s what enables your mobile device to provide you with access to your email, to send messages to your friends, to post photos to social networks and to check in at the places you visit.

And yet the only time you think about it is when it’s not there!

Amin Vahdat, Google’s Technical Fellow for Networking, today posted details on the Google Research blog about the investments Google has made in networking in order to deliver on our stated mission to organize the world’s information and make it universally accessible. As Amin states, ten years ago we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements.

So we built our own!

To date, we have built and deployed five generations of our datacenter network infrastructure. Our latest-generation Jupiter network has improved capacity by more than 100x relative to our first generation network, delivering more than 1 petabit/sec of total bisection bandwidth. To put this in perspective, this provides capacity for 100,000 servers to exchange information at 10Gb/s each, enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.
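
The arithmetic behind that figure is straightforward:

```javascript
// 100,000 servers exchanging data at 10 Gb/s each saturate
// 1 petabit/sec of bisection bandwidth.
const servers = 100000;
const perServerBps = 10e9;   // 10 Gb/s per server
const totalBps = servers * perServerBps;  // 1e15 b/s = 1 Pb/s
```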

Here is a look at the hardware innovations over the years:
Firehose (2005-2006)
  • Chassis based solution (but no backplane)
  • Bulky CX4 copper cables restrict scale
WatchTower (2008)
  • Chassis with backplane
  • Fiber (10G) in all stages
  • Scale to 82 Tbps fabric
  • Global deployment
Saturn (2009)
  • 288x10G port chassis
  • Enables 10G to hosts
  • Scales to 207 Tbps fabric
  • Reuse in WAN

Jupiter (2012)
  • Enables 40G to hosts
  • External control servers
  • OpenFlow

All of this innovation in networking is available to you as a customer of Google Cloud Platform through Cloud Networking. Cloud Networking provides three key capabilities to our customers:
  • Cloud Interconnect - connect your datacenter to ours through an encrypted VPN, Direct Peering, or via your carrier.
  • Load Balancing - spread load across applications with HTTP/S or across machines with TCP/UDP.
  • Cloud DNS - reliable, low-latency DNS serving from Google's worldwide network of Anycast DNS servers.

You can learn more about Cloud Networking in the documentation. You can also dive into the technical details on five generations of our in-house datacenter network architecture by downloading the whitepaper.

- Posted by Adam Hall, Head of Technical Product Marketing, Cloud Platform

In the world of eCommerce, website speed and efficiency matter – a lot.

Recent research [1] shows that a 1-second delay in page load time equates to a 7% loss in conversions, and 81% of shoppers will actually pay more for a product in exchange for a better, more efficient shopping experience [2]. This level of speed and performance needs to be delivered for every customer, every time, no matter how many shoppers are visiting a site at the same time. Consistently meeting these goals requires a dynamic and responsive architecture.

At Lagrange Systems, we deliver architectures that accelerate and provide reliability for platforms, including eCommerce, through our software ADC solution, CloudMaestro. To achieve success, it is imperative that the underlying cloud-based resources be as responsive as our software. When we heard about Google Compute Engine, we were intrigued and set up an experiment comparing the leading cloud providers apples-to-apples on relative price and performance.

The experiment went like this:
  • We set up and configured a simple eCommerce store utilizing the open source performance toolkit recently made available by our friends at Magento
  • Using their best practice guide, we defined three architectures utilizing two, four, and six application servers each, integrated with CloudMaestro
  • In each cloud provider, we selected similarly spec'd compute resources and collected the cost per resource.  

Example Architecture with 6 application servers

Our goal: 90% of page load times under 2 seconds
Our goal was to find the maximum number of users per hour that each architecture could support, driving load with the performance toolkit, before page response times became unacceptable (90% of page load times under 2 seconds). We set out to measure throughput as well as the cost to support this peak level of performance.
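
A generic sketch of that pass/fail criterion (not Lagrange's actual tooling; a simple nearest-rank percentile over sampled load times):

```javascript
// Does the 90th percentile of sampled page-load times beat the target?
function meetsP90Target(samplesMs, targetMs) {
  var sorted = samplesMs.slice().sort(function (a, b) { return a - b; });
  var idx = Math.ceil(0.9 * sorted.length) - 1;  // nearest-rank p90 index
  return sorted[idx] < targetMs;
}
```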

Even though the architectures were a fixed size, our solution is dynamic and responsive to the number of customers and the load they create on the system, so we also measured the time it took for the infrastructure to spin up new compute resources and to verify those resources are available and responding to requests.  

Compute Engine delivered better performance and lower cost than the average of other tested top cloud providers
Compute Engine showed up in a major way and, when managed by CloudMaestro, delivered a higher level of performance AND a lower cost per user than the average (mean) of all other tested cloud providers.

Throughput Comparison (higher is better)
Cost Comparison (lower is better)
Quantifiably, Compute Engine offered up to 25% more throughput at 19% lower unit cost, meaning our customers’ dollars go further, and that Magento eCommerce sites with CloudMaestro support higher levels of traffic and more customers completing purchases.

The compute resources on Compute Engine were the quickest to come online, become network accessible, and respond to HTTP requests. Compute Engine was over 111% faster than the next quickest cloud provider and 307% faster than the mean time of all measured cloud providers.

Comparison of Compute Resource Time to Availability (lower is better)
We were pleasantly surprised by the results and grateful to learn that whenever we need to expand our resources, we can count on Compute Engine to be there delivering what we need quickly and efficiently online.

Find out how Lagrange Systems provides reliable platforms through CloudMaestro during Cloud Platform’s webinar on 8/25, where we’ll discuss why speed and performance are key for eCommerce sites.

You can also learn more at our website, or email us.

- Posted by Jason Walp, Senior Sales Engineer, and Emily Friedberg, VP of Corporate Development, both at Lagrange Systems.