Kubernetes is an increasingly key part of application deployment strategies at large organizations, and one of the options we most often recommend to the teams we work with. Born at Google and now open source, Kubernetes has emerged as a leading application deployment platform, with organizations around the world running containers on-premises, in Google Cloud, and in multi-cloud scenarios. And because it’s open source, anyone can pop the hood, so to speak, to see how each component works, making it a trusted, verifiable framework that users can rely on.
Customers begin their Kubernetes and containerization journeys using various offerings in our product portfolio. Some, like Colgate, started their modernization journey with Kubernetes. Colgate is an $18B global consumer products company with ~34,000 diverse and dedicated people serving over 200 countries and territories. Through science-led innovation, they drive growth and reimagine a healthier future for all people, their pets, and our planet.
During ideation, they talked through various considerations: What technology strengths does their organization have? What skills do their teams need? What will this initiative look like a decade from now? They ultimately implemented a Kubernetes-focused architecture.
At Colgate-Palmolive, we rely on Kubernetes and have embraced Google Kubernetes Engine (GKE) as our preferred way to manage it. Our DevOps, Architecture, and newly established Open Source groups use this industry-standard, open source platform, and find that GKE reduces the effort needed to run our workloads.
Supporting a variety of use cases and teams
Google Cloud also helped Colgate break new ground over the years, especially in the areas of cloud-native networking, security, monitoring, Pub/Sub, managed containerization, and multi-tenant environments.
Over time, Colgate began to leverage Google’s managed container portfolio, which includes Cloud Run. Cloud Run runs containers on a serverless platform, unlocking workloads such as public websites, private services, APIs, and batch jobs while eliminating much of the time spent on infrastructure management. Cloud Run also requires no prior knowledge of Kubernetes or containers.
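As a hedged illustration of how little setup Cloud Run demands, a containerized service can be deployed with a single command. The service name, image, and region below are placeholders, not Colgate’s actual configuration:

```shell
# Deploy a container image to fully managed Cloud Run.
# Service name, image, and region are illustrative placeholders.
gcloud run deploy hello-service \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --region=us-central1 \
  --allow-unauthenticated
```

Cloud Run then provisions, scales, and secures the service automatically, including scaling to zero when there is no traffic.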
When we first came to Google Cloud, our preferred languages were JavaScript, Python, and Java. This made it easy for small and/or event-based applications to hit the ground running with Cloud Functions. Expertise in Linux and the Open Container Initiative made onboarding to Cloud Run simple and smoothed the learning curve as we experimented with Kubernetes.
Simplified enterprise-level management
Many teams have found that they prefer the serverless, hands-off approach that Cloud Run provides, and Colgate now evaluates Google Cloud’s serverless solutions alongside GKE for any applications destined for the cloud. At the same time, Cloud Run lets them continue to leverage their investment in workloads built on Open Container Initiative (OCI) images.
For example, Cloud Run is designed for Kubernetes compatibility, with consistent management capabilities such as managing resources with kubectl via Config Controller, and browsing logs and metrics from both platforms in Cloud Logging and Cloud Monitoring. Cloud Run and GKE data planes are also interoperable: services on either platform can be exposed within a VPC on private IPs using an internal load balancer. Having Cloud Run as an option has contributed to faster innovation, allowing Colgate to bring smiles to many more faces globally.
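One concrete piece of that interoperability is restricting a Cloud Run service to private, in-VPC traffic so it can sit behind an internal load balancer alongside GKE services. This is a sketch with a placeholder service name, not Colgate’s configuration:

```shell
# Restrict an existing Cloud Run service to internal traffic only,
# so it is reachable on private IPs within the VPC (e.g., via an
# internal load balancer). Service name and region are placeholders.
gcloud run services update my-internal-service \
  --region=us-central1 \
  --ingress=internal
```

With ingress set to internal, the service no longer accepts requests from the public internet, matching the private-IP exposure pattern described above.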
We have a growing number of applications running in the Cloud — migrations of legacy applications as well as new initiatives we’re starting to build. These applications exist on a wide spectrum of operating requirements, ages, and sizes, and we needed to find a foundation that could handle the heterogeneous demands our users had across our organization. We were pleased to find this in Google Cloud when many internal teams organically started exploring containers.
Google’s managed container offerings provide a composable and comprehensive set of solutions for customers’ applications. At Colgate, container-based managed services are used across the stack: on the front end, where they use Identity-Aware Proxy to manage authentication and external load balancers to handle incoming traffic with high availability and low latency; at the application layer, where they can choose from Cloud Functions, Cloud Run, or GKE, depending on the level of control they need over the application; and at the internal load balancing level, where NGINX® controllers serve internal applications. Together, these managed services give Colgate the flexibility to choose the right toolchain and meet its goals for each use case.
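The internal-application layer of a stack like this is typically wired up with a Kubernetes Ingress handled by the in-cluster NGINX controller. The manifest below is an illustrative sketch; the hostname and service name are placeholders, not Colgate’s configuration:

```yaml
# Illustrative Ingress routing an internal hostname to a cluster
# service via an NGINX ingress controller. Host and service names
# are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80
```

Pairing an Ingress like this with an internal load balancer keeps the application reachable only from inside the organization’s network.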
Colgate wanted to build internal applications on Cloud Run in a way that complied with their organization’s policies while maximizing developer productivity. They were able to use newly released capabilities, such as the generally available Identity-Aware Proxy integration for Cloud Run, to build a secure, serverless deployment for their applications.
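One common pattern for putting Identity-Aware Proxy in front of a Cloud Run service is to attach the service to a load balancer through a serverless network endpoint group and enable IAP on the backend service. The commands below are a sketch under that assumption; all names, the region, and the OAuth client values are placeholders:

```shell
# Create a serverless NEG pointing at a Cloud Run service
# (all names and the region are illustrative placeholders).
gcloud compute network-endpoint-groups create run-neg \
  --region=us-central1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=internal-app

# Attach the NEG to a backend service behind a load balancer.
gcloud compute backend-services add-backend run-backend \
  --global \
  --network-endpoint-group=run-neg \
  --network-endpoint-group-region=us-central1

# Enable Identity-Aware Proxy on the backend service.
# CLIENT_ID and CLIENT_SECRET stand in for a real OAuth client.
gcloud compute backend-services update run-backend \
  --global \
  --iap=enabled,oauth2-client-id=CLIENT_ID,oauth2-client-secret=CLIENT_SECRET
```

With IAP enabled, every request is authenticated and authorized against IAM policy before it ever reaches the application code.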
A great partnership with Google Cloud
Colgate and Google Cloud have enjoyed a deep partnership for many years, engaging across many technologies, teams, and design patterns.
Our deep partnership with the Google Cloud team has allowed us to get early access to capabilities that we’ve found valuable, give feedback on what can be improved, and enable our organization with best practices coming straight from the folks who built this technology.
They’ve engaged with Product and Engineering across compute, networking, Kubernetes, and serverless as they brought this new way of thinking to their users.
For Colgate and many of our customers looking to address the needs of the modern user, Google’s managed container offerings are a breath of fresh air. Their reliability, scalability, and control offer the flexibility to build applications that meet the demands of both internal and external consumers. In addition, the variety of container offerings available on Google Cloud — GKE and Cloud Run — lets customers make app deployment decisions based on how much Day 2 operations work they are willing to take on. Platform administrators appreciate the reduced management effort, users enjoy reduced downtime, and developers can simply deploy.
Google Cloud contributors: Rex Orioko, Rachel Tsao
Colgate-Palmolive contributors: Matthew Tattoli, Nicholas Farley, and David Wiser