Shared VPCs adhere to the principle of least privilege, where only the centralized network and security teams have permissions to create and manage the networking infrastructure in the host project. Developers, on the other hand, only have permissions to deploy compute, storage, and other resources in the service projects.
This approach works well in scenarios where applications require access to other resources running in Google Cloud, on-premises, or on other cloud providers. However, not all use cases require such access. For example, consider a Dataflow pipeline that processes data in BigQuery (BQ) and/or Google Cloud Storage (GCS) and only needs a local network to create the compute machines that execute the job.
Another illustrative example is a development team with a Google Kubernetes Engine (GKE) Autopilot cluster that hosts a set of internal applications and does not require connectivity beyond the cluster.
For use cases that do not require outside connectivity, depending on access to the centralized Shared VPC is unnecessary. To provide more flexibility, categorize applications by connectivity requirements: place applications that require connectivity to on-premises or other networks in the Shared VPC, and deploy applications that do not require outside connectivity in a VPC created in the service project. It is important to note that a service project can use both a Shared VPC from the host project and local VPCs created and managed in the service project.
This blog discusses design considerations for managing network connectivity through categorized VPCs and how they can be used to offer developer-centric networking while maintaining security.
Flexibility, Security and Scalability
The objective is to give developers flexibility and autonomy in designing their service project's local VPC, in collaboration with the network team, without compromising security or scalability. This gives developers an easy and quick onboarding experience for their applications in Google Cloud, without being blocked by central network controls such as change requests for firewall rules, IAM access, and VPCs. This is especially important for workloads that are isolated and do not require any outside connectivity.
Allowing developers to create a VPC network is appealing, but security guardrails must be maintained. Therefore, it's recommended to automate VPC creation as part of project creation using Infrastructure as Code (IaC). The network team owns the guidelines encoded in the VPC creation IaC, and developers are given access to a pre-created VPC. Changes to the local VPC go through a GitHub-style pull request workflow in which the central network admin and/or Google Cloud platform admin teams review the application-team-driven configuration for these project-local VPCs, as sketched below.
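To make this concrete, here is a minimal sketch of what such automated VPC creation could look like, using the google-cloud-compute Python client. The project ID, network name, region, and CIDR range are hypothetical placeholders; in practice this logic would live in your IaC tooling and be reviewed by the network team.

```python
# Minimal sketch: provision a project-local VPC during project onboarding.
# Assumes the google-cloud-compute library; all resource names, the project
# ID, and the CIDR range are hypothetical placeholders.
from google.cloud import compute_v1

PROJECT_ID = "service-project-123"  # hypothetical service project
REGION = "us-central1"

def create_local_vpc(project_id: str, region: str) -> None:
    # Create a custom-mode VPC so subnets are defined explicitly.
    network_client = compute_v1.NetworksClient()
    network = compute_v1.Network(
        name="local-vpc",
        auto_create_subnetworks=False,
    )
    network_client.insert(project=project_id, network_resource=network).result()

    # Add a subnet with Private Google Access enabled so private-only
    # workloads can still reach Google APIs without external IPs.
    subnet_client = compute_v1.SubnetworksClient()
    subnet = compute_v1.Subnetwork(
        name="local-subnet",
        ip_cidr_range="10.10.0.0/24",
        network=f"projects/{project_id}/global/networks/local-vpc",
        private_ip_google_access=True,
    )
    subnet_client.insert(
        project=project_id, region=region, subnetwork_resource=subnet
    ).result()

if __name__ == "__main__":
    create_local_vpc(PROJECT_ID, REGION)
```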
Use organization policies to restrict the networking components that developers can deploy in the service project VPC. For example, organization policies can block the assignment of public IP addresses to VMs, as well as the creation of external load balancers, VPC peerings, VPNs, and NAT gateways. Consider using hierarchical firewall policies at the folder or organization level to control network access at a higher level.
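As one example, the following sketch uses the Org Policy v2 Python client to deny external IP addresses on VMs in a service project. The project ID is a placeholder, and the same pattern applies to other constraints such as restricting VPC peering.

```python
# Minimal sketch: enforce an organization policy on a service project.
# Assumes the google-cloud-org-policy library; "service-project-123" is a
# hypothetical project ID.
from google.cloud import orgpolicy_v2

def deny_external_ips(project_id: str) -> None:
    client = orgpolicy_v2.OrgPolicyClient()
    # Deny all external IP assignments for VMs in this project.
    policy = orgpolicy_v2.Policy(
        name=f"projects/{project_id}/policies/compute.vmExternalIpAccess",
        spec=orgpolicy_v2.PolicySpec(
            rules=[orgpolicy_v2.PolicySpec.PolicyRule(deny_all=True)]
        ),
    )
    client.create_policy(parent=f"projects/{project_id}", policy=policy)

if __name__ == "__main__":
    deny_external_ips("service-project-123")
```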
Some VPC quotas and limits apply per network. By moving isolated workloads to local service project VPCs, you can lower quota consumption on the central Shared VPC.
Design aspects to understand
Use these reference designs as baselines to build something that meets your needs.
1. VPC categorization
Project local and Shared VPC access topology
This design gives a service project access to both a project-local VPC and the Shared VPC. The idea is that applications and services internal to the project use the local VPC, and use the Shared VPC or another connectivity mechanism, as applicable, for extended connectivity.
For example, a Cloud Run app may be expected to access public artifacts through corporate proxies. Similarly, a Cloud Build job running on private worker pools connected to the Shared VPC can download build-time dependencies from public registries. Likewise, a private GKE cluster deployed on the local VPC can host applications that communicate entirely within the cluster.
It's important to understand that there is no default connectivity between VPCs, and no changes to the Shared VPC design are proposed.
2. Connectivity to Google Managed Services
A project is not required to have access to the Shared VPC; it can include only a local VPC. Clients in a local VPC can access private services access (PSA) services such as Filestore and Cloud SQL, or services exposed through Google Cloud APIs. This design supports quick go-to-market, private-only deployments while still allowing access to Google-managed services and APIs. Often, users only need access to Google Cloud APIs.
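For instance, reaching a PSA service like Cloud SQL from the local VPC requires an allocated internal range and a private connection to the service producer. The sketch below reserves such a range with the compute client; the peering itself is then established through the Service Networking API, which is omitted here, and all names and ranges are placeholders.

```python
# Minimal sketch: allocate an internal IP range for private services
# access (PSA) in a project-local VPC. Names and the CIDR range are
# hypothetical; the peering connection is subsequently created via the
# Service Networking API (servicenetworking.googleapis.com).
from google.cloud import compute_v1

def reserve_psa_range(project_id: str, network: str) -> None:
    client = compute_v1.GlobalAddressesClient()
    address = compute_v1.Address(
        name="psa-range",
        purpose="VPC_PEERING",   # marks the range for service producer peering
        address_type="INTERNAL",
        address="10.20.0.0",
        prefix_length=16,
        network=f"projects/{project_id}/global/networks/{network}",
    )
    client.insert(project=project_id, address_resource=address).result()

if __name__ == "__main__":
    reserve_psa_range("service-project-123", "local-vpc")
```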
Access to the centrally administered Shared VPC should only be granted selectively, when needed. Network admins and/or SREs should audit Shared VPC usage to find unused projects and restrict Shared VPC access to active projects that require it. VPC Flow Logs can be exported to BigQuery for this analysis.
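As a sketch of such an audit, assuming VPC Flow Logs from the Shared VPC are routed to BigQuery via a log sink, a query like the following can surface which service projects are actually generating traffic. The dataset and table names, and the exact schema fields, are assumptions that depend on your sink configuration.

```python
# Minimal sketch: summarize Shared VPC usage by service project from VPC
# Flow Logs exported to BigQuery. Dataset/table names and schema fields
# are assumptions based on a typical log-sink setup.
from google.cloud import bigquery

QUERY = """
SELECT
  jsonPayload.src_vpc.project_id AS service_project,
  COUNT(*) AS flow_count
FROM `host-project-123.vpc_logs.compute_googleapis_com_vpc_flows_*`
WHERE _TABLE_SUFFIX >=
  FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY))
GROUP BY service_project
ORDER BY flow_count DESC
"""

def audit_shared_vpc_usage() -> None:
    client = bigquery.Client()
    for row in client.query(QUERY).result():
        print(f"{row.service_project}: {row.flow_count} flows in last 30 days")

if __name__ == "__main__":
    audit_shared_vpc_usage()
```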
By default, this design does not provide connectivity to or from on-premises or other networks.
3. Cross Project/Org Network connectivity
Access to endpoints outside of the VPC can be selectively exposed through Private Service Connect (PSC), VPC Network Peering, or a suitable load balancer (LB) stack for ingress. PSC for managed services is a mechanism to extend local or Shared VPC connectivity, either within the same Google Cloud organization or across organizations. As an alternative, VPC peering provides broader connectivity for both ingress and egress.
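As a sketch, a PSC endpoint in the consumer VPC is simply a forwarding rule whose target is the producer's service attachment. The example below assumes an internal address named psc-ip was reserved in the consumer subnet beforehand; all resource names are placeholders.

```python
# Minimal sketch: create a PSC endpoint (a forwarding rule targeting a
# producer's service attachment) in a project-local VPC. Assumes an
# internal address "psc-ip" was reserved beforehand; all names are
# hypothetical.
from google.cloud import compute_v1

def create_psc_endpoint(project_id: str, region: str,
                        service_attachment: str) -> None:
    client = compute_v1.ForwardingRulesClient()
    rule = compute_v1.ForwardingRule(
        name="psc-endpoint",
        # Note: the IPAddress field is exposed as I_p_address in the
        # Python client; it references the pre-reserved internal address.
        I_p_address=f"projects/{project_id}/regions/{region}/addresses/psc-ip",
        network=f"projects/{project_id}/global/networks/local-vpc",
        target=service_attachment,
    )
    client.insert(
        project=project_id, region=region, forwarding_rule_resource=rule
    ).result()

if __name__ == "__main__":
    create_psc_endpoint(
        "service-project-123",
        "us-central1",
        "projects/producer-project/regions/us-central1/serviceAttachments/my-svc",
    )
```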
In conclusion, this blog presents a developer-oriented network design perspective that balances security and flexibility. The design includes a project-specific local VPC and selective access to a centrally managed Shared VPC. It allows enterprises to categorize network access and minimize the risk of unauthorized access.
Learn More
To learn more about private networking concepts and specific examples, check the following:
- Networking options for Dataflow service
- Publishing services through Private Service Connect
- GKE Private clusters