Network configuration for compute nodes

In this tutorial, we will learn about network configuration for compute nodes in Google Cloud, focusing on sole-tenant nodes in Compute Engine.

By default, Compute Engine VMs run on physical hosts that are shared with other projects. Sole-tenancy, by contrast, gives you exclusive access to a sole-tenant node: a physical Compute Engine server devoted solely to your project’s VMs. Use sole-tenant nodes to keep your VMs physically separate from VMs in other projects.

Virtual machines running on sole-tenant nodes can use the same Compute Engine features as regular VMs, including transparent scheduling and block storage, with an added layer of hardware isolation. Each sole-tenant node maintains a one-to-one mapping to the physical server backing it, giving you complete control over the VMs on that server.

You can provision multiple VMs of various machine types and sizes on a sole-tenant node, which lets you make efficient use of the dedicated host’s underlying resources. Because you aren’t sharing the host hardware with other projects, you can also meet security or compliance requirements for workloads that need physical isolation from other workloads or VMs. And if a workload needs sole tenancy only for a limited time, you can change a VM’s tenancy as needed.

Node templates
  • A node template is a regional resource that defines the properties of each node in a node group.
  • When you use a node template to create a node group, the template’s properties are copied immutably to each node in the group.
  • When creating a node template, you can specify a node type and, optionally, node affinity labels. A node template is the only place where you can define node affinity labels; you cannot set them on a node group. A minimal sketch follows this list.
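
To make this concrete, here is a rough sketch of creating such a node template with the google-cloud-compute Python client library. The project, region, template name, node type, and the workload=sensitive affinity label are placeholders chosen for illustration, not values from this tutorial.

  # Sketch: create a sole-tenant node template that carries a custom
  # node affinity label. Assumes the google-cloud-compute client library;
  # all names and values below are placeholders.
  from google.cloud import compute_v1

  def create_node_template(project: str, region: str) -> None:
      template = compute_v1.NodeTemplate(
          name="sole-tenant-template",                      # hypothetical template name
          node_type="n1-node-96-624",                       # example sole-tenant node type
          node_affinity_labels={"workload": "sensitive"},   # custom affinity label
      )
      client = compute_v1.NodeTemplatesClient()
      operation = client.insert(
          project=project,
          region=region,
          node_template_resource=template,
      )
      operation.result()  # block until the regional operation completes

  create_node_template("my-project", "us-central1")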

Node groups and VM provisioning

Sole-tenant node templates define the properties of a node group, so you must create a node template before creating a node group in a Google Cloud zone. When you create a node group, you specify the group’s VM instance maintenance policy and the number of nodes it contains; a node group can have zero or more nodes.

When you don’t need to run any VM instances on nodes in a node group, for example, you can lower the number of nodes in the group to zero. You may also use the node group autoscaler to automatically adjust the size of the node group.
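
As a rough sketch of those two operations with the google-cloud-compute Python client, the snippet below creates a node group from the template defined earlier and turns on the node group autoscaler with a minimum of zero nodes. The group name, zone, and size limits are illustrative placeholders.

  # Sketch: create a node group that uses the template above and let the
  # node group autoscaler manage its size. All names and limits are
  # placeholders, not recommendations.
  from google.cloud import compute_v1

  def create_node_group(project: str, region: str, zone: str) -> None:
      group = compute_v1.NodeGroup(
          name="sole-tenant-group",
          node_template=f"regions/{region}/nodeTemplates/sole-tenant-template",
          autoscaling_policy=compute_v1.NodeGroupAutoscalingPolicy(
              mode="ON",       # autoscaler adds and removes nodes as needed
              min_nodes=0,     # allow the group to shrink to zero nodes
              max_nodes=5,
          ),
      )
      client = compute_v1.NodeGroupsClient()
      operation = client.insert(
          project=project,
          zone=zone,
          initial_node_count=1,
          node_group_resource=group,
      )
      operation.result()

  create_node_group("my-project", "us-central1", "us-central1-a")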

  • Further, before provisioning VMs on sole-tenant nodes, you must create a sole-tenant node group. A node group is a homogeneous set of sole-tenant nodes in a specific zone. Node groups can contain multiple VMs running on machine types of various sizes, as long as the machine type has 2 or more vCPUs.
  • Next, when you create a node group, enable autoscaling so that the size of the group adjusts automatically to meet the requirements of your workload. If your workload requirements are static, you can manually specify the size of the node group.
  • After creating a node group, you can provision VMs on the group or on a specific node within the group. For further control, use node affinity labels to schedule VMs on any node with matching affinity labels (see the sketch after this list).
  • After you’ve provisioned VMs on node groups, and optionally assigned affinity labels to provision VMs on specific node groups or nodes, consider labeling your resources to help manage your VMs.
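
The following sketch shows the provisioning step from the list above: it creates a VM and pins it to the node group from the earlier example by referencing the default compute.googleapis.com/node-group-name affinity label. The image, machine type, and network values are placeholders, and the code assumes the google-cloud-compute Python client.

  # Sketch: provision a VM on a specific sole-tenant node group by using the
  # default node-group-name affinity label. Names and values are placeholders.
  from google.cloud import compute_v1

  def create_vm_on_node_group(project: str, zone: str) -> None:
      instance = compute_v1.Instance(
          name="sole-tenant-vm",
          machine_type=f"zones/{zone}/machineTypes/n1-standard-2",  # 2+ vCPUs required
          disks=[
              compute_v1.AttachedDisk(
                  boot=True,
                  auto_delete=True,
                  initialize_params=compute_v1.AttachedDiskInitializeParams(
                      source_image="projects/debian-cloud/global/images/family/debian-12",
                  ),
              )
          ],
          network_interfaces=[
              compute_v1.NetworkInterface(network="global/networks/default")
          ],
          scheduling=compute_v1.Scheduling(
              node_affinities=[
                  compute_v1.SchedulingNodeAffinity(
                      key="compute.googleapis.com/node-group-name",  # default label
                      operator="IN",
                      values=["sole-tenant-group"],
                  )
              ],
          ),
      )
      client = compute_v1.InstancesClient()
      client.insert(project=project, zone=zone, instance_resource=instance).result()

  create_vm_on_node_group("my-project", "us-central1-a")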

Maintenance policies

Depending on your licensing conditions and workloads, you may wish to limit the number of physical cores used by your VMs. The maintenance policy you select may be influenced by your licensing or compliance requirements, for example, or by a need to limit the use of physical servers. With all of these policies, your VMs continue to run on dedicated hardware.

During maintenance, Compute Engine live migrates all of the VMs on the host to a new sole-tenant node as a group. In rare cases, however, Compute Engine may split the VMs into smaller groups and live migrate each smaller group to its own sole-tenant node. The available maintenance policies are:

  • Firstly, Default. This is the default maintenance policy. VMs on node groups configured with this policy follow the traditional maintenance behavior of non-sole-tenant VMs: depending on the VM’s on-host maintenance setting, VMs live migrate to a new sole-tenant node in the node group before a host maintenance event, and that new sole-tenant node runs only the customer’s VMs.
  • Secondly, Restart in place. With this maintenance policy, Compute Engine stops VMs during maintenance events and then restarts them on the same physical server after the maintenance event. You must set the VM’s on-host maintenance setting to TERMINATE when using this policy (see the sketch after this list).
  • Lastly, Migrate within node group. With this maintenance policy, Compute Engine live migrates VMs within a fixed-size group of physical servers during maintenance events, which helps limit the number of unique physical servers used by the VM.
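
As a rough illustration of the second policy, the sketch below chooses RESTART_IN_PLACE when defining a node group and sets on-host maintenance to TERMINATE in the scheduling of the VMs placed on it. Field names assume the google-cloud-compute Python client; the group and template names are placeholders.

  # Sketch: a node group configured for the restart-in-place maintenance
  # policy, plus the matching VM scheduling. Names are placeholders.
  from google.cloud import compute_v1

  group = compute_v1.NodeGroup(
      name="restart-in-place-group",
      node_template="regions/us-central1/nodeTemplates/sole-tenant-template",
      maintenance_policy="RESTART_IN_PLACE",  # or "DEFAULT" / "MIGRATE_WITHIN_NODE_GROUP"
  )

  # VMs scheduled onto this group must not attempt live migration:
  scheduling = compute_v1.Scheduling(
      on_host_maintenance="TERMINATE",  # stop and restart in place during maintenance
      node_affinities=[
          compute_v1.SchedulingNodeAffinity(
              key="compute.googleapis.com/node-group-name",
              operator="IN",
              values=["restart-in-place-group"],
          )
      ],
  )
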
Unplanned maintenance

In the rare case of a critical hardware failure, Compute Engine:

  • Firstly, retires the physical server and its unique identifier.
  • Secondly, revokes your project’s access to the physical server.
  • Thirdly, replaces the failed hardware with a new physical server that has a new unique identifier.
  • Next, moves the VMs from the failed hardware to the replacement node.
  • Lastly, restarts the affected VMs if you configured them to restart automatically (see the sketch below).
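
The automatic restart behavior mentioned in the last item is part of the VM’s scheduling configuration. A minimal sketch, assuming the google-cloud-compute Python client, of the setting you would pass when creating the VM (as in the earlier provisioning example):

  # Sketch: opt a VM into automatic restart so Compute Engine brings it back
  # up after an unplanned maintenance event. Pass this scheduling object when
  # creating the VM; the field name follows the google-cloud-compute client.
  from google.cloud import compute_v1

  scheduling = compute_v1.Scheduling(
      automatic_restart=True,  # restart the VM after a host failure
  )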

Node affinity and anti-affinity

  • Sole-tenant nodes ensure that your VMs do not share host hardware with VMs from other projects. However, you still might want to group several workloads together on the same sole-tenant node or isolate your workloads from one another on different nodes. For example, to help meet some compliance requirements, you might need to use affinity labels to separate sensitive workloads from non-sensitive workloads.
  • Further, when you create a VM, you request sole-tenancy by specifying node affinity or anti-affinity, referencing one or more node affinity labels. You specify custom node affinity labels when you create a node template.
  • In addition, Compute Engine automatically includes some default affinity labels on each node.
    • By specifying affinity when you create a VM, you can schedule VMs together on a specific node or nodes in a node group.
    • By specifying anti-affinity when you create a VM, you can ensure that certain VMs are not scheduled together on the same node or nodes in a node group.
Node affinity labels are key-value pairs assigned to nodes and are inherited from the node template. Affinity labels let you:
  • Firstly, control how individual VM instances are assigned to nodes.
  • Secondly, control how VM instances created from an instance template, such as those created by a managed instance group, are assigned to nodes.
  • Lastly, group sensitive VM instances on specific nodes or node groups, separate from other VMs (see the sketch after this list).
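
To illustrate both directions, the sketch below builds one scheduling configuration with affinity (IN) and one with anti-affinity (NOT_IN) against the hypothetical workload=sensitive label from the node template example earlier, assuming the google-cloud-compute Python client.

  # Sketch: custom affinity labels used for affinity and anti-affinity at VM
  # creation time. The "workload" key is the placeholder label defined on the
  # node template in the earlier example.
  from google.cloud import compute_v1

  # Schedule a VM only on nodes labeled workload=sensitive.
  affinity = compute_v1.Scheduling(
      node_affinities=[
          compute_v1.SchedulingNodeAffinity(
              key="workload",
              operator="IN",
              values=["sensitive"],
          )
      ]
  )

  # Keep a VM off any node labeled workload=sensitive.
  anti_affinity = compute_v1.Scheduling(
      node_affinities=[
          compute_v1.SchedulingNodeAffinity(
              key="workload",
              operator="NOT_IN",
              values=["sensitive"],
          )
      ]
  )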

Reference: Google Documentation
