Run a calculation on a Cloud TPU VM using TensorFlow

This quickstart shows you how to create a Cloud TPU, install TensorFlow, and run a calculation on the Cloud TPU. For a more in-depth tutorial showing how to train a model on a Cloud TPU, see one of the Cloud TPU Tutorials.

Before you begin

Before you follow this quickstart, you must create a Google Cloud account, install the Google Cloud CLI, and configure the gcloud command. For more information, see Set up an account and a Cloud TPU project.

Create a Cloud TPU VM with gcloud

Launch a Compute Engine Cloud TPU using the gcloud command. For more information on the gcloud command, see the gcloud reference.

When launching the TPU VM, you can use the default TPU software version shown in the following command, or you can specify another supported version listed in Cloud TPU software versions.

  $ gcloud compute tpus tpu-vm create tpu-name \
    --zone=europe-west4-a \
    --accelerator-type=v3-8 \
    --version=tpu-vm-tf-2.16.1-pjrt

Command flag descriptions

tpu-name
The name of the Cloud TPU to create.
zone
The zone where you plan to create your Cloud TPU.
accelerator-type
The accelerator type specifies the version and size of the Cloud TPU you want to create. For more information about supported accelerator types for each TPU version, see TPU versions.
version
The TPU software version.

Connect to your Cloud TPU VM

You are not automatically connected to your TPU VM; you must explicitly connect to it using SSH with the following command.

  $ gcloud compute tpus tpu-vm ssh tpu-name \
      --zone europe-west4-a

Run an example using TensorFlow

Once you are connected to the TPU VM, set the following environment variable.

  (vm)$ export TPU_NAME=local
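
The script you run later in this quickstart creates a TPUClusterResolver with no arguments, which resolves the TPU name from this TPU_NAME variable; on a TPU VM, the value local indicates the TPU is attached directly to the VM. As a minimal sketch (assuming TensorFlow is installed on the VM), the explicit equivalent is:

import tensorflow as tf

# Equivalent to relying on the TPU_NAME environment variable: pass the
# TPU name explicitly. "local" means the TPU is attached to this VM.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")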
  

When creating your TPU, if you set the --version parameter to a version ending with -pjrt, set the following environment variables to enable the PJRT runtime:

  (vm)$ export NEXT_PLUGGABLE_DEVICE_USE_C_API=true
  (vm)$ export TF_PLUGGABLE_DEVICE_LIBRARY_PATH=/lib/libtpu.so
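
Before running the full example, you can optionally confirm that TensorFlow can see the TPU. This is a sanity-check sketch, not part of the quickstart script; it assumes TensorFlow is installed and the variables above are set:

import tensorflow as tf

# Initialize the TPU system and list the logical TPU devices.
# A v3-8 should report eight TPU devices.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print(tf.config.list_logical_devices("TPU"))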

Create a file named tpu-test.py in the current directory, and copy and paste the following script into it.

import tensorflow as tf
print("Tensorflow version " + tf.__version__)

@tf.function
def add_fn(x, y):
    z = x + y
    return z

# Find the TPU, connect to it, and initialize the TPU system.
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)

x = tf.constant(1.)
y = tf.constant(1.)
# Run add_fn on every TPU replica (one per TensorCore).
z = strategy.run(add_fn, args=(x, y))
print(z)

Run this script with the following command:

(vm)$ python3 tpu-test.py

This script performs a computation on each TensorCore of the TPU. The output will look similar to the following:

PerReplica:{
  0: tf.Tensor(2.0, shape=(), dtype=float32),
  1: tf.Tensor(2.0, shape=(), dtype=float32),
  2: tf.Tensor(2.0, shape=(), dtype=float32),
  3: tf.Tensor(2.0, shape=(), dtype=float32),
  4: tf.Tensor(2.0, shape=(), dtype=float32),
  5: tf.Tensor(2.0, shape=(), dtype=float32),
  6: tf.Tensor(2.0, shape=(), dtype=float32),
  7: tf.Tensor(2.0, shape=(), dtype=float32)
}
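
strategy.run returns a PerReplica container holding one result per TensorCore. If you want a single combined value instead, you can reduce across replicas. A minimal sketch, appended to the end of tpu-test.py:

# Sum the per-replica results into a single tensor.
total = strategy.reduce(tf.distribute.ReduceOp.SUM, z, axis=None)
print(total)  # tf.Tensor(16.0, shape=(), dtype=float32) on a v3-8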

Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.

  1. Disconnect from the Compute Engine instance, if you have not already done so:

    (vm)$ exit
    

    Your prompt should now be username@projectname, showing you are in the Cloud Shell.

  2. Delete your Cloud TPU.

      $ gcloud compute tpus tpu-vm delete tpu-name \
        --zone=europe-west4-a

  3. Verify the resources have been deleted by running gcloud compute tpus tpu-vm list. The deletion might take several minutes.

      $ gcloud compute tpus tpu-vm list --zone=europe-west4-a
      

What's next

For more information about Cloud TPU, see the Cloud TPU documentation and the Cloud TPU Tutorials.