Configure SQL Server on Google Cloud Platform using Cloud Volumes Service


This tutorial provides an overview of how to configure a highly resilient Google Cloud environment for SQL Server installed on Server Message Block (SMB) fileshare storage using NetApp Cloud Volumes Service (CVS) for Google Cloud.

Microsoft SQL Server Always On availability groups (AG) let you replicate databases across multiple SQL Server Enterprise instances. Traditional SQL Server Always On deployments typically require each node to maintain its own copy of the data, which can quickly become costly as the databases grow. For example, if the databases are 10 TB in size, each replica must have at least 10 TB of storage provisioned. One of the benefits of running SQL Server on a Windows Server Failover Cluster (WSFC) is that you only need one copy of the data, which is shared between the nodes of the cluster. Therefore, running SQL Server as an Always On failover cluster instance (FCI) on a WSFC can immediately reduce costs by lowering the amount of storage required.

NetApp Cloud Volumes Service is a cloud storage solution for running high-performance SQL Server workloads in combination with Compute Engine instances. Running SQL Server databases over SMB on Cloud Volumes Service provides a very high level of resiliency between the Google Cloud instances and the storage service. SMB Transparent Failover enables maintenance operations on Cloud Volumes Service without interrupting connectivity for server applications that store and access data on SMB volumes. To support SMB Transparent Failover, Cloud Volumes Service offers the SMB continuous availability (CA) shares option for use with SQL Server workloads. This provides performance, scalability, and cost benefits for single instances, Always On failover cluster instances (FCIs), and Always On availability group deployments.
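
To illustrate what this looks like in practice, the following sketch creates a database whose data and log files live directly on a continuously available SMB share, referenced by its UNC path. The database name and share path are hypothetical placeholders, and the sql network name is the one you configure later in this tutorial:

    # Hypothetical database name and SMB path; substitute your own values.
    & "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `
       -S sql -E -Q "CREATE DATABASE SalesDb ON (NAME = SalesDb_data, FILENAME = '\\cvs-smb-server\sqldata\SalesDb.mdf') LOG ON (NAME = SalesDb_log, FILENAME = '\\cvs-smb-server\sqldata\SalesDb.ldf')"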

This page is intended for NetApp Cloud Volumes Service and SQL Server database administrators who are responsible for deploying Microsoft SQL Server on Google Cloud. It assumes the following:

  • You are familiar with the various components of the solution
  • You have already deployed Active Directory on Google Cloud

Objectives

This tutorial shows you how to do the following:

  • Create a Cloud Volumes Service volume for SQL Server
  • Deploy a SQL Server failover cluster instance over SMB on Cloud Volumes Service

Costs

This tutorial uses billable components of Google Cloud, including the following:

  • Compute Engine
  • NetApp Cloud Volumes Service

Use the pricing calculator to generate a cost estimate based on your projected usage.

Before you begin

For this tutorial, you need a Google Cloud project. You can create a new one, or select a project you already created:

  1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  2. Make sure that billing is enabled for your Google Cloud project.

  3. Make sure that you have an Active Directory domain with at least one domain controller. You can create an Active Directory domain by using Managed Microsoft AD. Alternatively, you can deploy a custom Active Directory environment on Compute Engine and set up a private DNS forwarding zone that forwards DNS queries to your domain controllers.
  4. Make sure that you have an Active Directory user that has permission to join computers to the domain and can log in by using RDP. If you're using Managed Microsoft AD, you can use the setupadmin user.
  5. Make sure that your Google Cloud project and VPC have connectivity to your Active Directory domain controllers.
  6. Make sure that the NetApp Cloud Volumes API is enabled for your Google Cloud project.
  7. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

Prepare your project and network

To prepare your Google Cloud project and Virtual Private Cloud (VPC) for the deployment of SQL Server Always On availability groups, do the following:

  1. In the Google Cloud console, open Cloud Shell by clicking the Activate Cloud Shell button.

    Go to the Google Cloud console

  2. Initialize the following variables:

    VPC_NAME=VPC_NAME
    SUBNET_NAME=SUBNET_NAME
    

    Replace the following:

    • VPC_NAME: name of your VPC
    • SUBNET_NAME: name of your subnet
  3. Set your default project ID:

    gcloud config set project PROJECT_ID
    

    Replace PROJECT_ID with the ID of your Google Cloud project.

  4. Set your default region:

    gcloud config set compute/region REGION
    

    Replace REGION with the ID of the region you want to deploy in.

  5. Set your default zone:

    gcloud config set compute/zone ZONE
    

    Replace ZONE with the ID of the zone you want to deploy in.

Create an SMB volume on Cloud Volumes Service for SQL Server

To create a Cloud Volumes Service volume, you must set up private services access and create an Active Directory policy, a storage pool, and a storage volume.

  1. To set up private services access, do the following:

    1. In the Google Cloud console, go to the VPC network page.

      Go to VPC network

    2. Click the name of the VPC network that you want to use.

    3. Click the Private Services Access tab, and then click Allocate IP Range.

      1. Enter a name for the Internal IP range.
      2. Select Automatic, and enter the value 24 for the Prefix length.
      3. Click Allocate.
    4. Click the Private Connections To Services tab, and do the following:

      1. Click Create Connection.
      2. In the Assigned allocation drop-down list, select the previously allocated IP range.
      3. Click Connect.
  2. To create a new Active Directory policy, do the following:

    1. In the Google Cloud console, go to the NetApp Volumes page.

      Go to NetApp Volumes

    2. In the Navigation pane, click Active Directory policies under the Policies section.

    3. Click Create.

    4. Specify a name for the Active Directory policy.

    5. Under Region, select the available region closest to where the SQL Server instances will be hosted.

    6. In the Active Directory connection details section, provide the following information:

      1. Name of the domain
      2. Active Directory DNS server addresses
      3. Active Directory site name
      4. Name of the organizational unit
      5. NetBIOS name prefix
    7. In the Active Directory credentials section, provide the credentials for a user who is allowed to join computers to the specified domain.

    8. In the SMB settings section, provide the Backup Operators and Security privilege users, if applicable.

    9. Click Create to complete the process of creating a new Active Directory policy.

  3. To create a new storage pool, do the following:

    1. In the Navigation pane, click Storage pools under the Storage section.
    2. Click Create.
    3. Enter a name for the storage pool.
    4. Under Location, select the available region closest to where the SQL Server instances will be hosted.
    5. Select a Service Level that meets the expected storage performance.
    6. Enter the Capacity for the storage pool. Storage pool capacity defines the total storage capacity that can be allocated for all the volumes within the pool.
    7. In the Network list, select the VPC that the SQL Server instances will be hosted in, as follows:
      1. Click Set Up Connection.
      2. Select the IP address range reserved in the previous steps, and then click Continue.
      3. Click Create Connection.
    8. In the Active Directory policy section, select Assign an Active Directory policy to the storage pool and select the Active Directory policy created in the previous step.
    9. Click Create to complete the process of creating the storage pool.
  4. To create a new storage volume, do the following:

    1. In the Navigation pane, click Volumes under the Storage section.
    2. Click Create.
    3. Specify a name for the storage volume.
    4. Under Storage pool details, click Select storage pool to select the storage pool created in the previous step.
    5. In the Volume details section, provide the following information:
      1. Share name
      2. Volume capacity
    6. In the Large capacity configuration section, select SMB protocol.
    7. In the Configuration for selected protocol(s) section, select Enable CA share support for SQL Server, FSLogix.
    8. Click Create to complete the process of creating the storage volume.

Create firewall rules

To allow clients to connect to SQL Server, to allow communication between the Windows Server Failover Cluster (WSFC) nodes, and to enable the load balancer to perform health checks, you need to create several firewall rules. To simplify the creation of these firewall rules, you can use network tags:

  • The WSFC nodes are annotated with the wsfc-node tag.
  • All servers, including the witness, are annotated with the wsfc tag.

To create firewall rules that use these network tags, use the following steps:

  1. Return to your existing Cloud Shell session.
  2. Create firewall rules for the WSFC nodes.

    SUBNET_CIDR=$(gcloud compute networks subnets describe $SUBNET_NAME --format=value\(ipCidrRange\))
    
    gcloud compute firewall-rules create allow-all-between-wsfc-nodes \
      --direction=INGRESS \
      --action=allow \
      --rules=tcp,udp,icmp \
      --enable-logging \
      --source-tags=wsfc \
      --target-tags=wsfc \
      --network=$VPC_NAME \
      --priority 10000
    
    gcloud compute firewall-rules create allow-sql-to-wsfc-nodes \
      --direction=INGRESS \
      --action=allow \
      --rules=tcp:1433 \
      --enable-logging \
      --source-ranges=$SUBNET_CIDR \
      --target-tags=wsfc-node \
      --network=$VPC_NAME \
      --priority 10000
    
  3. Create a firewall rule that allows health checks from the IP ranges of the Google Cloud probers:

    gcloud compute firewall-rules create allow-health-check-to-wsfc-nodes \
      --direction=INGRESS \
      --action=allow \
      --rules=tcp \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=wsfc-node \
      --network=$VPC_NAME \
      --priority 10000
    

Create VM instances

You will now deploy three VM instances for the failover cluster.

Because the failover cluster instance shares a single copy of the databases on the SMB volume, node-1 will initially run the active SQL Server instance and node-2 will serve as the standby node. The two VM instances must:

  • be located in the same region so that they can be accessed by an internal passthrough Network Load Balancer
  • be created from a SQL Server 2022 premium image
  • have Compute Engine WSFC support enabled

To provide a tie-breaking vote and achieve a quorum for the failover scenario, you deploy a third VM instance, witness, that serves as a file share witness.

  1. Return to your existing Cloud Shell session.
  2. Create a script for the WSFC nodes. The script installs the necessary Windows feature and creates firewall rules for WSFC and SQL Server:

    cat << "EOF" > specialize-node.ps1
    
    $ErrorActionPreference = "stop"
    
    # Install required Windows features
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    Install-WindowsFeature RSAT-AD-PowerShell
    
    # Open firewall for WSFC
    netsh advfirewall firewall add rule name="Allow SQL Server health check" dir=in action=allow protocol=TCP localport=59997
    
    # Open firewall for SQL Server
    netsh advfirewall firewall add rule name="Allow SQL Server" dir=in action=allow protocol=TCP localport=1433
    
    # Open firewall for SQL Server replication
    netsh advfirewall firewall add rule name="Allow SQL Server replication" dir=in action=allow protocol=TCP localport=5022
    
    EOF
    
  3. Create the VM instances. On the two VMs that serve as WSFC nodes, enable Windows Server Failover Clustering support by setting the metadata key enable-wsfc to true:

    REGION=$(gcloud config get-value compute/region)
    MACHINE_TYPE=n2-standard-8
    
    gcloud compute instances create node-1 \
      --zone $REGION-a \
      --machine-type $MACHINE_TYPE \
      --subnet $SUBNET_NAME \
      --image-family sql-ent-2022-win-2022 \
      --image-project windows-sql-cloud \
      --tags wsfc,wsfc-node \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd \
      --boot-disk-device-name "node-1" \
      --metadata enable-wsfc=true \
      --metadata-from-file=sysprep-specialize-script-ps1=specialize-node.ps1
    
    gcloud compute instances create node-2 \
      --zone $REGION-b \
      --machine-type $MACHINE_TYPE \
      --subnet $SUBNET_NAME \
      --image-family sql-ent-2022-win-2022 \
      --image-project windows-sql-cloud \
      --tags wsfc,wsfc-node \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd \
      --boot-disk-device-name "node-2" \
      --metadata enable-wsfc=true \
      --metadata-from-file=sysprep-specialize-script-ps1=specialize-node.ps1
    
    gcloud compute instances create "witness" \
      --zone $REGION-c \
      --machine-type n2-standard-2 \
      --subnet $SUBNET_NAME \
      --image-family=windows-2022 \
      --image-project=windows-cloud \
      --tags wsfc \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd \
      --metadata sysprep-specialize-script-ps1="add-windowsfeature FS-FileServer"
    
  4. To join the three VM instances to Active Directory, do the following for each instance:

    1. Monitor the initialization process of the VM by viewing its serial port output:

      gcloud compute instances tail-serial-port-output NAME
      

      Replace NAME with the name of the VM instance.

      Wait for a few minutes until you see the output Instance setup finished, then press Ctrl+C. At this point, the VM instance is ready to be used.

    2. Create a username and password for the VM instance.

    3. Connect to the VM using Remote Desktop and sign in using the username and password created in the previous step.

    4. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).

    5. Confirm the elevation prompt by clicking Yes.

    6. Join the VM instance to your Active Directory domain and restart:

      Add-Computer -Domain DOMAIN -Restart
      

      Replace DOMAIN with the DNS name of your Active Directory domain.

      Wait for the VM instance to restart.
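
Optionally, after a VM restarts, you can confirm that it joined the domain by reconnecting with Remote Desktop and running the following checks in PowerShell. This is a quick sanity check, not a required step:

    # Shows the Active Directory domain that this computer is joined to.
    (Get-CimInstance Win32_ComputerSystem).Domain
    
    # Returns True if the secure channel to the domain is healthy.
    Test-ComputerSecureChannel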

Assign storage permissions

To assign permissions to the cloud volume storage, complete the following steps:

  1. In the Google Cloud console, go to the NetApp Volumes page, select the volume that you just created, and copy its SMB mount path.


  2. Connect to node-1 by using Remote Desktop. Sign in with your domain user account.

  3. Open Windows Explorer and right-click This PC.

  4. Select Map network drive.

  5. Paste the copied SMB path into the Folder field.


  6. Clear the Reconnect at sign-in option.

  7. Click Finish.

  8. Right-click the mapped network drive and select Properties.


  9. On the Security tab, click Edit.

  10. Remove Everyone, and then add the SQL Server installation account and the SQL Server service account with Full control permissions. Alternatively, you can set these permissions from PowerShell, as shown after this procedure.


  11. Click OK to apply the permissions.
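
If you prefer to script the share permissions instead of using the Properties dialog, the following sketch does the same thing with icacls from an elevated PowerShell session on node-1. The UNC path and the account names are placeholders; substitute the SMB path of your volume and the accounts that you use to install and run SQL Server:

    # Placeholder values; replace with your volume's SMB path and your accounts.
    $SharePath = '\\cvs-smb-server\sqldata'
    
    # Remove the default Everyone entry.
    icacls $SharePath /remove 'Everyone'
    
    # Grant Full control to the SQL Server installation and service accounts.
    icacls $SharePath /grant 'DOMAIN\sql_install:(OI)(CI)(F)'
    icacls $SharePath /grant 'DOMAIN\sql_server:(OI)(CI)(F)'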

Reserve cluster IP addresses

Reserve two static IP addresses in your VPC, one for the WSFC cluster IP address and another for the internal load balancer.

  1. Reserve a static IP for the internal load balancer and capture the address in a new environment variable named LOADBALANCER_ADDRESS:

    gcloud compute addresses create wsfc \
      --region $(gcloud config get-value compute/region) \
      --subnet $SUBNET_NAME
    
    LOADBALANCER_ADDRESS=$(gcloud compute addresses describe wsfc \
      --region $(gcloud config get-value compute/region) \
      --format=value\(address\)) && \
    echo "Load Balancer IP: $LOADBALANCER_ADDRESS"
    

    Note the IP address; you'll need it later.

  2. Reserve another static IP address that you use as cluster IP:

    gcloud compute addresses create wsfc-cluster \
      --subnet $SUBNET_NAME \
      --region $(gcloud config get-value compute/region) && \
    CLUSTER_ADDRESS=$(gcloud compute addresses describe wsfc-cluster \
      --region $(gcloud config get-value compute/region) \
      --format=value\(address\)) && \
    echo "Cluster IP: $CLUSTER_ADDRESS"
    

    Note the IP address; you'll need it later.

Your project and VPC are now ready for the deployment of the WSFC and SQL Server.

Create a witness file share

To prepare the witness VM to serve as a file share witness, create a file share and grant yourself and the two WSFC nodes access to it:

  1. Connect to witness by using Remote Desktop. Sign in with your domain user account.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  3. Confirm the elevation prompt by clicking Yes.
  4. Create the witness folder and share the folder:

    New-Item "C:\QWitness" -Type Directory
    
    icacls C:\QWitness\ /grant 'node-1$:(OI)(CI)(M)'
    icacls C:\QWitness\ /grant 'node-2$:(OI)(CI)(M)'
    
    New-SmbShare `
      -Name QWitness `
      -Path "C:\QWitness" `
      -Description "SQL File Share Witness" `
      -FullAccess  $env:username,node-1$,node-2$
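
Optionally, verify that the share was created and that both node computer accounts have access:

    # The share should be listed, and node-1$ and node-2$ should have Full access.
    Get-SmbShare -Name QWitness
    Get-SmbShareAccess -Name QWitness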
    

Deploying the failover cluster

You will now use the VM instances to deploy a WSFC and SQL Server.

Deploy WSFC

You are now ready to create the failover cluster:

  1. Connect to node-1 by using Remote Desktop. Sign in with your domain user account.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  3. Confirm the elevation prompt by clicking Yes.
  4. Create a new cluster:

    New-Cluster `
      -Name windows-fci `
      -Node node-1,node-2 `
      -NoStorage `
      -StaticAddress CLUSTER_ADDRESS
    

    Replace CLUSTER_ADDRESS with the cluster IP address that you created earlier.

  5. Return to the PowerShell session on witness and grant the virtual computer object of the cluster permission to access the file share:

    icacls C:\QWitness\ /grant 'windows-fci$:(OI)(CI)(M)'
    Grant-SmbShareAccess `
      -Name QWitness `
      -AccountName 'windows-fci$' `
      -AccessRight Full `
      -Force
    
  6. Return to the PowerShell session on node-1 and configure the cluster to use the file share on witness as a cluster quorum:

    Set-ClusterQuorum -FileShareWitness \\witness\QWitness
    
  7. Verify that the cluster was created successfully:

    Test-Cluster
    

    You might see some warnings that can be safely ignored:

    WARNING: System Configuration - Validate All Drivers Signed: The test reported some warnings..
    WARNING: Network - Validate Network Communication: The test reported some warnings..
    WARNING:
    Test Result:
    HadUnselectedTests, ClusterConditionallyApproved
    Testing has completed for the tests you selected. You should review the warnings in the Report.  A cluster solution is
    supported by Microsoft only if you run all cluster validation tests, and all tests succeed (with or without warnings).
    

    You can also launch the Failover Cluster Manager MMC snap-in to review the cluster's health by running cluadmin.msc.

  8. If you're using Managed AD, add the computer account used by WSFC to the Cloud Service Domain Join Accounts group so that it can join VM instances to the domain:

    Add-ADGroupMember `
      -Identity "Cloud Service Domain Join Accounts" `
      -Members windows-fci$
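
Besides the Failover Cluster Manager snap-in, you can check the cluster state from the same PowerShell session on node-1:

    # Both nodes should be listed with a State of Up.
    Get-ClusterNode
    
    # The quorum should be the file share witness on \\witness\QWitness.
    Get-ClusterQuorum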
    

Remove the default SQL Server installation

You will now remove the default SQL Server installation from the two nodes and replace it with a new FCI configuration.

For each of the two WSFC nodes, node-1 and node-2, perform the following steps:

  1. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  2. Confirm the elevation prompt by clicking Yes.
  3. Remove the default SQL Server instance:

    C:\sql_server_install\Setup.exe /Action=Uninstall /FEATURES=SQL,AS,IS,RS /INSTANCENAME=MSSQLSERVER /Q
    
  4. Remove Microsoft OLE Driver:

    Get-Package -Name "Microsoft OLE*" | Uninstall-Package -Force
    
  5. Remove Microsoft ODBC Driver:

    Get-Package -Name "Microsoft ODBC*" | Uninstall-Package -Force
    
  6. Restart the computer:

    Restart-Computer
    
  7. Wait for the VM instance to restart.

Install SQL Server FCI

Before you install the new FCI configuration, verify that node-1 is the active node in the cluster:

  1. Reconnect to node-1 by using Remote Desktop and sign in using your domain user.
  2. Right-click the Start button (or press Win+X) and select Run.
  3. Enter cluadmin.msc and select OK.
  4. In the left window pane, navigate to Failover Cluster Manager > windows-fci, and do the following:

    1. Verify that the current host server is set to node-1.

    2. If the current host server is set to node-2, right-click windows-fci in the left window pane, select More actions > Move core cluster resources > Select node, select node-1, and then click OK.

You will create a new SQL Server failover cluster installation on node-1:

  1. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  2. Confirm the elevation prompt by clicking Yes.

  3. Start the SQL Server setup:

    & c:\sql_server_install\setup.exe
    
  4. In the menu on the left, select Installation.

  5. Select New SQL Server failover cluster installation.

  6. On the Edition page, enter the product key, select the software license type, and click Next.

  7. On the License Terms page, review the terms and, if you accept, click Next.

  8. On the Microsoft Update page, click Next to start the installation.

  9. On the Install Failover Cluster Rules page, you might see warnings for MSCS cluster verification and the Windows firewall. You can ignore these warnings and click Next.

  10. On the Feature Selection page, select Database Engine Services and click Next.

  11. On the Instance Configuration page, enter sql as the SQL Server network name and click Next.

  12. On the Cluster Resource Group page, you can ignore the qualification warnings and click Next.

  13. On the Cluster Disk Selection page, click Next.

  14. On the Cluster Network Configuration page, configure the following settings, then click Next:

    • DHCP: clear
    • IP address: enter the IP address of the internal load balancer
  15. On the Server configuration page, configure the following settings for both SQL Server Agent and SQL Server Database Engine:

    • Account name: DOMAIN\sql_server, where DOMAIN is the NetBIOS name of your Active Directory domain and sql_server is a domain user account created to run the SQL Server services
    • Password: enter the sql_server user's password
  16. Select the Collation tab and select the collation that you want to use. Then click Next.

  17. On the Database Engine Configuration page, select Add current user to designate the current user as SQL Server administrator. On the Data Directories tab, enter the SMB path in the Data root directory field, and then select Next. A message appears asking you to verify that the SQL Server service account has full control permission on the share. Select Yes to proceed.


  18. On the Ready to Install page, review the settings, then select Install.

  19. After the installation completes, select Close.
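
Before you add the second node, you can optionally confirm from an elevated PowerShell session on node-1 that the SQL Server cluster role came online:

    # The SQL Server role and its resources should report an Online state on node-1.
    Get-ClusterGroup | Where-Object {$_.Name.StartsWith("SQL Server")}
    Get-ClusterResource | Where-Object {$_.Name.StartsWith("SQL")}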

Now add node-2 to the SQL Server failover cluster:

  1. Connect to node-2 by using Remote Desktop and sign in using your domain user.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).
  3. Confirm the elevation prompt by clicking Yes.
  4. Start the SQL Server setup:

    & c:\sql_server_install\setup.exe
    
  5. In the menu on the left, select Installation.

  6. Select Add node to a SQL Server failover cluster.

  7. Follow the instructions of the installation wizard and accept the default settings until you reach the Service Accounts page.

  8. On the Service Accounts page, enter the password that you created earlier for both SQL Server Agent and SQL Server Database Engine. Then select Next.

  9. On the Ready to Install page, review the settings, then select Install.

  10. After the installation completes, select Close.
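
To confirm that node-2 was registered as a possible owner of the SQL Server resources, you can run the following check from a PowerShell session on either node (the exact resource names can vary, so the filter below is only a sketch):

    # Each SQL Server resource should now list both node-1 and node-2 as possible owners.
    Get-ClusterResource | Where-Object {$_.Name.StartsWith("SQL")} | Get-ClusterOwnerNode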

Configure health checks

As a final step, configure the cluster to expose a health check endpoint that can be used by an internal load balancer:

  1. Return to the PowerShell session on node-2.
  2. Initialize a variable with the IP address of the load balancer.

    $LoadBalancerIP = 'IP_ADDRESS'
    

    Replace IP_ADDRESS with the load balancer IP address (the wsfc static address) that you reserved earlier.

  3. Configure the Failover Cluster to respond to the health check service:

    $SqlGroup = Get-ClusterGroup |
      Where-Object {$_.Name.StartsWith("SQL Server")}
    $SqlIpAddress = Get-ClusterResource |
      Where-Object {$_.Name.StartsWith("SQL IP Address")}
    
    $SqlIpAddress | Set-ClusterParameter -Multiple @{
     'Address'=$LoadBalancerIP;
     'ProbePort'= 59997;
     'SubnetMask'='255.255.255.255';
     'Network'= (Get-ClusterNetwork).Name;
     'EnableDhcp'=0; }
    
  4. Restart the cluster resource:

    $SqlIpAddress | Stop-ClusterResource
    $SqlIpAddress | Start-ClusterResource
    
  5. Restart the cluster group:

    $SqlGroup | Stop-ClusterGroup
    $SqlGroup | Start-ClusterGroup
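
Optionally, verify in the same PowerShell session that the parameters were applied and that the role is back online:

    # ProbePort should be 59997 and Address should be the load balancer IP.
    $SqlIpAddress | Get-ClusterParameter Address,ProbePort,SubnetMask,EnableDhcp
    
    # The SQL Server role should report an Online state again.
    Get-ClusterGroup | Where-Object {$_.Name.StartsWith("SQL Server")}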
    

Create an internal load balancer

To provide a single endpoint for SQL Server clients, you will now deploy an internal load balancer. The load balancer uses a health check which ensures that traffic is directed to the active node of the WSFC.

  1. Return to your existing Cloud Shell session.
  2. Create two unmanaged instance groups, one per zone, and add a node to each group:

    REGION=$(gcloud config get-value compute/region)
    
    gcloud compute instance-groups unmanaged create wsfc-group-a \
      --zone $REGION-a
    gcloud compute instance-groups unmanaged add-instances wsfc-group-a \
      --zone $REGION-a --instances node-1
    
    gcloud compute instance-groups unmanaged create wsfc-group-b \
      --zone $REGION-b
    gcloud compute instance-groups unmanaged add-instances wsfc-group-b \
      --zone $REGION-b --instances node-2
    
  3. Create a health check that the load balancer can use to determine which node is active:

    gcloud compute health-checks create tcp wsfc-healthcheck \
      --check-interval="2s" \
      --healthy-threshold=1 \
      --unhealthy-threshold=2 \
      --port=59997 \
      --timeout="1s"
    

    The health check probes port 59997, which is the port you previously configured as ProbePort for the WSFC cluster IP address.

  4. Create a backend service and add the instance group:

    gcloud compute backend-services create wsfc-backend \
      --load-balancing-scheme internal \
      --region $REGION \
      --health-checks wsfc-healthcheck \
      --protocol tcp
    
    gcloud compute backend-services add-backend wsfc-backend \
      --instance-group wsfc-group-a \
      --instance-group-zone $REGION-a \
      --region $REGION
    gcloud compute backend-services add-backend wsfc-backend \
      --instance-group wsfc-group-b \
      --instance-group-zone $REGION-b \
      --region $REGION
    
  5. Create the internal load balancer:

    gcloud compute forwarding-rules create wsfc-sql \
      --load-balancing-scheme internal \
      --address $LOADBALANCER_ADDRESS \
      --ports 1433 \
      --network $VPC_NAME \
      --subnet $SUBNET_NAME \
      --region $REGION \
      --backend-service wsfc-backend
    

Testing the failover cluster

You've completed the installation of the failover cluster, but you still have to test whether the cluster works correctly.

Prepare a client

Create a new VM instance which you can use to connect to the failover cluster:

  1. Return to your existing Cloud Shell session.
  2. Create a new VM instance:

    gcloud compute instances create sqlclient \
      --machine-type n2-standard-2 \
      --subnet $SUBNET_NAME \
      --image-family sql-ent-2022-win-2022 \
      --image-project windows-sql-cloud \
      --boot-disk-size 50 \
      --boot-disk-type pd-ssd
    
  3. Monitor the initialization process of the VM by viewing its serial port output:

    gcloud compute instances tail-serial-port-output sqlclient
    

    Wait for a few minutes until you see the output Instance setup finished, then press Ctrl+C. At this point, the VM instance is ready to be used.

  4. Create a username and password for the VM instance.

  5. Connect to the VM by using Remote Desktop and sign in using the username and password created in the previous step.

  6. Right-click the Start button (or press Win+X) and click Windows PowerShell (Admin).

  7. Confirm the elevation prompt by clicking Yes.

  8. Join the VM instance to your Active Directory domain:

    Add-Computer -Domain DOMAIN
    

    Replace DOMAIN with the DNS name of your Active Directory domain.

  9. Restart the VM instance:

    Restart-Computer
    

    Wait for the VM instance to restart.

Run the test

Use the sqlclient VM to test that you can connect to the failover cluster and to verify that the failover works correctly:

  1. Connect to sqlclient by using Remote Desktop and sign in using your domain user.
  2. Right-click the Start button (or press Win+X) and click Windows PowerShell.
  3. Connect to the SQL Server cluster by using its network name sql and query the sys.dm_os_cluster_nodes dynamic management view:

    & "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `
       -S sql -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"
    

    The output should look like this:

    NodeName                       status      status_description is_current_owner
    ------------------------------ ----------- ------------------ ----------------
    NODE-1                                   0 up                                1
    NODE-2                                   0 up                                0
    
    (2 rows affected)
    

    Notice that node-1 is the current owner of the SQL Server failover cluster resource.

  4. Return to Cloud Shell and bring down the node-1 VM to test the failover scenario:

    gcloud compute instances stop node-1
    
  5. Repeat the query:

    & "$env:ProgramFiles\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE" `
       -S sql -E -Q "SELECT * FROM sys.dm_os_cluster_nodes"
    

    The output should now look like this:

    NodeName                       status      status_description is_current_owner
    ------------------------------ ----------- ------------------ ----------------
    NODE-1                                   1 down                              0
    NODE-2                                   0 up                                1
    
    (2 rows affected)
    

    Notice that despite the loss of node-1, the query succeeds, and shows that node-2 is now the current owner of the failover cluster.
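
When you're done testing, you can start node-1 again from Cloud Shell by running gcloud compute instances start node-1. If you then want to move the SQL Server role back to node-1 (this is optional; the cluster runs from either node), you can do so from an elevated PowerShell session on node-2:

    # Move the SQL Server role back to node-1 after it has rejoined the cluster.
    Get-ClusterGroup | Where-Object {$_.Name.StartsWith("SQL Server")} | Move-ClusterGroup -Node node-1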

Clean up

After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.

Deleting the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.