Partner Guide - Consul NIA, Terraform, and Zscaler
Migrating to the cloud enables organizations to scale their applications and increase velocity. However, the increased velocity may come at the cost of greater management complexity and a higher volume of manual tasks. The complexity of managing the security policies and compliance for applications in the cloud is exacerbated when security teams use manual processes for change management. This complexity may lead to delays in implementation and operations, as well as security risks.
HashiCorp and Zscaler collaborated on a strategy to address this challenge using Consul-Terraform-Sync (CTS) for network infrastructure automation (NIA). This works by triggering a Terraform workflow that automatically updates Zscaler Private Access (ZPA) and Zscaler Internet Access resources based on changes it detects from the Consul catalog.
In this tutorial, you will automatically configure a ZPA application segment resource using Consul-Terraform-Sync (CTS). You can use the workflow presented as a blueprint to get familiar with the pattern and accelerate your own networking infrastructure management.
Prerequisites
To complete this tutorial, you need the following components as well as previous experience with Zscaler Private Access (ZPA).
- Terraform >= 1.0.0
- Consul-Terraform-Sync >= 0.7.0
- A Zscaler Private Access tenant.
- Zscaler Private Access API Credentials. Terraform and CTS will use these credentials to authenticate with ZPA cloud to create and manage your resources.
Clone example repository
First, clone the repository.
Next, navigate to the repository's root folder.
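The clone commands look like the following. The repository folder name `zpa-terraform-consul-webinar` appears later in this tutorial; the GitHub organization is a placeholder, so replace `<org>` with the organization that hosts the example.

```shell
# Replace <org> with the GitHub organization hosting the example repository
git clone https://github.com/<org>/zpa-terraform-consul-webinar.git
cd zpa-terraform-consul-webinar
```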
The `terraform` directory provisions the underlying infrastructure required for this tutorial and has two sub-directories:

- `base_ac` contains a dedicated example used to call each module to set up the environment on both AWS and the Zscaler Private Access cloud.
- `modules` contains dedicated modules to support the overall deployment of both AWS and ZPA constructs:
  - `terraform-zpa-app-connector-group` deploys a dedicated ZPA App Connector Group.
  - `terraform-zpa-provisioning-key` deploys a dedicated ZPA provisioning key associated with the `terraform-zpa-app-connector-group` module.
  - `terraform-zpa-server-group` deploys a dedicated ZPA Server Group.
  - `terraform-zsac-acvm-aws` deploys an App Connector AWS EC2 instance.
  - `terraform-zsac-asg-web-aws` deploys an AWS Auto Scaling group, which deploys the NGINX web servers.
  - `terraform-zsac-consul-server` deploys a dedicated Consul server and, optionally, a Vault server.
  - `terraform-zsac-iam-aws` creates the IAM roles required for the Zscaler App Connectors, Consul server, and Auto Scaling group.
  - `terraform-zsac-network-aws` deploys a dedicated network to support the consul-terraform-sync deployment.
  - `terraform-zsac-sg-aws` deploys the security groups required to allow proper connectivity.
Set up AWS credentials
The demo code deploys a Consul datacenter on AWS and the ZPA App Connector instances.
First, configure AWS credentials for your environment so that Terraform can authenticate with AWS to create resources.
Set your AWS access key ID as an environment variable.
Now, set your secret key.
If you have temporary AWS credentials, you must also set your `AWS_SESSION_TOKEN` as an environment variable.
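The exports look like the following; the values shown are placeholders for your own IAM user credentials.

```shell
# Placeholder values; substitute your own IAM user credentials
export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_ACCESS_KEY"

# Only required when using temporary credentials (for example, from AWS STS)
export AWS_SESSION_TOKEN="YOUR_AWS_SESSION_TOKEN"
```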
Note
If you don't have access to IAM user credentials, refer to the AWS provider documentation to learn how to use another authentication method.
Set up Zscaler Private Access credentials
Since the Terraform module uses the Zscaler Private Access Terraform provider to set up the prerequisites in the ZPA cloud, you must set the ZPA API key credentials using environment variables.
First, set your ZPA client ID.
Then, set your ZPA client secret.
Finally, set your ZPA customer ID.
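The ZPA Terraform provider reads its credentials from the `ZPA_CLIENT_ID`, `ZPA_CLIENT_SECRET`, and `ZPA_CUSTOMER_ID` environment variables. The values below are placeholders for your own API key credentials.

```shell
# Placeholder values; substitute your own ZPA API key credentials
export ZPA_CLIENT_ID="YOUR_ZPA_CLIENT_ID"
export ZPA_CLIENT_SECRET="YOUR_ZPA_CLIENT_SECRET"
export ZPA_CUSTOMER_ID="YOUR_ZPA_CUSTOMER_ID"
```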
For more details about how to create the ZPA API Key credentials, refer to the ZPA Terraform provider documentation.
Provision Infrastructure
The Terraform configuration for deploying the Consul datacenter is located under the `terraform/base_ac` folder.

Navigate to the `terraform/base_ac` folder.
Use the `terraform.tfvars.example` template file to create a `terraform.tfvars` file.
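Assuming you start from the repository root, the two steps above look like this:

```shell
cd terraform/base_ac
cp terraform.tfvars.example terraform.tfvars
```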
Edit the file to customize the attributes based on your desired values. Once you have configured this file to your environment, you can deploy the infrastructure with Terraform.
First, initialize your Terraform directory.
Then, apply the changes to provision the base infrastructure. Enter `yes` to confirm the run.
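These steps are the standard Terraform workflow:

```shell
terraform init
terraform apply
```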
This may take several minutes to complete. Once Terraform completes, it will return something similar to the following. This output displays the information that you can use to access your infrastructure.
Verify access to private web servers
Conceptually, all web servers are hosted behind the Zscaler Private Access infrastructure and not accessible to the outside world. For this reason, you need to be connected to the Zscaler cloud using the Zscaler Client Connector agent.
Connect to the Zscaler infrastructure using the Zscaler Client Connector Agent.
Once you have connected to the client connector agent, access one of the web servers provisioned as part of the autoscaling group.
Verify Consul services
Open the Consul UI by visiting the address listed in output `2) HTTP Access to Consul Server UI` in your browser. Then, verify that your Consul datacenter contains two instances of the NGINX web service.
Configure Vault (optional)
The Terraform module also deploys Vault to store the Zscaler Private Access API credentials. You only need to complete this section if you want to use Vault instead of environment variables to store the Zscaler credentials.
In addition to deploying the base infrastructure, the tutorial's Terraform module also auto-initializes and auto-unseals Vault. For this reason, you do not need to manually initialize Vault.
Note
The Vault configuration in this tutorial leverages the AWS auto-unseal option. As a result, when you created the base configuration, Terraform created an AWS KMS key.
Review the Terraform output to retrieve the SSH information. You should find something similar to the following.
Run the command in your terminal to connect to your Consul/Vault server. This command will be unique to your Terraform output.
In the Vault instance, retrieve the Vault recovery keys and the initial root token.
If Terraform deployed the Consul datacenter successfully, you should find the Vault as a registered service within the Consul UI.
In your terminal, set the Vault token and address environment variables so that Terraform can authenticate to the Vault server to create your credentials.

First, set the `VAULT_TOKEN` environment variable to the initial root token from the previous command.

Then, set the `VAULT_ADDR` environment variable to the Vault address from the Terraform output.
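The exports look like the following; both values are placeholders for the root token and address from your own deployment.

```shell
# Placeholder values; substitute the root token and Vault address
# from your own Terraform output
export VAULT_TOKEN="YOUR_ROOT_TOKEN"
export VAULT_ADDR="http://YOUR_VAULT_SERVER_ADDRESS:8200"
```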
Navigate to the Vault directory.
Open `zpa-terraform-consul-webinar/vault/main.tf`.
Replace all `REPLACE_ME` values in the `vault_kv_secret_v2.zpacloudprod` resource with your respective Zscaler credentials. This will store the credentials in Vault. You only need to update the `client_id`, `client_secret`, and `customer_id` attributes.
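As a sketch, the resource you are editing may look similar to the following. The `mount` value and overall shape are assumptions based on a typical KV v2 setup; only the resource name and the three attributes come from this tutorial.

```hcl
resource "vault_kv_secret_v2" "zpacloudprod" {
  mount = "kv"            # assumed KV v2 mount path
  name  = "zpacloudprod"
  data_json = jsonencode({
    client_id     = "REPLACE_ME" # your ZPA client ID
    client_secret = "REPLACE_ME" # your ZPA client secret
    customer_id   = "REPLACE_ME" # your ZPA customer ID
  })
}
```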
Initialize and apply the Terraform configuration to mount Vault and create a KV secret.
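Assuming you are still in the `vault` directory, this is again the standard Terraform workflow:

```shell
terraform init
terraform apply
```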
Configure CTS
Once you have deployed the base infrastructure, you can use Consul-Terraform-Sync (CTS) to monitor the Consul catalog for changes in your web server instances and modify your ZPA application segment configuration, reducing the manual tasks the ZPA administrator needs to perform.
First, navigate to the `zpa_nia/example` directory.
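Assuming you start from the repository root:

```shell
cd zpa_nia/example
```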
Then, configure CTS.
Note
If you deployed the infrastructure using the Terraform module, the module automatically creates the configuration file (`config.hcl`) in the `zpa_nia/example` folder. The following is an example configuration file generated by the Terraform module.
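The generated file's contents depend on your deployment, but a representative CTS configuration has roughly this shape. The addresses, task name, monitored service name, and module path below are placeholders, not the module's actual output.

```hcl
# Illustrative CTS configuration; all values are placeholders
consul {
  address = "localhost:8500"
}

driver "terraform" {}

task {
  name   = "zpa-application-segment"
  module = "./path/to/zpa-module"   # placeholder module path

  # Run the task when instances of the monitored service change
  condition "services" {
    names = ["web"]
  }
}
```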
Before you start the `consul-terraform-sync` daemon, provide the ZPA API credentials using environment variables or Vault so that CTS can authenticate properly.
If you deployed this tutorial using the Terraform module provided in the GitHub repository, the scenario uses Vault to store the Zscaler Private Access API credentials by default.
Start CTS
Finally, start CTS.
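Depending on your CTS version, the daemon is started by pointing it at the configuration file; newer releases also accept a `start` subcommand (`consul-terraform-sync start -config-file config.hcl`).

```shell
consul-terraform-sync -config-file config.hcl
```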
The CTS execution will automatically create a new application segment within the Zscaler Private Access portal, containing the FQDN of each web server along with its corresponding service ports.
Verify NIA automation workflow
This scenario will trigger CTS by increasing the number of servers allocated to the autoscaling group. CTS will update the ZPA application segment configuration to include the new NGINX instances created by increasing the autoscaling group capacity.
In `terraform/base_ac`, open the `terraform.tfvars` file and update the `desired_capacity` and `max_size` values for the autoscaling group variables from `2` to `4`. The following snippet shows the original autoscaling group variables.
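The exact variable names in your `terraform.tfvars` may differ; a representative snippet of the original values looks like this:

```hcl
# Autoscaling group sizing (variable names are illustrative)
desired_capacity = 2
max_size         = 2
```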
Apply the changes to increase the number of servers in your autoscaling group.
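From the `terraform/base_ac` directory, re-run the apply and confirm with `yes` when prompted:

```shell
terraform apply
```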
Once the changes are applied on AWS, Consul will show the new instances on the Services tab.
CTS will detect the change from the Consul catalog and update the application segments to reflect the new web
services.
Verify Zscaler resources
Once all the components are deployed and connectivity is fully established, you can test access to the backend applications by connecting to the Zscaler infrastructure using the Zscaler Client Connector Agent.
The NGINX servers are already in an Auto Scaling group with Consul agents running and sending all information to Consul. When you destroy or create an NGINX server in the Auto Scaling group, Consul automatically deregisters the removed servers and triggers Consul-Terraform-Sync, which in turn updates the ZPA Application Segment domain entries by removing the servers. Access the NGINX web server and verify that traffic is balanced across four instances.
Clean your environment
Once you have completed the tutorial, you can clean up your environment by stopping CTS and using Terraform to destroy all tutorial-related infrastructure.
Stop CTS
You can stop CTS by:

- entering `CTRL+C` in the shell running the daemon, or
- sending the `SIGINT` signal to the process.

Before stopping, the daemon logs the requested shutdown.
Destroy resources
If you used the Vault instance, navigate to the Terraform configuration for Vault.
Then, destroy the resources associated with Vault. This should destroy two resources.
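Assuming you start from the repository root, the Vault teardown looks like this:

```shell
cd vault
terraform destroy
```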
Next, navigate to the directory that contains the Terraform configuration for your base infrastructure.

Then, destroy the resources. This should destroy approximately 56 resources.
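Assuming the repository layout described earlier (with `vault` and `terraform/base_ac` both at the repository root), the teardown looks like this:

```shell
cd ../terraform/base_ac
terraform destroy
```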
Optionally, clean the Git repository to remove files created during the tutorial.
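One way to do this, assuming you have no other local changes you want to keep, is to remove untracked files and restore any modified tracked files:

```shell
# Remove untracked files and directories created during the tutorial
git clean -fd

# Restore modified tracked files (such as terraform.tfvars edits)
git checkout -- .
```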
Next steps
In this tutorial, you learned how to automate the configuration of your Zscaler Private Access tenant using CTS to create and update application segment resources.
Review the CTS tutorial to learn how to secure the CTS instance and other best practices to integrate it into a production environment.
To learn even more about CTS and Zscaler, check out the following resources: