Imperva eDSF Kit is a Terraform toolkit designed to automate the deployment of Imperva's Data Security Fabric.
eDSF Kit enables you to deploy the full suite of DSF sub-products - DSF Hub & Agentless Gateway (formerly Sonar), DAM (Data Activity Monitoring) MX and Agent Gateway, and DRA (Data Risk Analytics) Admin and Analytics.
Currently, eDSF Kit supports deployments on AWS cloud. In the near future, it will support other major public clouds, on-premises (vSphere) and hybrid environments.
This guide is intended for Imperva Sales Engineers (SEs) for the purpose of Proof-of-Concept (POC) demonstrations and for preparing for these demonstrations, also known as a Lab.
It is also intended for Imperva Professional Services (PS) and customers for actual deployments of DSF.
This guide covers the following main topics. Additional guides are referenced throughout, as listed in the Quick Links section below.
- How to deploy Imperva’s Data Security Fabric (DSF) with step-by-step instructions.
- How to verify that the deployment was successful using the eDSF Kit output.
- How to undeploy DSF with step-by-step instructions.
This guide uses several text styles and call-out features for enhanced readability. Their meanings are described in the table below.
Convention | Description |
Code, commands or user input | |
Instruction to change code, commands or user input | |
Placeholder | ${placeholder}: Used within commands to indicate that the user should replace the placeholder with a value, including the $, { and }. |
Hyperlinks | Clickable URLs embedded within the guide are blue and underlined. E.g., www.imperva.com |
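To make the placeholder convention concrete, here is a minimal sketch (the region value is illustrative): the entire ${...} token, including the $ and braces, is replaced with your value.

```shell
# The guide writes:  export AWS_REGION=${region}
# You type the replacement for the whole token, $ and braces included:
export AWS_REGION=us-east-1
echo "$AWS_REGION"
```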
This guide references the following information and links, some of which are available via the Documentation Portal on the Imperva website: https://docs.imperva.com. (Login required)
Link | Details |
Data Security Fabric v1.0 | DSF Overview |
Sonar v4.12 | DSF Components Overview |
Imperva Terraform Modules Registry | |
eDSF Kit GitHub Repository | |
Download Git | |
Download Terraform | Latest Supported Terraform Version: 1.5.x. Using a higher version may result in unexpected behavior or errors. |
Request access to DSF installation software - Request Form | Grants access for a specific AWS account to the DSF installation software. |
The following table lists the released eDSF Kit versions, their release date and a high-level summary of each version's content.
Date | Version | Details |
3 Nov 2022 | 1.0.0 | First release for SEs. Beta. |
20 Nov 2022 | 1.1.0 | Second Release for SEs. Beta. |
3 Jan 2023 | 1.2.0 | 1. Added a multi-account example. 2. Changed the modules' interface. |
19 Jan 2023 | 1.3.4 | 1. Refactored directory structure. 2. Released to the Terraform Registry. 3. Supported DSF Hub / Agentless Gateway on a Red Hat 7 AMI. 4. Restricted permissions for Sonar installation. 5. Added the module's version to the examples. |
26 Jan 2023 | 1.3.5 | 1. Enabled creating RDS MsSQL with synthetic data for POC purposes. 2. Fixed manual and automatic installer machine deployments. |
5 Feb 2023 | 1.3.6 | Supported SSH proxy for DSF Hub / Agentless Gateway in modules: hub, agentless-gw, federation, poc-db-onboarder. |
28 Feb 2023 | 1.3.7 |
1. Added the option to provide a custom security group id for the DSF Hub and the Agentless Gateway via the 'security_group_id' variable.
2. Restricted network resources and general IAM permissions.
3. Added a new installation example - single_account_deployment.
4. Added the minimum required Terraform version to all modules.
5. Added the option to provide EC2 AMI filter details for the DSF Hub and the Agentless Gateway via the 'ami' variable.
6. For a user-provided AMI for the DSF node (DSF Hub and Agentless Gateway) that denies execute access in the '/tmp' folder, added the option to specify an alternative path via the 'terraform_script_path_folder' variable.
7. Passed the password of the DSF node via AWS Secrets Manager.
8. Added the option to provide a custom S3 bucket location for the Sonar binaries via the 'tarball_location' variable.
9. Bug fixes. |
16 Mar 2023 | 1.3.9 |
1. Added support for deploying a DSF node on an EC2 without outbound internet access by providing a custom AMI with the required dependencies and creating VPC endpoints.
2. Replaced the installer machine manual and automatic deployment modes with a new and simplified single installer machine mode.
3. Added support for storing the Terraform state in an AWS S3 bucket.
4. Made adjustments to support Terraform version 1.4.0. |
27 Mar 2023 | 1.3.10 |
1. Added support for supplying a custom key-pair for ssh to the DSF Hub and the Agentless Gateway.
2. Added support for the new Sonar public patch '4.10.0.1'. |
3 Apr 2023 | 1.4.0 |
1. Added support for the new Sonar version '4.11'.
2. Added support for Agentless Gateway HADR. |
13 Apr 2023 | 1.4.1 | Bug fixes. |
17 Apr 2023 | 1.4.2 | Updated DSFKit IAM required permissions. |
20 Apr 2023 | 1.4.3 |
1. First Alpha deployment of Agent Gateway and MX. It can be used with caution.
2. Updated DSFKit IAM required permissions. |
2 May 2023 | 1.4.4 |
1. Minimum supported Sonar version is now 4.11. To deploy earlier versions, work with earlier DSFKit versions.
2. In the POC examples, onboarded the demo databases to the Agentless Gateway instead of the DSF Hub. |
16 May 2023 | 1.4.5 |
1. Defined separate security groups for the DSF node according to the traffic source type (e.g., web console, Hub).
2. Added the option to provide custom secrets for the DSF Hub and the Agentless Gateway. 3. Updated the POC multi_account_deployment example. |
28 May 2023 | 1.4.6 |
1. Replaced IAM Role variable with instance profile.
2. Removed usage of AWS provider's default_tags feature. 3. First Alpha deployment of DRA. It can be used with caution. 4. Alpha deployment example of full DSF - Sonar, DAM and DRA. It can be used with caution. |
11 Jun 2023 | 1.4.7 |
1. Triggered the first replication cycle as part of an HADR setup.
2. Added LVM support (DSF Hub and Agentless GW). 3. Fixed error while onboarding MSSQL RDS. |
14 Jun 2023 | 1.4.8 |
1. Fixed typo in the required IAM permissions.
2. Added support for Terraform version 1.5.0. 3. Fixed global tags. |
4 Jul 2023 | 1.5.0 |
1. Added support for the new DSF version '4.12'.
2. Released full DSF POC example. 3. Bug fixes. |
18 Jul 2023 | 1.5.1 |
1. Released full DSF installation example.
2. Added support for DAM activation code in addition to the already supported option of a license file. 3. Added security groups samples to the documentation. 4. Improvements and bug fixes. |
1 Aug 2023 | 1.5.2 |
1. Added DSF instances' required IAM permissions samples to the documentation.
2. Improvements and bug fixes. |
16 Aug 2023 | 1.5.3 | Improvements and bug fixes. |
11 Sep 2023 | 1.5.4 | Improvements and bug fixes. |
eDSF Kit offers several deployment modes:
- CLI Deployment Mode: This mode offers a straightforward deployment option that relies on running a Terraform script on the deployment client's machine, which must be a Linux machine. For more details, refer to CLI Deployment Mode.
- Installer Machine Deployment Mode: This mode is similar to the CLI mode, except that Terraform runs on an EC2 machine which the user creates, instead of on the deployment client's machine. This mode can be used if a Linux machine is not available, or if eDSF Kit cannot be run on the available Linux machine, e.g., because it lacks permissions to access the deployment environment. For more details, refer to Installer Machine Deployment Mode.
- Terraform Cloud Deployment Mode: This mode makes use of Terraform Cloud, a service that exposes a dedicated UI to create and destroy resources via Terraform. This mode can be used when you don't want to install any software on the deployment client's machine. It can be used to demo DSF on an Imperva AWS account or on a customer's AWS account (if the customer supplies credentials). For more details, refer to Terraform Cloud Deployment Mode.
The first step in the deployment is to choose the deployment mode most appropriate to you. If you need more information to decide on your preferred mode, refer to the detailed instructions for each mode here.
Before using eDSF Kit to deploy DSF, it is necessary to satisfy a set of prerequisites.
- Create an AWS User with secret and access keys which comply with the required IAM permissions (see IAM Permissions for Running eDSF Kit section).
- The deployment requires access to the DSF installation software. Click here to request access.
- Only if you chose the CLI Deployment Mode, download Git here.
- Only if you chose the CLI Deployment Mode, download Terraform here. On macOS systems, it is recommended to use the "Package Manager" option during installation.
- Latest Supported Terraform Version: 1.5.x. Using a higher version may result in unexpected behavior or errors.
- jq - Command-line JSON processor.
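As a convenience, the CLI-mode prerequisites can be verified with a short pre-flight script. This is a sketch, not part of eDSF Kit; the tool names and the 1.5.x version bound come from this guide.

```shell
#!/usr/bin/env bash
# Sketch: check that the CLI-mode prerequisites are installed.
for tool in git terraform jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
# The latest supported Terraform version is 1.5.x; a higher version
# may result in unexpected behavior or errors.
command -v terraform >/dev/null 2>&1 && terraform version | head -n 1 || true
```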
An important thing to understand about DSF deployment is that there are many variations on what can be deployed, e.g., with or without DRA, the number of Agentless Gateways, with or without HADR, the number of VPCs, etc.
We provide several out-of-the-box Terraform recipes, which we call "examples", that are already configured to deploy common DSF environments. You can use an example as is, or customize it to accommodate your deployment requirements.
These examples can be found in the eDSF Kit GitHub Repository under the examples directory. Some examples are intended for Lab or POC and others for actual DSF deployments by Professional Services and customers.
For more details about each example, click on the example name.
Example | Purpose | Description | Download |
Sonar Basic Deployment | Lab/POC | A DSF deployment with a DSF Hub, an Agentless Gateway, federation, networking and onboarding of a MySQL DB. | sonar_basic_deployment.zip |
Sonar HADR Deployment | Lab/POC | A DSF deployment with a DSF Hub, an Agentless Gateway, DSF Hub and Agentless Gateway HADR, federation, networking and onboarding of a MySQL DB. | sonar_hadr_deployment.zip |
Sonar Single Account Deployment | PS/Customer | A DSF deployment with a DSF Hub HADR, an Agentless Gateway and federation. The DSF nodes (Hubs and Agentless Gateway) are in the same AWS account and the same region. It is mandatory to provide as input to this example the subnets to deploy the DSF nodes on. | sonar_single_account_deployment.zip |
Sonar Multi Account Deployment | PS/Customer | A DSF deployment with a DSF Hub, an Agentless Gateway and federation. The DSF nodes (Hub and Agentless Gateway) are in different AWS accounts. It is mandatory to provide as input to this example the subnets to deploy the DSF nodes on. | sonar_multi_account_deployment.zip |
DSF Deployment | Lab/POC | A full DSF deployment with DSF Hub and Agentless Gateways (formerly Sonar), DAM (MX and Agent Gateways), DRA (Admin and DRA Analytics), and Agent and Agentless audit sources. | dsf_deployment.zip |
DSF Single Account Deployment | PS/Customer | A full DSF deployment with DSF Hub and Agentless Gateways (formerly Sonar), DAM (MX and Agent Gateways) and DRA (Admin and DRA Analytics). | dsf_single_account_deployment.zip |
If you are familiar with Terraform, you can go over the example code and see what it consists of. The examples make use of the building blocks of the eDSF Kit - the modules, which can be found in the Imperva Terraform Modules Registry. As a convention, the eDSF Kit modules' names have a 'dsf' prefix.
Fill out the eDSF Kit pre-deployment questionnaire Google Form if you need help choosing or customizing an example to fit your use case.
When using eDSF Kit, there is no need to manually download the DSF installation software; eDSF Kit does that automatically, based on the Sonar, DAM and DRA versions specified in the Terraform example. To be able to download the installation software during deployment, you must request access beforehand. See Prerequisites.
This includes the following versions of the DSF sub-products:
DSF Sub-Product | Default Version | Supported Versions |
Sonar | 4.12.0.10 | 4.9 and up (restrictions on modules may apply) |
DAM | 14.12.1.10 | 14.11.1.10 and up; 14.7.x.y (LTS) |
DRA | 4.12.0.10 | 4.11.0.10 and up |
Relevant variables are:
variable "sonar_version" {
type = string
}
variable "dam_version" {
type = string
}
variable "dra_version" {
type = string
}
When specifying Sonar and DRA versions, both long and short version formats are supported, for example, 4.12.0.10 or 4.12. The short format maps to the latest patch.
When specifying a DAM version, only long format is supported.
Make sure that the version you are using is supported by all the modules which are part of your deployment. To see which versions are supported by each module, refer to the specific module's README. (For example, DSF Hub module's README)
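The format rules above can be sketched as a quick shell check. This is a convenience illustration, not part of eDSF Kit.

```shell
# Long format: x.y.z.w (the only form DAM accepts).
# Short format: x.y (Sonar/DRA; maps to the latest patch).
is_long_version()  { [[ "$1" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; }
is_short_version() { [[ "$1" =~ ^[0-9]+\.[0-9]+$ ]]; }

is_long_version  14.12.1.10 && echo "14.12.1.10: long format, valid for DAM"
is_short_version 4.12       && echo "4.12: short format, valid for Sonar/DRA"
is_long_version  4.12       || echo "4.12: not long format, invalid for DAM"
```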
After you have chosen the deployment mode, follow the step-by-step instructions below to ensure a successful deployment. If you have any questions or issues during the deployment process, please contact Imperva Technical Support.
This mode makes use of the Terraform Command Line Interface (CLI) to deploy and manage environments. Terraform CLI uses a bash script and therefore requires a Linux/Mac machine.
The first thing to do in this deployment mode is to download Terraform.
NOTE: Update the values for the required parameters to complete the installation: example_name, aws_access_key_id, aws_secret_access_key and region.
1. Download the zip file of the example you've chosen (see the Choosing the Example/Recipe that Fits Your Use Case section) from the eDSF Kit GitHub Repository. E.g., if you chose the "sonar_basic_deployment" example, download sonar_basic_deployment.zip.
2. Unzip the zip file in CLI or using your operating system's UI. For example, in CLI:
unzip sonar_basic_deployment.zip >>>> Change this command depending on the example you chose
3. In CLI, navigate to the directory which contains the Terraform files. For example:
cd sonar_basic_deployment >>>> Change this command depending on the example you chose
4. Optionally make changes to the example's Terraform code to fit your use case. If you need help doing that, please contact Imperva Technical Support.
5. Terraform uses the AWS shell environment for AWS authentication. More details on how to authenticate with AWS are here. For simplicity, in this example we will use environment variables:
export AWS_ACCESS_KEY_ID=${access_key}
export AWS_SECRET_ACCESS_KEY=${secret_key}
export AWS_REGION=${region}
>>>> Fill the values of the access_key, secret_key and region placeholders, e.g., export AWS_ACCESS_KEY_ID=5J5AVVNNHYY4DM6ZJ5N46
6. Run:
terraform init
7. Run:
terraform apply -auto-approve
This should take about 30 minutes.
8. Depending on your deployment:
To access the DSF Hub, extract the web console admin password and DSF URL using:
terraform output "web_console_dsf_hub"
To access the DAM, extract the web console admin password and DAM URL using:
terraform output "web_console_dam"
To access the DRA Admin, extract the web console admin password and DRA URL using:
terraform output "web_console_dra"
9. Access the DSF Hub, DAM or DRA web console from the output in the previous step by entering the outputted URL into a web browser, “admin” as the username, and the outputted admin_password value. Note: there is no initial login password for DRA.
The CLI Deployment is now complete and a functioning version of DSF is now available.
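The web console outputs above are JSON objects, and the jq prerequisite can pull out individual fields. The field names below ("public_url", "admin_password") are assumptions based on this guide's Terraform Cloud section; confirm them against your own `terraform output -json` result.

```shell
# Sketch: extract the URL and password from a web console output object.
# The JSON literal stands in for: terraform output -json web_console_dsf_hub
output='{"public_url":"https://203.0.113.10:8443","admin_password":"s3cr3t"}'
echo "$output" | jq -r '.public_url'
echo "$output" | jq -r '.admin_password'
```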
This mode is similar to the CLI mode, except that Terraform runs on an EC2 machine which the user creates, instead of on the deployment client's machine. This mode can be used if a Linux machine is not available, or if eDSF Kit cannot be run on the available Linux machine, e.g., because it lacks permissions to access the deployment environment.
1. In AWS, choose a region for the installer machine, keeping in mind that the machine should have access to the DSF environment that you want to deploy, and should preferably be in proximity to it.
2. Launch an instance: Search for the RHEL-8.6.0_HVM-20220503-x86_64-2-Hourly2-GP2 image and press Enter.
3. Select the t2.medium 'Instance type', or t3.medium if T2 is not available in the region.
4. Create or select an existing 'Key pair' that you will later use to SSH to the installer machine.
5. In the Network settings panel, make your configurations while keeping in mind that the installer machine should have access to the DSF environment that you want to deploy, and that the deployment client's machine should have access to the installer machine.
6. Copy and paste the contents of this bash script into the User data textbox.
7. Click on Launch Instance. At this stage, the installer machine is initializing and downloading the necessary dependencies.
8. When launching is completed, SSH to the installer machine from the deployment client's machine:
ssh -i ${key_pair_file} ec2-user@${installer_machine_public_ip} >>>> Replace key_pair_file with the name of the file from step 4, and installer_machine_public_ip with the public IP of the installer machine, which should now be available in the AWS EC2 console. E.g., ssh -i a_key_pair.pem [email protected]
NOTE: You may need to decrease the access privileges of the key_pair_file in order to be able to use it for SSH. For example:
chmod 400 a_key_pair.pem
9. Download the zip file of the example you've chosen (see the Choosing the Example/Recipe that Fits Your Use Case section) from the eDSF Kit GitHub Repository. E.g., if you chose the "sonar_basic_deployment" example, download sonar_basic_deployment.zip. Run one of:
wget https://github.com/imperva/dsfkit/raw/1.5.4/examples/poc/sonar_basic_deployment/sonar_basic_deployment.zip
wget https://github.com/imperva/dsfkit/raw/1.5.4/examples/poc/sonar_hadr_deployment/sonar_hadr_deployment.zip
wget https://github.com/imperva/dsfkit/raw/1.5.4/examples/installation/sonar_single_account_deployment/sonar_single_account_deployment.zip
wget https://github.com/imperva/dsfkit/raw/1.5.4/examples/installation/sonar_multi_account_deployment/sonar_multi_account_deployment.zip
wget https://github.com/imperva/dsfkit/raw/1.5.4/examples/poc/dsf_deployment/dsf_deployment.zip
wget https://github.com/imperva/dsfkit/raw/1.5.4/examples/installation/dsf_single_account_deployment/dsf_single_account_deployment.zip
10. Continue by following the CLI Deployment Mode beginning at step 2.
IMPORTANT: Do not destroy the installer machine until you are done and have destroyed all other resources. Otherwise, there may be leftovers in your AWS account that will require manual deletion, which is a tedious process. For more information see the Installer Machine Undeployment Mode section.
The Installer Machine Deployment is now complete and a functioning version of DSF is now available.
This deployment mode uses the Terraform Cloud service, which allows deploying and managing deployments via a dedicated UI. Deploying the environment is easily triggered by clicking a button within the Terraform interface, which then pulls the required code from the Imperva GitHub repository and automatically runs the scripts remotely.
This deployment mode can be used to demonstrate DSF in a customer's Terraform Cloud account or the Imperva Terraform Cloud account, which is accessible for internal use (SEs, QA, Research, etc.), and can be used to deploy/undeploy POC environments on AWS accounts owned by Imperva.
It is required that you have access to a Terraform Cloud account.
If you want to use Imperva's Terraform Cloud account, contact Imperva's Technical Support.
NOTE: Currently this deployment mode doesn't support customizing the chosen example's code.
1. Connect to Terraform Cloud: Connect to the desired Terraform Cloud account, either the internal Imperva account or a customer account if one is available.
2. Create a new workspace: Complete these steps to create a new workspace in Terraform Cloud that will be used for the DSF deployment.
   - Click the + New workspace button in the top navigation bar to open the Create a new Workspace page.
   - Choose Version Control Workflow from the workflow type options.
   - Choose imperva/dsfkit as the repository. If this option is not displayed, type imperva/dsfkit in the “Filter” textbox.
   - Name the workspace in the following format:
   dsfkit-${customer_name}-${environment_name} >>>> Fill the values of the customer_name and environment_name placeholders, e.g., dsfkit-customer1-poc1
   - Enter the path to the example you've chosen (see the Choosing the Example/Recipe that Fits Your Use Case section), e.g., “examples/poc/sonar_basic_deployment”, into the Terraform working directory input field.
   >>>> Change the directory in the above screenshot depending on the example you chose
   - To avoid automatic Terraform configuration changes when the GitHub repo updates, set the following values under “Run triggers”. As displayed in the above screenshot, the Custom Regular Expression field value should be “23b82265”.
   - Click “Create workspace” to finish and save the new eDSF Kit workspace.
3. Add the AWS variables: The next few steps will configure the required AWS variables.
   - Once the eDSF Kit workspace is created, click the "Go to workspace overview" button.
   - Add the following workspace variables by entering the name, value, category and sensitivity as listed below.
   Variable Name | Value | Category | Sensitive |
   AWS_ACCESS_KEY_ID | Your AWS credentials access key | Environment variable | True |
   AWS_SECRET_ACCESS_KEY | Your AWS credentials secret key | Environment variable | True |
   AWS_REGION | The AWS region you wish to deploy into | Environment variable | False |
   >>>> Change the AWS_REGION value in the above screenshot to the AWS region you want to deploy in
4. Run the Terraform: The following steps complete setting up the eDSF Kit workspace and running the example's Terraform code.
   - Click the Actions dropdown button in the top navigation bar, and select the "Start new run" option from the list.
   - Enter a unique, alphanumeric name for the run, and click the "Start run" button.
   >>>> Change the "Reason for starting run" value in the above screenshot to a run name of your choosing
   - Wait for the run to complete; it should take about 30 minutes and is indicated by "Apply finished".
5. Inspect the run result: These steps provide the necessary information to view the run output and access the deployed DSF.
   - Scroll down the "Apply Finished" area to see which resources were created.
   - Scroll to the bottom to find the "State versions created" link, which can be helpful when investigating issues.
   - Scroll up to view the "Outputs" of the run, which should already be expanded. Depending on your deployment, locate the "web_console_dsf_hub", "web_console_dam" or "web_console_dra" JSON object. Copy the "public_url" or "private_url" and "admin_password" fields' values for later use (there is no initial login password for DRA).
   - Enter the "public_url" or "private_url" value you copied into a web browser. For example, enter the "web_console_dsf_hub" URL to access the Imperva Data Security Fabric (DSF) login screen.
   - Sonar is installed with a self-signed certificate; as a result, you may see a warning notification when opening the web page. For example, in Google Chrome, click "Proceed to domain.com (unsafe)".
   - Enter “admin” into the Username field and the "admin_password" value you copied into the Password field. Click "Sign In".
The Terraform Cloud Deployment is now complete and a functioning version of DSF is now available.
To create AWS resources in an AWS account, you need to provide an AWS user or role with the permissions required to run the eDSF Kit Terraform. The permissions are separated into different policies. Use the relevant policies according to your needs:
- For general required permissions, such as creating an EC2 instance, a security group, etc., use the permissions specified here - general required permissions.
- To create network resources such as a VPC, NAT Gateway, Internet Gateway, etc., use the permissions specified here - create network resources permissions.
- To onboard a MySQL RDS with CloudWatch configured, use the permissions specified here - onboard MySQL RDS permissions.
- To onboard a MsSQL RDS with audit configured and with synthetic data, use the permissions specified here - onboard MsSQL RDS with synthetic data permissions.
NOTE: When running the deployment with a custom 'deployment_name' variable, you should ensure that the corresponding condition in the AWS permissions of the user who runs the deployment reflects the new custom variable.
NOTE: The permissions specified in option 2 are irrelevant for customers who prefer to use their own network objects, such as VPC, NAT Gateway, Internet Gateway, etc.
If you are running an installation example and want to provide your own instance profiles as variables, you can find samples of the required permissions here - DSF Instances Permissions.
If you are running an installation example and want to provide your own security groups as variables, you can find samples of the required security groups rules here - Security Groups samples.
Depending on the deployment mode you chose, follow the undeployment instructions for the same mode to completely remove Imperva DSF from AWS.
The undeployment process should be followed whether or not the deployment was successful. In case of failure, Terraform may have deployed some resources before failing, and you will want these removed.
1. Navigate to the directory which contains the Terraform files. For example:
cd sonar_basic_deployment >>>> Change this command depending on the example you chose
2. Terraform uses the AWS shell environment for AWS authentication. More details on how to authenticate with AWS are here. For simplicity, in this example we will use environment variables:
export AWS_ACCESS_KEY_ID=${access_key}
export AWS_SECRET_ACCESS_KEY=${secret_key}
export AWS_REGION=${region}
>>>> Fill the values of the access_key, secret_key and region placeholders, e.g., export AWS_ACCESS_KEY_ID=5J5AVVNNHYY4DM6ZJ5N46
3. Run:
terraform destroy -auto-approve
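After the destroy completes, a quick sanity check confirms nothing is left in the Terraform state. This is a sketch, not part of eDSF Kit, and assumes you are still in the example directory.

```shell
# Sketch: an empty state list means the undeploy removed everything.
remaining=$(terraform state list 2>/dev/null | wc -l)
if [ "$remaining" -eq 0 ]; then
  echo "state is empty - undeploy is clean"
else
  echo "WARNING: $remaining resources still in state"
fi
```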
1. SSH to the installer machine from the deployment client's machine:
ssh -i ${key_pair_file} ec2-user@${installer_machine_public_ip} >>>> Fill the values of the key_pair_file and installer_machine_public_ip placeholders
2. Continue by following the CLI Undeployment Mode steps.
3. Wait for the environment to be destroyed.
4. Terminate the EC2 installer machine via the AWS Console.
1. To undeploy the DSF deployment, click Settings and find "Destruction and Deletion" in the navigation menu to open the "Destroy infrastructure" page. Ensure that the "Allow destroy plans" toggle is selected, and click the Queue Destroy Plan button to begin.
2. The DSF deployment is now destroyed and the workspace may be reused if needed. If the workspace is not being reused, it may be removed with “Force delete from Terraform Cloud”, which can be found under Settings.
NOTE: Do not remove the workspace before the deployment is completely destroyed. Doing so may lead to leftovers in your AWS account that will require manual deletion, which is a tedious process.
Information about additional topics can be found in specific examples' READMEs, when relevant.
For example: Sonar Single Account Deployment
These topics include:
- Storing Terraform state in S3 bucket
- Working with DSF Hub and Agentless Gateway without outbound internet access
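For the S3 state topic, a typical Terraform backend block looks like the sketch below. The bucket, key and region values are placeholders, and the relevant example's README describes the supported way to enable this in eDSF Kit.

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder
    key    = "edsf/terraform.tfstate"    # placeholder
    region = "us-east-1"                 # placeholder
  }
}
```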
Review the following issues and troubleshooting remediations.
Title | Error message | Remediation |
VPC quota exceeded | error creating EC2 VPC: VpcLimitExceeded: The maximum number of VPCs has been reached | Remove unneeded vpc via vpc dashboard, or increase vpc quota via this page and run again. |
Elastic IP quota exceeded | Error creating EIP: AddressLimitExceeded: The maximum number of addresses has been reached | Remove unneeded Elastic IPs via this dashboard, or increase Elastic IP quota via this page and run again. |
Option Group quota exceeded | Error: "Cannot create more than 20 option groups" | Remove unneeded Option Groups here, or increase the Option Group quota via this page and run again. (Remediation is similar to the other quota-exceeded errors.) |
AWS glitch | Error: creating EC2 Instance: InvalidNetworkInterfaceID.NotFound: The networkInterface ID 'eni-xxx does not exist | Rerun “terraform apply”. |
AWS ENI deletion limitation | error deleting security group: DependencyViolation: resource sg-xxxxxxxxxxxxx has a dependent object | According to AWS support, an ENI can take up to 24 hours to be deleted. Suggestion: Try to delete the ENI from AWS console or wait for 24 hours. |
Blocked by Security Group or Network | timeout - last error: dial tcp x.y.z.w:22: i/o timeout or timeout - last error: Error connecting to bastion: dial tcp x.y.z.w:22: connect: connection timed out | Check your security group and network configuration. |
Invalid EC2 SSH Keys | timeout - last error: Error connecting to bastion: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain | Check the SSH keys you are using and the SSH key variables' values that you are passing. |
No outbound internet access | Error: No outbound internet access. Either enable outbound internet access, or make sure x is installed in the base ami | If you intended the DSF node to have outbound intent access, then make sure the private subnets have routing to a NAT gateway or equivalent. If you didn't intend the DSF node to have outbound internet access, follow the instructions for 'Deploying DSF Nodes without Outbound Internet Access' in your example's README. |
Sonar HADR setup internal error | Replication failed! Replication script exited with code 1 | Contact Imperva's Technical Support. |
Sonar federation internal error | python_commons.http_client.UnexpectedStatusCode: Failed to run: federated_asset_connection_sync. Check /data_vol/sonar-dsf/jsonar/logs/sonarfinder/catalina.out for details., status: 500, data: None. See log "/data_vol/sonar-dsf/jsonar/logs/sonarg/federated.log" for details | Contact Imperva's Technical Support. |
DAM configuration script exits with status code 28 | : exit status 28. Output: + set -e | Rerun “terraform apply”. |