Deploy with Private EKS
By default, the Qrvey MultiPlatform deployment creates an EKS cluster with a public control plane endpoint. This allows the deployment to be executed from your local machine. However, for enhanced security, you can deploy Qrvey with a fully private EKS cluster where the control plane is only accessible from within the VPC.
Note: Private EKS cluster deployment is supported in Qrvey v9.2.2 and later.
Overview
When deploying with a private EKS cluster:
- The EKS control plane API endpoint is not publicly accessible.
- All deployment operations must be performed from within the VPC.
- You need an existing VPC with the proper subnet configuration.
- You need a bastion host (EC2 instance) or similar service (such as AWS CodeBuild) within the VPC to run the deployment.
This approach ensures that the Kubernetes cluster control plane remains isolated and accessible only from resources within your VPC.
Before You Begin
Before deploying with a private EKS cluster, ensure you have the following:
Custom VPC Configuration
An existing VPC configured with the following subnets:
- 2 Public Subnets (minimum): For NAT Gateway and load balancers.
- 4 Private Subnets (recommended): For EKS node groups and control plane.
  - At least 2 of these will be designated as control plane subnets.
- 2 Intra Subnets (minimum): For RDS database instances.
The VPC should have a minimum CIDR of /22 to accommodate all resources.
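As a quick sanity check on that sizing, a /22 provides 2^(32-22) = 1024 addresses. One possible split (the subnet masks here are illustrative, not prescribed) fits all of the required subnets with room to spare:

```shell
# Back-of-the-envelope address math for a /22 VPC (illustrative masks).
total=$((2 ** (32 - 22)))          # 1024 addresses in the VPC
private=$((4 * 2 ** (32 - 25)))    # 4 private /25 subnets = 512
public=$((2 * 2 ** (32 - 26)))     # 2 public /26 subnets  = 128
intra=$((2 * 2 ** (32 - 26)))      # 2 intra /26 subnets   = 128
used=$((private + public + intra))
echo "total=$total used=$used spare=$((total - used))"
```

Note that AWS reserves five addresses in every subnet, so leave headroom beyond this arithmetic.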
Bastion Host or Deployment Instance
An EC2 instance (or alternative service like AWS CodeBuild) within the same VPC with:
- Docker installed: Required to run the Qrvey Terraform deployment container.
- Access to Qrvey Registry: The instance must be able to pull images from qrvey.azurecr.io.
- Proper Security Group: The instance's security group must be authorized to communicate with the EKS cluster control plane.
- Network Access: The instance should be in one of the private subnets with internet access through a NAT Gateway (to pull Docker images).
- SSH Access: If using a bastion host, an SSH key pair for accessing the bastion instance from your local machine.
Standard Deployment Prerequisites
All standard prerequisites from the Deployment on AWS guide, including:
- IAM credentials with required permissions.
- S3 bucket for Terraform state storage.
- Qrvey registry credentials.
- SMTP configuration.
- DNS Hosted Zone (optional).
Initial Deployment with Private EKS
Follow these steps to deploy a new Qrvey instance with a private EKS cluster.
Step 1: Prepare Your Local Machine
- Ensure Docker is installed on your local machine and that you have the Qrvey registry credentials.
- On your local machine, create a `config.json` file with the base configuration, following the Deployment on AWS guide.
- Add the private EKS-specific configuration to your `config.json` file under the `"variables"` object:

```json
{
  "account_config": {
    "access_key_id": "<ACCESS_KEY>",
    "secret_access_key": "<SECRET_KEY>",
    "region": "<REGION>",
    "bucket": "<S3_BUCKET_TO_STORE_THE_STATE_FILE>",
    "key": "<FILE_NAME>"
  },
  "variables": {
    "registry_user": "<REGISTRY_USER_PROVIDED_BY_QRVEY_SUPPORT>",
    "registry_key": "<REGISTRY_KEY_PROVIDED_BY_QRVEY_SUPPORT>",
    "qrvey_chart_version": "<QRVEY_VERSION>",
    "enable_location_services": true,
    "es_config": {
      "size": "large",
      "count": 1
    },
    "customer_info": {
      "firstname": "",
      "lastname": "",
      "email": "email@company.com",
      "company": "<COMPANY_NAME>"
    },
    "initial_admin_email": "admin@company.tld",
    // Private EKS Configuration - Add these settings
    "use_existing_vpc": true,
    "vpc_details": {
      "vpc_id": "vpc-061eb5c85b75ba645",
      "public_subnets": [
        "subnet-0753b709074f1172a",
        "subnet-0602be44cadd9f6c1"
      ],
      "private_subnets": [
        "subnet-03300c0a7e544a906",
        "subnet-042a7e94b64c65da0",
        "subnet-0465952c690ef67ff",
        "subnet-078ed141023a99953"
      ],
      "intra_subnets": [
        "subnet-0da4f9009c5965dcb",
        "subnet-0f5048a3ad0f0ebbf"
      ],
      "control_plane_subnets": [
        "subnet-03300c0a7e544a906",
        "subnet-042a7e94b64c65da0"
      ]
    },
    "eks_config": {
      "public_access": false,
      "api_access_cidrs": [],
      "allowed_security_groups": ["sg-00e400d5d333496ae"]
    }
  }
}
```

Important: Replace all placeholder values with your actual VPC ID, subnet IDs, and security group IDs.
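Before copying the file to the bastion host, it can help to confirm that the private-EKS keys are present and consistent. A minimal sketch follows; the file path and the stripped-down contents are placeholders, and note that strict JSON parsers reject the `//` comment shown above, so remove it before validating your real file:

```shell
# Write a stripped-down example config (placeholder values) and validate it.
cat > /tmp/config-check.json <<'EOF'
{
  "variables": {
    "use_existing_vpc": true,
    "vpc_details": { "vpc_id": "vpc-xxxxxxxxx" },
    "eks_config": { "public_access": false, "api_access_cidrs": [] }
  }
}
EOF

python3 - <<'EOF'
import json
v = json.load(open('/tmp/config-check.json'))['variables']
assert v['use_existing_vpc'] is True
# A private cluster needs public access off and no public API CIDRs.
assert v['eks_config']['public_access'] is False
assert v['eks_config']['api_access_cidrs'] == []
print('config OK')
EOF
```

The same `python3` check, pointed at your actual `config.json`, also catches plain JSON syntax errors before the deployment container does.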
Step 2: Set Up the Bastion Host
The following steps deploy using a bastion host (EC2 instance). If you're using another service like AWS CodeBuild or AWS Systems Manager Session Manager, adapt these steps accordingly to match your deployment environment.
- Launch an EC2 instance in one of the private subnets of your VPC (or use an existing bastion host).
- Ensure the instance has Docker installed. If not, install Docker:

```shell
# Amazon Linux 2
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo usermod -a -G docker ec2-user
```

Log out, then log back in for the group membership to take effect.
- Verify that Docker is working:

```shell
docker --version
```
Step 3: Transfer the Configuration to the Bastion Host
- From your local machine, create an SSH connection to the bastion instance:

```shell
ssh -i /path/to/your-key.pem ec2-user@<BASTION_PRIVATE_IP>
```

Note: If your bastion is in a private subnet, you may need to use a jump host or VPN for access.
- Create a directory for the deployment on the bastion host:

```shell
mkdir -p ~/qrvey-deployment
cd ~/qrvey-deployment
```
- Transfer the `config.json` file from your local machine to the bastion host using one of the following methods:
  - SCP command:

```shell
scp -i /path/to/your-key.pem config.json ec2-user@<BASTION_PRIVATE_IP>:~/qrvey-deployment/
```
  - Copy and paste the content directly into a new file on the bastion host.
Step 4: Run the Deployment
From the bastion host (using an SSH connection), deploy the platform:
- Log into the Qrvey Registry:

```shell
export REGISTRY_USER="<your-registry-user>"
export REGISTRY_KEY="<your-registry-key>"
export QRVEY_VERSION="<qrvey-version>"
docker login qrvey.azurecr.io --username $REGISTRY_USER --password-stdin <<< $REGISTRY_KEY
```
- Navigate to the directory containing your `config.json`:

```shell
cd ~/qrvey-deployment
```
- Run the deployment:

```shell
docker run --platform=linux/amd64 \
  -v $(pwd)/config.json:/app/qrvey/config.json \
  -it --rm \
  qrvey.azurecr.io/qrvey-terraform-aws:${QRVEY_VERSION} apply
```

The deployment process takes about two hours.
- After the deployment completes, retrieve the environment details:

```shell
docker run --platform=linux/amd64 \
  -v $(pwd)/config.json:/app/qrvey/config.json \
  -it --rm \
  qrvey.azurecr.io/qrvey-terraform-aws:${QRVEY_VERSION} output
```

This command displays your deployment URL, admin credentials, and other important information.
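The `docker login` step above feeds the registry key to `--password-stdin` through a Bash here-string rather than passing it with `--password`, so the secret never appears in process listings or (depending on shell settings) history. A minimal demonstration of the here-string mechanism, using a placeholder value:

```shell
# <<< feeds the string to the command's standard input.
read -r key <<< "dummy-registry-key"
echo "stdin received ${#key} characters"
```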
Step 5: Log into Qrvey
Navigate to the Qrvey URL provided in the output and log in with administrator credentials.
Configuration Variables for Private EKS
The following table describes the required configuration variables specific to private EKS deployments that should be added to the "variables" section of your config.json file.
| Variable | Type | Description |
|---|---|---|
| `use_existing_vpc` | boolean | Set to `true` to use an existing VPC. |
| `vpc_details` | object | Contains all VPC-related configuration. For more information, see vpc_details Structure. |
| `eks_config` | object | Contains EKS-specific configuration for private access. For more information, see eks_config Structure. |
vpc_details Structure
```json
{
  "vpc_id": "vpc-xxxxxxxxx",
  "public_subnets": ["subnet-xxx1", "subnet-xxx2"],
  "private_subnets": ["subnet-xxx3", "subnet-xxx4", "subnet-xxx5", "subnet-xxx6"],
  "intra_subnets": ["subnet-xxx7", "subnet-xxx8"],
  "control_plane_subnets": ["subnet-xxx3", "subnet-xxx4"]
}
```
| Property | Type | Description |
|---|---|---|
| `vpc_id` | string | ID of your existing VPC where EKS will be deployed. |
| `public_subnets` | array[string] | List of public subnet IDs (minimum 2) for NAT Gateway and external load balancers. |
| `private_subnets` | array[string] | List of private subnet IDs (recommended 4) for EKS node groups. |
| `intra_subnets` | array[string] | List of isolated subnet IDs (minimum 2) for RDS database instances. |
| `control_plane_subnets` | array[string] | List of private subnet IDs (exactly 2) where the EKS control plane will be located. Must be selected from the `private_subnets` list. |
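The constraint that `control_plane_subnets` must come from the `private_subnets` list can be checked before running the deployment. A minimal sketch, using the placeholder subnet IDs from the structure above:

```shell
# Check that every control plane subnet also appears in private_subnets.
private="subnet-xxx3 subnet-xxx4 subnet-xxx5 subnet-xxx6"
control_plane="subnet-xxx3 subnet-xxx4"
ok=true
for s in $control_plane; do
  case " $private " in
    *" $s "*) ;;                              # found in private_subnets
    *) ok=false; echo "not a private subnet: $s" ;;
  esac
done
$ok && echo "control plane subnets OK"
```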
eks_config Structure
```json
{
  "public_access": false,
  "api_access_cidrs": [],
  "allowed_security_groups": ["sg-xxxxxxxxx"]
}
```
| Property | Type | Description |
|---|---|---|
| `public_access` | boolean | Must be set to `false` to disable public access to the EKS API server endpoint. |
| `api_access_cidrs` | array[string] | List of allowed CIDR blocks for API access. Should be empty (`[]`) when `public_access` is `false`. |
| `allowed_security_groups` | array[string] | List of security group IDs that are allowed to communicate with the EKS control plane. Include your bastion host's security group ID here. |
Important: When `public_access` is `false`, the EKS cluster is only accessible from within the VPC. All deployment and upgrade operations must be performed from a bastion host or service within the VPC.
Upgrading an Existing Deployment to Private EKS
If you have an existing Qrvey deployment with a public EKS cluster, follow these steps to switch to a private cluster.
Step 1: Set Up the Bastion Host
If you have not already done so, set up your bastion host. For more information, see Step 2: Set Up the Bastion Host in the Initial Deployment section.
Step 2: Update the Configuration File
- On your local machine, locate your existing `config.json` file.
- Add or update the following configuration in the `"variables"` object:

```json
{
  "variables": {
    // ... existing variables ...
    "use_existing_vpc": true,
    "vpc_details": {
      "vpc_id": "vpc-0e5653ff74f625ef7",
      "public_subnets": [
        "subnet-0d8bd5ab884f21992",
        "subnet-098ad37676a1766e0"
      ],
      "private_subnets": [
        "subnet-0ad3f6021fc87b079",
        "subnet-0b894a95a6e5819c7",
        "subnet-00ccf9b735c440f42",
        "subnet-02ac722d53d7bc59c"
      ],
      "intra_subnets": [
        "subnet-05f9bec07e00e455b",
        "subnet-0513173529e3aaa48"
      ],
      "control_plane_subnets": [
        "subnet-0ad3f6021fc87b079",
        "subnet-0b894a95a6e5819c7"
      ]
    },
    "eks_config": {
      "public_access": false,
      "api_access_cidrs": [],
      "allowed_security_groups": ["sg-0cf32169b9d88a9c9"]
    }
  }
}
```

Important: Replace the VPC ID, subnet IDs, and security group ID with your actual values.
Step 3: Configure Security Group Access
- Identify the security group of your bastion host or deployment instance.
- Update the `allowed_security_groups` array in the `eks_config` with your bastion's security group ID.
- Ensure the EKS cluster security group allows inbound traffic from the bastion's security group.
Step 4: Transfer the Configuration and Run the Upgrade
- Create an SSH connection to your bastion host:

```shell
ssh -i /path/to/your-key.pem ec2-user@<BASTION_PRIVATE_IP>
```
- Create or navigate to the deployment directory:

```shell
mkdir -p ~/qrvey-deployment
cd ~/qrvey-deployment
```
- Transfer the updated `config.json` to the bastion host.
- Log into the Qrvey Registry (if not already logged in):

```shell
export REGISTRY_USER="<your-registry-user>"
export REGISTRY_KEY="<your-registry-key>"
export QRVEY_VERSION="<new-qrvey-version>"
docker login qrvey.azurecr.io --username $REGISTRY_USER --password-stdin <<< $REGISTRY_KEY
```
- Run the upgrade:

```shell
docker run --platform=linux/amd64 \
  -v $(pwd)/config.json:/app/qrvey/config.json \
  -it --rm \
  qrvey.azurecr.io/qrvey-terraform-aws:${QRVEY_VERSION} apply
```

Note: If upgrading from a version earlier than 9.2.1 to 9.2.1 or later, add the `--refresh-helm` flag:

```shell
docker run --platform=linux/amd64 \
  -v $(pwd)/config.json:/app/qrvey/config.json \
  -it --rm \
  qrvey.azurecr.io/qrvey-terraform-aws:${QRVEY_VERSION} apply --refresh-helm
```
- Wait for the upgrade to complete and verify the output.
Troubleshooting
Cannot Connect to Bastion Host
If you cannot create the SSH connection to your bastion host, verify the following:
- SSH key pair is correct.
- Bastion host's security group allows SSH (port 22) from your IP address.
- Bastion host is in a subnet with proper routing (NAT Gateway or Internet Gateway).
- VPN or network connection (if accessing from a corporate network).
Docker Login Fails on Bastion Host
If the Docker login to the Qrvey registry fails, verify the following:
- Bastion host has internet access through a NAT Gateway.
- Registry credentials are correct.
- Docker is running:

```shell
sudo service docker status
```
- Pull a public image to verify connectivity:

```shell
docker pull hello-world
```
EKS Cluster Not Accessible
If the deployment fails because of EKS cluster access issues, verify the following:
- Bastion host's security group ID is listed in `allowed_security_groups`.
- EKS cluster security group allows traffic from the bastion's security group.
- Bastion host is in one of the VPC's private subnets.
- The `control_plane_subnets` are correctly specified in the configuration.
Deployment Fails with VPC Endpoint Error
If you see an error related to VPC endpoints:
- Set `"create_vpc_endpoints": false` in your `config.json` under `variables` if VPC endpoints already exist.
- Ensure your VPC has the necessary endpoints for S3, ECR, and other AWS services.
Alternative Deployment Methods
You can also perform deployments using other AWS services that run within your VPC.
AWS CodeBuild
- Create a CodeBuild project in your VPC with access to the private subnets.
- Configure the build specification (buildspec) to run the Docker deployment commands.
- Ensure the CodeBuild service role has the necessary IAM permissions.
AWS Systems Manager Session Manager
- Use Session Manager to connect to an EC2 instance without requiring SSH.
- Ensure the instance has the SSM agent installed and proper IAM role.
- Run deployment commands through the Session Manager console or CLI.
Security Considerations
When deploying with a private EKS cluster, keep the following security considerations in mind:
- Network Segmentation: Keep the control plane in dedicated private subnets.
- Security Groups: Use restrictive security group rules. Only allow necessary traffic.
- Bastion Access: Limit SSH access to the bastion host to specific IP addresses or VPN ranges.
- IAM Roles: Use IAM roles for EC2 instances instead of storing credentials on the instance.
- Audit Logging: Enable CloudTrail and VPC Flow Logs for security monitoring.
- Regular Updates: Keep your bastion host and Docker up to date with security patches.
Additional Resources
For more information on deployment procedures and configuration options, see Deployment on AWS.