Our preferred solution for a production server is to deploy the services with Kubernetes, which lets you specify the variety of resources required and point them at the associated AWS products. Using our open-source Helm charts with Amazon Elastic Kubernetes Service (EKS) may be an option.
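If you do go the EKS route, a very rough sketch with the AWS CDK (TypeScript) might look like the following. The chart name, repository URL, and values here are assumptions on my part, so check the Speckle Helm chart docs rather than taking them verbatim:

```ts
// A minimal sketch, assuming AWS CDK v2 (aws-cdk-lib).
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'SpeckleEksStack');

// EKS cluster with a small managed node group.
const cluster = new eks.Cluster(stack, 'SpeckleCluster', {
  version: eks.KubernetesVersion.V1_27,
  defaultCapacity: 2,
  defaultCapacityInstance: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.LARGE),
});

// Install the Speckle Helm chart (chart name, repo URL, and values are assumptions).
cluster.addHelmChart('SpeckleServer', {
  chart: 'speckle-server',                              // assumed chart name
  repository: 'https://specklesystems.github.io/helm',  // assumed repo URL
  namespace: 'speckle',
  values: {
    domain: 'speckle.example.com',  // hypothetical domain
  },
});
```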
If you are set on AWS ECS, this isn’t something we have documentation for or extensive experience with, but adapting the Deploy to VM guide isn’t too much of a stretch, other than that we can’t support you 100%.
To adapt the provided Docker Compose setup for AWS ECS, you could follow these general steps:
- Push your Docker images (Speckle Server and any dependencies such as PostgreSQL, Redis, etc.) to a container registry, e.g. Amazon Elastic Container Registry (ECR).
- Create ECS Task Definitions for the Speckle Server and its dependencies, mirroring the Docker Compose environment variables, CPU and memory allocations, and logging configuration.
- Create an ECS Cluster; you can host your containers on AWS Fargate (a serverless compute engine) or on EC2 instances.
- Define an ECS Service to run and maintain a specified number of instances of the Task Definition simultaneously in the ECS cluster. (If a task fails, the ECS service scheduler launches another instance of your Task Definition to replace it, maintaining the desired count.) A rough sketch covering these steps follows the list.
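To make those bullets more concrete, here is a rough AWS CDK (TypeScript) sketch of the core pieces: an ECR-hosted image, a Fargate Task Definition, an ECS Cluster, and a Service. The repository name, port, environment variables, and CPU/memory sizing are placeholders, not tested values:

```ts
// A minimal sketch, assuming AWS CDK v2 (aws-cdk-lib) and an image already pushed to ECR.
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecr from 'aws-cdk-lib/aws-ecr';
import * as ecs from 'aws-cdk-lib/aws-ecs';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'SpeckleEcsStack');

// Networking and an ECS cluster (Fargate-capable by default).
const vpc = new ec2.Vpc(stack, 'SpeckleVpc', { maxAzs: 2 });
const cluster = new ecs.Cluster(stack, 'SpeckleCluster', { vpc });

// Task Definition: roughly the ECS equivalent of one Compose service entry.
const taskDef = new ecs.FargateTaskDefinition(stack, 'SpeckleServerTask', {
  cpu: 1024,            // 1 vCPU -- placeholder
  memoryLimitMiB: 2048, // placeholder
});

// Pull the image pushed to ECR ('speckle-server' repository name is assumed).
const repo = ecr.Repository.fromRepositoryName(stack, 'SpeckleRepo', 'speckle-server');
const container = taskDef.addContainer('speckle-server', {
  image: ecs.ContainerImage.fromEcrRepository(repo, 'latest'),
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'speckle-server' }),
  environment: {
    // Mirror the environment variables from the Docker Compose file here, e.g.:
    CANONICAL_URL: 'https://speckle.example.com', // hypothetical value
  },
});
container.addPortMappings({ containerPort: 3000 });

// Service: keeps the desired number of tasks running and replaces any that fail.
new ecs.FargateService(stack, 'SpeckleService', {
  cluster,
  taskDefinition: taskDef,
  desiredCount: 2,
});
```

Running `cdk deploy` on a stack like this stands up the cluster and service; you would repeat the Task Definition/Service pattern for each dependency you keep in containers.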
Stretch goals:
- Configure an Application Load Balancer (ALB) to distribute incoming traffic across your Speckle Server containers. ECS integrates with ALB to provide dynamic port mapping and load balancing.
- Service equivalents: for database and storage persistence, consider AWS-managed services such as RDS for PostgreSQL and ElastiCache for Redis. For object storage, Speckle needs an S3-compatible service, and fortunately (unlike Azure) AWS S3 is natively compatible and recommended. A sketch of these follows below.
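And a similarly hedged sketch of the stretch goals: fronting the service with an ALB (the ecs-patterns construct wires up the target group, listener, and dynamic port mapping) and swapping the containerised PostgreSQL, Redis, and object storage for RDS, ElastiCache, and S3. Engine versions, node sizes, and the image name are assumptions, and the ElastiCache subnet/security group wiring is omitted for brevity:

```ts
// A minimal sketch of the "stretch goals", again assuming AWS CDK v2 (aws-cdk-lib).
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import * as rds from 'aws-cdk-lib/aws-rds';
import * as elasticache from 'aws-cdk-lib/aws-elasticache';
import * as s3 from 'aws-cdk-lib/aws-s3';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'SpeckleManagedServicesStack');
const vpc = new ec2.Vpc(stack, 'SpeckleVpc', { maxAzs: 2 });

// ALB-fronted Fargate service: the pattern creates the load balancer,
// target group, and listener, and handles dynamic port mapping.
const web = new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'SpeckleWeb', {
  vpc,
  cpu: 1024,
  memoryLimitMiB: 2048,
  desiredCount: 2,
  publicLoadBalancer: true,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('speckle/speckle-server'), // assumed image name
    containerPort: 3000,
  },
});

// RDS for PostgreSQL instead of a containerised database.
const db = new rds.DatabaseInstance(stack, 'SpeckleDb', {
  engine: rds.DatabaseInstanceEngine.postgres({ version: rds.PostgresEngineVersion.VER_14 }),
  vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
});
db.connections.allowDefaultPortFrom(web.service);

// ElastiCache for Redis (CDK only offers the L1/CloudFormation construct here;
// a real deployment also needs a subnet group and security group in the VPC).
new elasticache.CfnCacheCluster(stack, 'SpeckleRedis', {
  engine: 'redis',
  cacheNodeType: 'cache.t3.micro',
  numCacheNodes: 1,
});

// S3 bucket for blob storage -- natively S3-compatible, as noted above.
new s3.Bucket(stack, 'SpeckleBlobStorage');
```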
This is probably inadequate as a complete guide, and it doesn’t cover IAM roles or security groups.