Deploying a Server on Azure

Has anyone been able to deploy a Speckle server on Azure? If so, what did you need to set up?
I’ve seen a few posts on Azure AD but not much detail on the actual deployment steps.
Any help would be awesome!!

Thanks!

Hey @shuzmm, we’ve seen several, but each has its own set of gotchas depending on the way the organisation sets things up.

We heartily recommend starting with the Deploying a Server - Kubernetes | Speckle Docs route, as in theory it’s the most portable. @shiangoli has deployed to Azure recently, and afaik the smart peeps from B+G are doing the same (cc @AlexHofbeck), as are @jenessa.man and @JdB from Arup.

Sorry for spamming a lot of people :slight_smile: I suspect having some specific questions will help.

2 Likes

Hi guys,
Regarding the steps for Azure and Docker, I can provide a PDF at the beginning of next week. It was already on my to-do list; I need to make sure it’s clean, with no specifics about our server. Azure Kubernetes is something we haven’t looked at (for now). For Azure Kubernetes, @shiangoli and the guys from Arup are better equipped to answer :slight_smile: .

6 Likes

Hi @dimitrie, @AlexHofbeck, @shuzmm, we have certainly been able to deploy to Azure Kubernetes and will share the steps here or wherever is more appropriate.

2 Likes

@shuzmm @AlexHofbeck @dimitrie
Below is an example of the steps you can take to install Speckle on Azure Kubernetes. These are all done from the command line and the Azure portal. For a production build it’s best to create YAML pipelines that use the Azure CLI (az) and/or Terraform, e.g. the sketch below.
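For illustration only, a minimal Azure DevOps pipeline step wrapping the same az command could look like the following sketch; the service connection name and pipeline variables are assumptions, not part of this guide.

# Sketch of an Azure Pipelines step (assumed service connection 'my-azure-connection')
steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'my-azure-connection'
      scriptType: 'pscore'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az aks create -n $(aksName) -g $(resourceGroup) --node-count 3 --network-plugin azure --generate-ssh-keys --enable-managed-identity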

Prerequisites

I’ve used PowerShell for the work.
Install the Azure CLI, then run the following to install kubectl:

az aks install-cli

Install Azure Kubernetes Service (AKS)

az aks create -n <aks cluster name> -g <resource group name> --node-count 3 --network-plugin azure --generate-ssh-keys --enable-managed-identity --kubernetes-version 1.25.6 --os-sku Ubuntu

Switch context to the new cluster:
az aks get-credentials -n <aks cluster name> -g <resource group name> --overwrite
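To confirm the context switch worked, a couple of standard kubectl sanity checks (nothing Speckle-specific):

kubectl config current-context
kubectl get nodes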

Install the ingress-nginx load balancer

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace ingress-nginx --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz

Once complete, check that the ingress-nginx service is up and running:

kubectl get service -n ingress-nginx
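You’ll need the load balancer’s external-facing IP address later for DNS and the ingress. Assuming the Helm chart’s default controller service name (ingress-nginx-controller), you can extract it with:

kubectl get service ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'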

Create Speckle namespace and install the priority classes

kubectl create namespace speckle

Install Priority Classes
Speckle relies on three priority classes: low, medium, and high. Save each of the following as a YAML file.

Low Priority Class File:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: -100
globalDefault: false
description: "Low priority (-100) - Non-critical microservices"

Medium Priority Class File

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: medium-priority
value: 50
globalDefault: true
description: "Medium priority (50) - dev/test services"

High Priority Class File

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100
globalDefault: false
description: "High priority (100) for business-critical services"

Run the following for each of the low, medium, and high priority files to create the priority classes (or loop over all three as shown below):

kubectl create --context "<aks cluster name>" --namespace speckle --filename .\MediumPriorityClass.yaml
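Since the session above is PowerShell, a small loop avoids repeating the command; the file names are just the ones assumed in this write-up:

foreach ($file in 'LowPriorityClass.yaml', 'MediumPriorityClass.yaml', 'HighPriorityClass.yaml') {
    kubectl create --context "<aks cluster name>" --namespace speckle --filename ".\$file"
}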

Install Redis
For example:
az redis create --location <region> --name <redis cache name> --resource-group <resource group> --sku Basic --vm-size c0

Alternatively, create an Azure Cache for Redis instance from the Azure portal.
Then retrieve the primary or secondary access key, e.g. with the CLI command below.
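The access keys can be retrieved from the portal or from the CLI:

az redis list-keys --name <redis cache name> --resource-group <resource group>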

Install Postgres

Install a Postgres Single Server or Flexible Server and enable access to Azure services; a hedged example follows.
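As a rough sketch only (tier, SKU, version and names are illustrative placeholders, not recommendations), creating a Flexible Server plus the special 0.0.0.0 firewall rule Azure uses to mean "allow Azure services" could look like:

az postgres flexible-server create --resource-group <resource group> --name <pg server name> --location <region> --admin-user <postgres username> --admin-password <postgres password> --tier Burstable --sku-name Standard_B1ms --version 14 --storage-size 32

az postgres flexible-server firewall-rule create --resource-group <resource group> --name <pg server name> --rule-name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0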

Create the Kubernetes secret for Speckle

kubectl create secret generic server-vars --context "<aks cluster name>" --namespace speckle --from-literal=redis_url="rediss://:<redis access key>%3D@<redis hostname>.redis.cache.windows.net:6380/0" --from-literal=postgres_url="postgresql://<postgres username>%40<pg-host-name>.postgres.database.azure.com:<postgres password>@<pg-host-name>.postgres.database.azure.com:<port number>/speckle?sslmode=require" --from-literal=s3_secret_key="<s3 secret key>" --from-literal=session_secret="<session secret>" --from-literal=email_password="<smtp password>"

The Postgres connection string needs to be in URI format, following the template below (see the PostgreSQL documentation on connection URIs).

Note the gotchas: the @ in the Postgres username must be escaped as %40, but the @ before the host must not be; likewise, a trailing = in the Redis access key is escaped as %3D above.

postgresql://user%40host:password@host:port/database?sslmode=require
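As a purely made-up worked example following that template (Single Server style login of the form user@servername, hypothetical names throughout):

postgresql://speckleadmin%40my-pg-server:P4ssword@my-pg-server.postgres.database.azure.com:5432/speckle?sslmode=require

Flexible Server logins don’t carry the @servername suffix, so no %40 escaping is needed for them.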

S3-compatible storage using MinIO

Azure storage services do not support the Amazon S3 API, hence you will need to install MinIO to provide an S3-compatible interface. There are various paths you can take here; we’ve opted for MinIO on Kubernetes. This is still a work in progress and I will update this post once complete.
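For reference, one possible path is the official MinIO Helm chart; this is only a minimal sketch with assumed credentials and sizing, and Speckle’s S3 settings would still need pointing at the resulting endpoint:

helm repo add minio https://charts.min.io/
helm repo update
helm install minio minio/minio --namespace speckle --set mode=standalone --set rootUser=<minio access key> --set rootPassword=<minio secret key> --set persistence.size=50Gi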

Create DNS and install certificate

The approach will depend on your environment, how you request a DNS name, and your certificate provider. Once you have your certificate, assign it to a Kubernetes TLS secret that will be used by the ingress:

kubectl create secret tls <tls secret name> --key .\<key file>.key --cert .\<certificate file>.crt -n ingress-nginx

kubectl create secret tls <tls secret name> --key .\<key file>.key --cert .\<certificate file>.crt -n speckle

Speckle install

Download the Helm values YAML file:
https://raw.githubusercontent.com/specklesystems/helm/main/charts/speckle-server/values.yaml

Modify the namespace used for Speckle, the domain, whether you’re using cert-manager, and, if you are using Postgres, the database certificate.
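If you haven’t already added the Speckle Helm repository, add it before running the upgrade (repository URL as per the Speckle docs; verify there if it has moved):

helm repo add speckle https://specklesystems.github.io/helm
helm repo update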

helm upgrade <speckle server name> speckle/speckle-server --values values.yaml --namespace speckle --install

Check the service is up and running

kubens speckle
kubectl get all

Deploy the ingress into the Speckle Namespace

kubectl edit ingress <speckle server name> -n speckle

Change the host and hosts values to your DNS hostname, set secretName to the secret holding your certificate, and set the ingress IP to the external-facing IP address of your ingress-nginx load balancer:

spec:
  ingressClassName: nginx
  rules:
  - host: <DNS hostname>
    http:
      paths:
      - backend:
          service:
            name: speckle-frontend
            port:
              number: 8080
        path: /
        pathType: Prefix
      - backend:
          service:
            name: speckle-server
            port:
              number: 3000
        path: /(graphql|explorer|(auth/.*)|(objects/.*)|(preview/.*)|(api/.*)|(static/.*))
        pathType: Exact
  tls:
  - hosts:
    - <DNS hostname>
    secretName: <Secret holding the certificate>
status:
  loadBalancer:
    ingress:
    - ip: <External facing IP address>

If all is well, you should be able to browse to your DNS hostname. Hope the above helps. I noticed this is a personal message; others might find the above useful, so I can place it elsewhere.

7 Likes

:speckle: :heart_eyes: .

1 Like

This is great, thanks for writing this up @shiangoli
:speckle: :heart:

2 Likes

This is amazing, thanks so much @shiangoli. I might start looking at putting all of these into YAML / Terraform files to hopefully make this easier for Azure.

3 Likes

@shiangoli I’ve managed to get most of the services up and running.

However, I’m getting a specific error when the pod is probed for readiness: it says the connection is refused on 127.0.0.1:3000. I don’t see anything in the speckle-server YAML that changes any of that. Is this something we need to change to 0.0.0.0?

1 Like

@shuzmm - the readiness probe is making a request, not listening, so the listen address of 0.0.0.0 would not be appropriate.

This error message is stating that the Kubernetes readiness probe cannot connect to the server at localhost (127.0.0.1).

The readiness probe for Speckle Server is an HTTP request made by node to 127.0.0.1. This command is run by kubelet from within the Speckle Server container itself, so it should have access to the container’s local network and not be subject to external network errors or misconfigurations.

You could try amending the readiness probe by editing the deployment in your cluster, perhaps swapping 127.0.0.1 with localhost (which would allow Kubernetes to use IPv6), or the Kubernetes service address, or the pod’s IP address.
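Purely as an illustration of that kind of edit (the actual probe command and path in the Speckle chart may differ from this sketch), an exec-style readiness probe of this shape could have its address swapped in place:

# Illustrative sketch only; not the literal probe from the Speckle Helm chart
readinessProbe:
  exec:
    command:
      - node
      - "-e"
      - "require('http').get('http://127.0.0.1:3000/', res => process.exit(res.statusCode < 500 ? 0 : 1)).on('error', () => process.exit(1))"
  initialDelaySeconds: 5
  periodSeconds: 10

Swapping 127.0.0.1 for localhost (or the pod IP) in that command is the kind of one-line change suggested above.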

My guess at this moment is that the Kubernetes cluster has been deployed with IPv6 enabled, perhaps as a dual stack cluster, and the pod is only locally addressable via IPv6 and not IPv4.

3 Likes