Speckle is made up of many components. If you’ve been using Speckle, you’ll be familiar with our Connectors, which plug into your favourite 3D modelling applications.

You will also have used Manager for Speckle, which helps you find and install the Connectors of your choice. Another part of Speckle is the Server (see it deployed here, or its source code here - and don't forget to leave us a ⭐), which hosts your streams (projects) and commits (model versions) and lets you share your models with others and move them between different applications and computers.

Because Speckle is an open-source project, you can run your own Speckle Server. We’ve previously published guides on deploying Speckle Server via DigitalOcean’s 1-click Marketplace, or manually using Docker Compose. Now we’ve made it easier to deploy a Speckle Server on a Kubernetes platform by providing a Kubernetes Helm Chart.

::: tip

💡 For those of you who wish to dive straight into deploying the Helm Chart on your own Kubernetes platform, we have provided a detailed step-by-step guide.

:::
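At its simplest, a deployment then boils down to a couple of helm commands. The sketch below is illustrative only: treat the repository URL and chart name as placeholders and use the ones given in the step-by-step guide.

```sh
# Placeholder repository URL and chart name; the step-by-step guide has the authoritative ones.
helm repo add speckle https://specklesystems.github.io/helm
helm repo update

# Install the chart into its own namespace, supplying your configuration in values.yaml.
helm install speckle-server speckle/speckle-server \
  --namespace speckle --create-namespace \
  --values values.yaml
```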

## Kubernetes

Kubernetes makes it easier to run and manage an application, such as Speckle Server, in the cloud or in a data centre. Kubernetes can distribute your application across many machines, providing redundancy if any single machine fails and helping you achieve “high availability”, where your application is available to customers 99%, or even 99.99%, of the time. In the event of a failure, such as your application crashing or the hardware deteriorating, Kubernetes will try to recover automatically. This makes an application managed by Kubernetes more robust and fault tolerant.

If requested, Kubernetes can automatically scale your application to meet user demand, ensuring it is available even during busy periods. Additionally, it can scale your application down during quiet periods to save money.
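As a rough illustration of how such scaling is expressed in Kubernetes, here is a generic HorizontalPodAutoscaler; the deployment name and thresholds are placeholders rather than values taken from our chart.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: speckle-frontend          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: speckle-frontend        # the Deployment to scale (placeholder)
  minReplicas: 2                  # keep at least two replicas for redundancy
  maxReplicas: 10                 # cap growth during busy periods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add replicas when average CPU use exceeds 70%
```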

Kubernetes provides a common interface to control your software, agnostic of the vendor; this means that running an application is almost the same whether Kubernetes is provided by Google Cloud, Amazon Web Services, Microsoft Azure, or in your organisation’s data centre. This prevents vendor lock-in and reduces the re-training or modifications required to move your application to another cloud service.

## Helm and Helm Charts

Helm is a mechanism for packaging an application, in our case Speckle Server, along with the instructions that tell Kubernetes how to run it. Helm provides a common way of creating a template for those instructions so that you can adapt them to your needs. It is a similar concept to an installer for a Windows or Mac application: it makes the application easier to install and lets you choose some options, such as where the application should be installed and whether you would like additional features.
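To make the template idea concrete, here is a deliberately generic fragment (not taken from our chart): the template references a value, and you supply that value at install time, much like ticking options in an installer.

```yaml
# templates/ingress.yaml - a generic Helm template fragment
spec:
  rules:
    - host: {{ .Values.domain }}  # filled in from your values at install time

# values.yaml - the options you choose
domain: speckle.example.com       # illustrative domain
```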

Helm packages this template into an artefact known as a Helm Chart. Speckle provides a Helm Chart for Speckle Server. Our Helm Chart has been available for a while as we have been developing it, and we previously published a guide on how to deploy Speckle Server to Kubernetes using it.

::: tip

We’re introducing the Helm Chart now because we have recently made some changes to make it more configurable and secure.

:::

## Our Documentation

Speckle’s Helm Chart exposes hundreds of different variables that can be configured; this makes it highly flexible and powerful, though potentially daunting to those unfamiliar with Helm Charts. Our guide to deploying Speckle on Kubernetes highlights the dozen or so variables that are critical, and we’d recommend starting with those. We’ve documented all of these variables in depth and published them on the Helm Chart’s dedicated website.

Additionally, we’ve described these variables to Helm in a JSON Schema document. This allows Helm to automatically check and validate some aspects of the values that you are using, ensuring they meet the requirements of Speckle’s Helm Chart. If you have misconfigured something, this will help catch some errors when you run the `helm template`, `helm install`, or `helm upgrade` commands.
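If you want to browse everything the chart accepts, or check your own values against the schema before touching a cluster, commands along these lines are useful (the repository and chart names are the same placeholders as above).

```sh
# Dump every configurable value, with its default, into a file you can edit.
helm show values speckle/speckle-server > my-values.yaml

# Render the manifests locally; values that violate the JSON Schema fail here.
helm template speckle-server speckle/speckle-server --values my-values.yaml > /dev/null

# The same validation runs on install and upgrade.
helm upgrade --install speckle-server speckle/speckle-server \
  --namespace speckle --values my-values.yaml
```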

## Choosing Where Speckle Should Run

Within Speckle Server, there are seven different components; some are more CPU- and memory-intensive than others, such as the preview generation component and the imported file conversion component. It is best to run these components on hardware (or virtual machines) that can provide the CPU or memory they require, and to run the other components on more cost-effective hardware.

We’ve updated the Helm Chart to allow the configuration of four Kubernetes scheduling controls: affinities, node selectors, tolerations, and topology spread constraints. This allows us to create a more robust and fault-tolerant application, where replicas of a single component are distributed across multiple machines (or data centres) rather than grouped on a single machine, which would be a single point of failure.
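As a sketch of what this looks like in a values file: nodeSelector, tolerations, and topologySpreadConstraints are standard Kubernetes scheduling fields, but the component key and labels below are assumptions, so check the chart’s documented values for the real structure.

```yaml
# Hypothetical values fragment: pin the preview generator to high-memory nodes
# and spread its replicas across availability zones.
preview_service:                        # component key path is an assumption
  nodeSelector:
    node-pool: high-memory              # only schedule onto nodes carrying this label
  tolerations:
    - key: workload-type
      operator: Equal
      value: preview
      effect: NoSchedule                # allow scheduling onto nodes tainted for preview work
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: speckle-preview-service  # illustrative label
```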

## Securing Your Kubernetes From Speckle

Continuing the theme of security, Speckle Server’s Helm Chart now creates Kubernetes Service Accounts for each of the Speckle Server components, tailored to the minimum each of them requires. A Service Account provides the Speckle Server component with authorisation and credentials to interact with Kubernetes; for example, to view the connection details that allow it to connect to the Postgres database. For all Speckle Server components, we have removed their ability to interact with the Kubernetes API and limited their ability to read secrets to the single secret required by Speckle Server.
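To illustrate the shape of that restriction, here is a sketch of the kind of Role and RoleBinding involved; the names, namespace, and secret are placeholders rather than the chart’s actual output.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: speckle-server                # placeholder
  namespace: speckle
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["server-vars"]    # placeholder: only this one secret can be read
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: speckle-server                # placeholder
  namespace: speckle
subjects:
  - kind: ServiceAccount
    name: speckle-server              # the component's dedicated Service Account
    namespace: speckle
roleRef:
  kind: Role
  name: speckle-server
  apiGroup: rbac.authorization.k8s.io
```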

We have further restricted the privileges used by Speckle Server to the minimum required to run, forcing it to run as a non-privileged Linux user and removing all Linux capabilities.

::: tip

These restrictions are not required for Speckle Server to run, but they further limit the ability of malicious actors to mount attacks against the host environment.

:::
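For reference, the pod and container security context corresponding to these restrictions looks roughly like this; it is a generic sketch rather than the chart’s literal template.

```yaml
securityContext:                      # pod level
  runAsNonRoot: true                  # refuse to start if the image would run as root
  runAsUser: 20000                    # illustrative non-privileged UID
containers:
  - name: speckle-server              # illustrative container name
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                 # remove all Linux capabilities
```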

## Network Security

Kubernetes Network Policies represent a further security-related enhancement. In simple terms, these act as firewalls around each Speckle Server component, limiting access into or out of the component from the network, whether that is the internet or the local network within Kubernetes. This ensures that all inbound network traffic comes via the official ingress route, and limits each component to communicating with just the dependencies (Postgres-compatible database, Redis store, S3-compatible blob storage, DNS, or metrics system) that it needs, and no others.
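In Kubernetes terms, a policy of this kind looks roughly like the following; the labels, port numbers, and single Redis egress rule are placeholders for illustration.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: speckle-server                 # placeholder
  namespace: speckle
spec:
  podSelector:
    matchLabels:
      app: speckle-server              # which pods the policy protects (placeholder label)
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: speckle-ingress     # only traffic arriving via the ingress (placeholder)
      ports:
        - port: 3000                   # the component's listening port (assumed)
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: redis               # only the dependencies this component needs (placeholder)
      ports:
        - port: 6379
    - ports:
        - port: 53                     # DNS lookups
          protocol: UDP
```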

As with the other security-related improvements, this places additional friction in the path of a would-be attacker and helps prevent attack techniques such as data exfiltration or lateral movement through the network to attack other parts of the Speckle Server or other systems.

## Summary

As with all software, the Helm Chart is a work of continual improvement. Please try out the Helm Chart on your Kubernetes platform by following our step-by-step guide. We’ll keep working to bring more features to Speckle’s Helm Chart, and we’d love to hear your feedback.