A bit of intro

This is part one of a 5 part blog post series, where we are going to walk through what the umbrella topic of DevOps means to us at the Speckle core team. Also we’ll take a look at how we apply these practices in the real world to build, deploy, update and manage the 15+ Speckle servers we currently maintain.

So why should you read these posts? If you are:

  • an AEC hacker → our goal is to deliver some educational content to you on IT infrastructure, modern application deployment, operations practices and more. If these things sound scary, don't worry, we will start from the basics.
  • a Speckle Server admin → this post series should give you a peek into our way of thinking about server deployments and operations. Maybe you can learn some tricks from us.
  • a DevOps engineer → we will describe our application-specific challenges and how we've solved them. We deploy and run 15+ Speckle Server instances easily and in a scalable way across multiple geolocations, and we’ve set ourselves up for the possibility of rapid growth. So this should be interesting to you, or you could roast us. We're always open to some feedback.

With that said, here are the topics that we are taking a look at:

  1. DevOps basics + the anatomy of a Speckle server stack
  2. The one about building and testing stuff
  3. The one about deploying stuff
  4. Piecing it all together
  5. Tracing, Logging, Monitoring

Post 1 -> The one about the basics

What is DevOps?

If that question bores you, the second part of this post, about the Speckle server architecture, could be your cup of tea.

Right, so let's start from the beginning.

The term DevOps is formed from Dev(elopment) + Op(eration)s, but in truth it entails more than just bridging the long-standing gap between development and operations teams. It is a tightly interdependent combination of cultural philosophies, practices and tools, and that combination has made this way of working so successful that it is basically ubiquitous in today’s software development world.

The cultural seed stems from the far-reaching effects of the agile movement on the world of software development. With the need for ever faster iteration cycles on actual production applications, the traditional approach of developing big monolithic applications and deploying them into hand-crafted environments became less and less sustainable.

Teams started to develop applications with micro-service architectures, where the different components of an app can be developed and deployed independently from each other. This (arguably) has some benefits from the development perspective (and some drawbacks too, which we’ll not get into), but it brought a lot of additional cost and complexity on the ops side. Instead of one application, operations teams now had to manage many more components. This sparked the development of quite a few new tools capable of tackling these tasks, and the whole thing became a self-reinforcing cycle: more rapid development required better tools, and the advancement of tools enabled more frequent iterations. Today it's not uncommon for a company to deploy to actual live production systems 100+ times a day.

The reason working this way is beneficial is that teams can deliver value into their users' hands faster, and users can in turn provide rapid feedback to the product developers, as envisioned in the Agile manifesto: https://agilemanifesto.org/

What is DevOps for us @Speckle

For small teams like ours, DevOps is less about making peace between ops and dev teams, since at our current scale we mostly operate as one team. It's more about how we can utilize our limited amount of time and energy to build and operate the best end user experiences possible.

It is said that if your ops team is not actively working on developing your product, or your dev team knows very little about how your app is operated, you are not doing DevOps. We live by this approach too, but at the same time we try to be pragmatic. Not every member of the team is a Docker / Kubernetes wizard, but people working mostly on desktop connectors still have working knowledge of, and ownership over, how the Speckle connectors are tested, built and published.

With this integrated mindset it's easier to avoid the infamous "it runs on my machine" problem, and we can focus on pushing new, useful additions to the Speckle ecosystem.

High-level server architecture

[Diagram: Speckle server architecture]

In this segment we’ll take a look at how the Speckle server project is structured, what its components are responsible for and how these parts interact with each other and with the wider Speckle ecosystem.

We have quite a few more projects we could talk about, like our desktop connectors, but during this post series we will be focusing on what we call the Speckle server, mostly because it is the main backbone of our infrastructure and we can demonstrate all the core concepts with this one project.
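
To make that anatomy a bit more concrete before diving into the individual components, here is a minimal docker-compose-style sketch of the stack. The service and image names are placeholders for illustration only, not our actual deployment files.

```yaml
# Illustrative sketch only: image names and wiring are placeholders,
# not the real Speckle deployment files.
services:
  frontend:            # pre-rendered Vue app served by nginx, also the API gateway
    image: example/speckle-frontend
    ports: ["80:80"]
  server:              # Node.js backend: GraphQL + REST APIs, auth
    image: example/speckle-server
    depends_on: [postgres, redis, minio]
  fileimport-worker:   # turns uploaded files into commits
    image: example/speckle-fileimports
    depends_on: [postgres, minio]
  webhook-worker:      # calls registered external webhook handlers
    image: example/speckle-webhooks
    depends_on: [postgres]
  preview-worker:      # renders preview images for commits
    image: example/speckle-previews
    depends_on: [postgres]
  postgres:            # persistent storage for the whole stack
    image: postgres:14
  redis:               # pub-sub / real-time notifications
    image: redis:7
  minio:               # S3-compatible object storage for file uploads
    image: minio/minio
```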

Frontend

The web frontend is the main web interface for the Speckle server. It's what our users interact with to manage their accounts, streams and branches, and to view their commits. It also bundles the Viewer package, a standalone npm package responsible for displaying 3D data in a web browser and providing an API to interact with it.

The frontend is a Vue.js app that we pre-render into static assets at build time and serve from an nginx server in our production setup. This reverse proxy also acts as an API gateway, routing traffic to the backend services.

Server

A Node.js app that is our backend orchestrator. The server is responsible for managing the persistent data storage layers for the whole application stack, and it maintains the PostgreSQL database schemas.

The server also relies on a Redis in-memory database for real-time notifications and other pub-sub style event handling.
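
As a tiny illustration of that pub-sub pattern, here is a sketch using the ioredis client; the channel name and payload are invented for this example and don't reflect our actual event names.

```typescript
import Redis from "ioredis";

// Two connections: a subscribed connection cannot issue regular commands,
// so publishing happens on a separate client.
const publisher = new Redis();
const subscriber = new Redis();

// Hypothetical channel name, purely for illustration.
const CHANNEL = "commit-created";

await subscriber.subscribe(CHANNEL);
subscriber.on("message", (channel, message) => {
  // e.g. push a real-time notification to connected clients
  console.log(`event on ${channel}:`, JSON.parse(message));
});

await publisher.publish(CHANNEL, JSON.stringify({ streamId: "abc123" }));
```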

It bundles a few services that might be split out into independent micro-services in the future:

  • the core Speckle services, which include:
      • a GraphQL API for interacting with the main Speckle domain-specific objects like streams, branches, commits etc. (see the query sketch right after this list)
      • a REST API for the most performance-sensitive operations, namely Speckle object uploads and downloads
  • Authentication and Authorization for the frontend and backend services
  • a web-based GraphQL API explorer
  • a public-facing API for the worker-type backend services
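
To give a flavour of how the GraphQL API is consumed, here is a sketch of a query sent to a server's /graphql endpoint. The stream id, token and exact field names are assumptions for this example, so treat the bundled API explorer as the source of truth.

```typescript
// Illustrative only: field names are an approximation of the schema,
// check the server's GraphQL explorer for the real one.
const query = `
  query {
    stream(id: "abc123") {
      name
      branches { totalCount }
      commits(limit: 5) { items { id message } }
    }
  }
`;

const response = await fetch("https://example-speckle-server.com/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <personal-access-token>",
  },
  body: JSON.stringify({ query }),
});

console.log(await response.json());
```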

As part of being the public API for the file import service, the server relies on an S3-compatible object storage to store the uploaded source files until the file import worker service can pick them up.
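
As a rough illustration of that upload flow, here is a sketch of writing a source file into an S3-compatible store with the AWS SDK; the endpoint, credentials, bucket and key naming are all placeholders and not our actual configuration.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

// An S3-compatible endpoint (e.g. a local MinIO); values here are placeholders.
const s3 = new S3Client({
  region: "us-east-1",
  endpoint: "http://localhost:9000",
  forcePathStyle: true,
  credentials: { accessKeyId: "minioadmin", secretAccessKey: "minioadmin" },
});

// Store the raw upload so the file import worker can pick it up later.
await s3.send(
  new PutObjectCommand({
    Bucket: "file-uploads",
    Key: "streams/abc123/model.ifc",
    Body: await readFile("./model.ifc"),
  })
);
```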

File imports

A background worker service that reads file import tasks from the PostgreSQL database and turns the contents of each file into a new commit on the given stream. At the moment it supports importing .ifc files.
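
Conceptually, the worker loop boils down to something like the sketch below, using the node-postgres client; the table and column names are invented for illustration and don't match our real schema.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the standard PG* env variables

// Hypothetical task table; the real schema differs.
async function processNextImport(): Promise<void> {
  const { rows } = await pool.query(
    `SELECT id, stream_id, file_key FROM file_import_tasks
     WHERE status = 'pending' ORDER BY created_at LIMIT 1`
  );
  if (rows.length === 0) return;

  const task = rows[0];
  // 1. download the source file from object storage
  // 2. parse it (e.g. the .ifc file) into Speckle objects
  // 3. create a new commit on the target stream
  await pool.query(
    `UPDATE file_import_tasks SET status = 'done' WHERE id = $1`,
    [task.id]
  );
}

// Poll for new tasks every few seconds.
setInterval(() => processNextImport().catch(console.error), 5000);
```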

Webhooks

For various events on Speckle resources, we support registering custom webhook handlers. The webhook service is another background worker, responsible for making the outgoing calls to the registered external handlers.
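
Conceptually, dispatching one of these calls looks like the sketch below. Signing the payload with a shared HMAC secret is a common pattern that we show here as an assumption, not a description of our exact implementation; the URL and payload are made up.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical event payload and handler URL, for illustration only.
async function dispatchWebhook(url: string, secret: string, event: object) {
  const body = JSON.stringify(event);
  // Sign the payload so the receiver can verify it came from the server.
  const signature = createHmac("sha256", secret).update(body).digest("hex");

  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Webhook-Signature": signature,
    },
    body,
  });
  if (!res.ok) throw new Error(`handler responded with ${res.status}`);
}

await dispatchWebhook(
  "https://example.com/speckle-hook",
  "shared-secret",
  { event: "commit_create", streamId: "abc123" }
);
```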

Previews

For each commit, a preview generation task is created, which is picked up by the preview background worker service. It loads the commit's referenced object into a Speckle viewer running in a headless browser and creates a preview image of the model.
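
The core of that flow can be sketched with a headless browser library such as Puppeteer; the viewer URL and the choice of Puppeteer itself are assumptions for this example.

```typescript
import puppeteer from "puppeteer";

// Render a hypothetical viewer page for a commit and capture a screenshot of it.
async function generatePreview(viewerUrl: string, outputPath: string) {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 800, height: 600 });
    // The viewer page loads the commit's referenced object into the 3D viewer.
    await page.goto(viewerUrl, { waitUntil: "networkidle0" });
    await page.screenshot({ path: outputPath });
  } finally {
    await browser.close();
  }
}

await generatePreview(
  "https://example-speckle-server.com/preview/abc123",
  "./preview.png"
);
```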

Wrapping up

Thanks for reading! In the next post we’ll take a look at how we approach Continuous Integration for the Speckle server.

Stay tuned.