Machine reservation and multi-tenancy in MAAS
by Christian Reis on 20 December 2017
As product manager for MAAS, a common request I hear from MAAS end-users is multi-tenancy, which in its most fundamental form can be understood as the ability to reserve machines for certain sets of users. This is common when you have a central MAAS managing multiple parts of your datacenter; it applies less when you dedicate a MAAS to a single deployment, such as a typical starter OpenStack build.
Let’s discuss that use case in a bit more detail. It boils down to:
- You have a MAAS which centralizes hardware meant for multiple teams
- You’d like to avoid the teams being able to take hardware which is not meant for them
MAAS currently implements a feature called Machine Reservation: in essence, pre-allocating machines to users. Typically, machines in MAAS are left unassigned; once commissioned, they are only assigned on demand, when a request to deploy a machine comes in via the API or Web UI. With machine reservations, though, you can obtain the fundamental effect of multi-tenancy in a very simple manner: pre-assign machines to your users, and as they request machines, they will be chosen from that assigned set.
Here’s an image showing a MAAS installation where 5 groups of users — prod, qa, staging, sandbox and admin — are each assigned a set of machines:
The full listing above is only visible to administrators; machines assigned to specific users are not visible to other users when logged into the system. In other words, MAAS administrators can see the complete set of machines enlisted in MAAS, but regular users see only their own. Following on from the example above, when the prod user is logged in, they will see this:
This simple example should bring up a few questions, which I’ll cover in the next sections.
1. What happens when you run out of machines in the assigned set?
The way MAAS satisfies a user’s request to deploy a machine is pretty simple:
- It looks for a free machine in the set of machines assigned to the user.
- If there are machines available, it picks one for deployment.
- If there are no machines available, it looks at the set of machines not assigned to anyone.
- If there are machines available, it picks one for deployment.
- Otherwise, it returns an error.
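The selection order above can be sketched in a few lines of Python. This is an illustrative model of the policy only, not the actual MAAS implementation; the machine records and their field names are assumptions made for the example.

```python
from typing import Optional


def allocate_machine(user: str, machines: list) -> Optional[dict]:
    """Pick a machine for `user`, following the order MAAS uses:
    the user's reserved set first, then the unassigned pool.

    Each machine is modeled as a dict like
    {"id": ..., "owner": ..., "deployed": ...} for illustration.
    """
    free = [m for m in machines if not m["deployed"]]

    # 1. Prefer a free machine already reserved for this user.
    for m in free:
        if m["owner"] == user:
            m["deployed"] = True
            return m

    # 2. Fall back to the globally unassigned pool.
    for m in free:
        if m["owner"] is None:
            m["deployed"] = True
            return m

    # 3. Nothing available: the real API would return an error here.
    return None
```

Note how the unassigned pool acts as overflow: reserved machines are consumed first, and only when a user's own set is exhausted does the shared capacity come into play.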
This leaves it up to you to decide what sort of policy to put in place:
- For a hard allocation policy, where every single machine belongs to a specific user, ensure that as new machines are commissioned in MAAS they are immediately assigned to the right user.
- If instead you would like to have spare capacity available to whoever requests machines first, then you can leave some or all of your machines unassigned and they will be allocated on a first-come, first-served basis.
2. How do I assign machines to a group of users?
The current implementation of MAAS does not model groups; that’s on the roadmap, as is using LDAP as the source for its users and groups.
However, there is a simple way to get most of that benefit: create accounts in MAAS to represent your groups, assign machines to those accounts, and hand out the account API keys to the users within each group. If users also need access to the Web UI, you’ll have to share passwords between them, which is not an ideal setup but is an acceptable compromise in certain environments.
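A minimal sketch of the bookkeeping this workaround implies, assuming you keep a mapping from groups to their shared MAAS account API keys. All names and key values below are placeholders, not real credentials or a real MAAS data structure:

```python
# One MAAS account per group; every member of the group uses that
# account's API key. These keys are illustrative placeholders.
GROUP_API_KEYS = {
    "prod": "consumer:token:secret-prod",
    "qa": "consumer:token:secret-qa",
}

# Which group each human user belongs to (maintained by you,
# since MAAS itself does not model groups yet).
USER_GROUPS = {
    "alice": "prod",
    "bob": "qa",
}


def api_key_for(user: str) -> str:
    """Return the shared group API key a given user should configure."""
    group = USER_GROUPS.get(user)
    if group is None:
        raise KeyError(f"{user} is not a member of any group")
    return GROUP_API_KEYS[group]
```

With the key in hand, a member of the prod group would log in with the MAAS CLI (`maas login prod <MAAS-API-URL> <api-key>`) and from then on allocate and deploy machines as that shared account.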
3. What happens when I add new machines to MAAS?
As hinted at in question 1, when new machines are added and commissioned, they are put in the globally available pool. If you are operating with a policy where all machines are always allocated to users, make sure you assign each new machine as soon as commissioning ends. There is a small race window between commissioning and assignment, which we are also investigating how to close as part of our multi-tenancy roadmap work, but it should not be a major concern in most IT environments where users are trusted to a reasonable degree.
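If you want to narrow that window, a small watcher script can reserve machines for an owner as soon as they leave commissioning. The client interface used below (`list_ready_unowned`, `reserve`) is a hypothetical stand-in for whatever MAAS API wrapper you use, not the real MAAS API:

```python
import time


def assign_new_machines(client, owner, rounds=1, poll_seconds=10):
    """Poll for freshly commissioned, unowned machines and reserve
    them for `owner` as soon as they appear.

    `client` is any object exposing list_ready_unowned() and
    reserve(machine_id, owner); both method names are hypothetical
    stand-ins, not part of the actual MAAS API.
    """
    assigned = []
    for i in range(rounds):
        for machine_id in client.list_ready_unowned():
            client.reserve(machine_id, owner)
            assigned.append(machine_id)
        if i + 1 < rounds:
            time.sleep(poll_seconds)
    return assigned
```

Run periodically (or with a large `rounds` value), this keeps the unassigned pool close to empty, which is what a hard allocation policy wants.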
MAAS is already in use in large environments using this model, and we welcome input and feedback on how well it works. If you would like to add to the discussion, join the maas-devel mailing list and share your thoughts. See you there!