Performance directly affects how quickly MAAS can list, commission, and deploy machines. For large-scale environments, small slowdowns can add up. This page explains how we measure MAAS performance, what improvements we’ve made so far, and how you can track and share your own results.
How we measure
To keep MAAS fast, we run continuous performance tests that mimic real-world data centre conditions:
- Reference environment:
  - 5 rack controllers
  - 48 machines per fabric
  - 5 VMs per LXD host
  - Machines with varied hardware features
- Test runs: daily simulations at 10, 100, and 1,000 machines.
- APIs tested: both the REST and WebSocket APIs.
- Tooling: Jenkins executes the scenarios, results are stored in a database, and we review them via dashboards.
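You can run a similar measurement yourself. The sketch below times repeated GETs of the machine-listing endpoint and reports mean and 95th-percentile latency. The URL, port, and OAuth header are placeholders, not values from this page; substitute your own MAAS host and API credentials.

```python
import statistics
import time
import urllib.request


def summarize(samples):
    """Return mean and 95th-percentile latency for a list of timings."""
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return {"mean": statistics.mean(samples), "p95": p95}


def time_listing(url, headers, runs=10):
    """Time repeated GETs of a machine-listing endpoint."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        request = urllib.request.Request(url, headers=headers)
        urllib.request.urlopen(request).read()
        samples.append(time.perf_counter() - start)
    return summarize(samples)


if __name__ == "__main__":
    # Placeholder host and credentials -- replace with your own deployment.
    stats = time_listing(
        "http://maas.example.com:5240/MAAS/api/2.0/machines/",
        headers={"Authorization": "OAuth <your-api-key>"},
    )
    print(stats)
```

Running the same script at different machine counts gives you comparable numbers to share on the performance forum.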
This lets us see how new changes scale before they reach users. We also compare development and stable releases to spot regressions early.
Example result: In MAAS 3.2, machine listings through the REST API loaded 32% faster than in 3.1.
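One common way to read a figure like that is as the relative reduction in response time, (old − new) / old. A minimal sketch, using hypothetical timings rather than measured MAAS figures:

```python
def improvement_pct(old_seconds, new_seconds):
    """Relative speed-up of new vs old, as a percentage of the old time."""
    return (old_seconds - new_seconds) / old_seconds * 100


# Hypothetical timings for illustration only -- not measured MAAS results.
print(round(improvement_pct(2.5, 1.7), 1))  # prints 32.0
```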
Work done so far
Some recent highlights:
- A video overview that walks through major performance improvements.
- Ongoing work by the UI team to make the interface faster and smoother.
These efforts are part of a broader programme of optimisation across the product.
How you can help
Your metrics and feedback are essential. Here’s how you can contribute:
- Track your MAAS metrics using Prometheus and Grafana.
- Share results like machine counts, network sizes, and API response times.
- Join the conversation on the MAAS performance forum.
This input helps us validate improvements against real-world usage.
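If you already scrape MAAS with Prometheus, you can pull numbers out programmatically via the Prometheus HTTP API's instant-query endpoint. The sketch below assumes a Prometheus server address and a metric name that are placeholders; check the MAAS metrics reference for the metric names your version actually exports.

```python
import json
import urllib.parse
import urllib.request


def build_query_url(prometheus_base, promql):
    """Build an instant-query URL for the Prometheus HTTP API."""
    query_string = urllib.parse.urlencode({"query": promql})
    return prometheus_base.rstrip("/") + "/api/v1/query?" + query_string


def fetch_value(url):
    """Run the query and return the first scalar result, if any."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    results = data["data"]["result"]
    return float(results[0]["value"][1]) if results else None


if __name__ == "__main__":
    # Placeholder server and metric name -- adjust for your deployment.
    url = build_query_url(
        "http://prometheus.example.com:9090",
        "rate(maas_http_request_latency_count[5m])",
    )
    print(fetch_value(url))
```

Numbers gathered this way (alongside machine counts and network sizes) are exactly the kind of data that helps validate improvements.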
What’s next
We’re continuing to target areas that matter most in large environments. Expect further improvements in:
- Search and filtering performance
- Scalability of commissioning and deployment
- Dashboard responsiveness
Your feedback helps prioritise where we focus next.
Next steps for you
- Learn how to monitor MAAS
- Browse the MAAS metrics reference
- Join the performance forum