Leveraging Cloud Services for Scalable Web Applications
Scalability is not just a buzzword. It’s a decision you make (or don’t make) on day one.
There’s a specific moment when an application stops being “our software” and becomes “this week’s problem.” It usually happens when an important client comes on board along with twenty others, and you realise the architecture you had in mind was never really designed for that.
We’ve experienced that moment firsthand while working on the new version of Trust-IT Grants, a platform that manages the entire workflow of open calls and grants, from proposal submission to final reporting.
The previous version worked. But it worked for one client at a time, on top of an infrastructure shaped by decisions made years ago.
Rewriting it from scratch was not an easy choice. It takes time, it requires convincing those funding the project, and above all, it means resisting the temptation to “just fix what’s already there.” But it was the only way to do things properly: to build something multi-tenant by design, where each client has its own isolated environment, its own rules, its own evaluation workflows, without one configuration interfering with another.
Today, the platform runs on Kubernetes, and that choice wasn’t made because it’s trendy. It was made because when you have multiple tenants with different usage peaks (an open call opening here, another closing there), you need an infrastructure that can allocate resources intelligently without constant manual intervention. Kubernetes gives you that leverage, but only if the underlying code is designed to take advantage of it: stateless components, independent deployments, and tenant-specific configurations separated from the application logic.
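As a minimal sketch of what “tenant-specific configuration separated from the application logic” can look like in Python: each tenant’s settings live in their own file (mounted, for example, from a Kubernetes ConfigMap), and the application only ever reads them through a typed loader. The `TenantConfig` fields and the file layout here are illustrative assumptions, not the platform’s actual schema.

```python
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class TenantConfig:
    """Per-tenant settings, kept out of the application code.

    The fields below are hypothetical examples of what a grants
    platform might configure per tenant.
    """
    tenant_id: str
    evaluation_workflow: str
    max_reviewers: int


def load_tenant_config(config_dir: Path, tenant_id: str) -> TenantConfig:
    # One file per tenant (e.g. mounted from a ConfigMap or secret store),
    # so no tenant's configuration can interfere with another's.
    raw = json.loads((config_dir / f"{tenant_id}.json").read_text())
    return TenantConfig(tenant_id=tenant_id, **raw)
```

The point of the indirection is that deployments stay identical across tenants; only the mounted configuration differs, which is what lets the same pods serve any tenant.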
What is often underestimated is that Kubernetes doesn’t solve architectural problems; it makes them more visible. If your service keeps state in memory, you won’t notice the issue when you write the code; you’ll notice it when Kubernetes decides to move a pod to another node. That’s when “scalable by design” stops being a slide-friendly phrase and becomes something very real.
The same applies to observability. On a platform with multiple active tenants, knowing that “something is wrong” is not enough. You need to know what, for whom, and since when. Centralised logging, tenant-level metrics, distributed tracing: these are not optional; they are the difference between solving an issue in twenty minutes and spending an entire afternoon trying to figure out where a specific client’s evaluation workflow got stuck.
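One simple way to get the “what, for whom, since when” out of a multi-tenant system is to make the tenant a mandatory field on every log line. The sketch below shows the idea in Python with JSON lines that a centralised pipeline can filter by tenant; the field names (`tenant_id`, `call_id`) are illustrative assumptions, not a prescribed schema.

```python
import json
import sys
import time


class TenantLogger:
    """Emit one JSON object per log line, always carrying tenant context,
    so a centralised log pipeline can slice by tenant, call, or step."""

    def __init__(self, stream=sys.stdout):
        self.stream = stream

    def log(self, level: str, message: str, *, tenant_id: str, **fields) -> None:
        # tenant_id is keyword-only and required: there is no way to log
        # an event without saying which tenant it belongs to.
        record = {
            "ts": time.time(),
            "level": level,
            "tenant_id": tenant_id,
            "message": message,
            **fields,
        }
        self.stream.write(json.dumps(record) + "\n")
```

With tenant-tagged logs in place, “where did this client’s workflow get stuck” becomes a filter query instead of an afternoon of grepping.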
We don’t share this story to claim we got everything right. We share it because this is the kind of reasoning we bring to the table. This is how we’ve always worked. We don’t sell architectures; we support difficult decisions, the kind where the right answer costs more in the short term but stands the test of time. Our FitSM-0 and FitSM-1 certifications reflect the same rigour we apply not only in designing systems, but also in managing services. Technology and governance, together, from the very beginning.