We don't all have a bottomless pit of money, and I think most of us have learnt by now that it certainly doesn't grow on trees, so in our home lives we keep an eye on how much we're spending, otherwise we get ourselves into trouble.
The same is true when building services in the cloud. Even large, multinational organisations have departments with budgets they have to stick to, so when building out services you don't tend to reach for the largest service offering you can find.
What this often means is that we have to strike a balance between what offers the best performance, scale, resiliency and throughput and what we can reasonably spend, not only in standing the service up but in what it will cost to maintain.
Various services within Azure offer different ways of accomplishing this. Functions, for instance, lets you pay only when your code is actually running, which is great if your code can afford to take a little longer to get up and running and you're not calling it several thousand times a day. I personally have a couple of functions that run in my own subscription once every few hours on a timer trigger; I can pay for those with the change I find down the back of my sofa.
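For anyone who hasn't written one, a timer-triggered function of that sort is only a few lines using the Python v2 programming model. The schedule, function name and body here are purely illustrative; the fragment runs under the Azure Functions host rather than as a standalone script:

```python
# function_app.py -- illustrative sketch of a timer-triggered function
# (Python v2 programming model; runs under the Azure Functions host)
import logging

import azure.functions as func

app = func.FunctionApp()

# NCRONTAB expression: fire at the top of every 4th hour.
# Both the schedule and the function name are made up for this example.
@app.timer_trigger(schedule="0 0 */4 * * *", arg_name="timer")
def periodic_housekeeping(timer: func.TimerRequest) -> None:
    # On the consumption plan you only pay for the seconds this body runs.
    logging.info("Timer fired; past due: %s", timer.past_due)
```

On the consumption plan a function like this, firing six times a day and finishing in a few seconds, sits comfortably inside the monthly free grant, which is why the sofa-change budget covers it.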
Azure Batch offers a different approach: it allows you to specify formulae that determine how pools auto-scale. Recently I implemented a formula that scales based on the number of tasks waiting to be executed, with an upper limit on the number of nodes. It also removes all nodes from the pool if there is nothing to process. In this particular case that works fine, as data turns up sporadically, so there can be long periods when nothing is happening.
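A formula along those lines can be sketched in the Batch autoscale language like this. The 15-minute sampling window and the cap of 10 nodes are illustrative values, not the ones from my actual pool:

```
// Average number of tasks waiting (or running) over the last 15 minutes
$tasks = avg($PendingTasks.GetSample(TimeInterval_Minute * 15));
// Scale the pool to match the task count, capped at 10 dedicated nodes;
// when $tasks is zero this drops the pool to zero nodes
$TargetDedicatedNodes = min(10, $tasks);
// Let a node finish its current tasks before it is removed
$NodeDeallocationOption = taskcompletion;
```

Because `$TargetDedicatedNodes` goes to zero whenever the sampled task count does, you stop paying for compute entirely during those quiet stretches, which is the whole point.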
The days when a software engineer could just write the code and let someone else figure out where it would run, how it would be maintained and what it would cost to operate are pretty much over. Today's software engineer needs to weigh a number of factors, with performance, cost and security chief amongst them.