Truly Ephemeral Workloads Make For Challenges You Can See

Function-as-a-service (FaaS) technologies, including AWS Lambda, Azure Functions and IBM/Apache OpenWhisk, are experiencing mass adoption, even in private clouds, and it’s easy to see why. The promise of serverless is simple: developers and IT teams can stop worrying about their infrastructure, system software and network configuration altogether. There’s no need to load-balance, adjust resources for scale, or monitor network latency and CPU performance. Serverless computing can save you a lot of time, money and operational overhead if you play your cards right.

Say Goodbye To The Idle Instance

There’s also less waste with serverless computing. You only pay for infrastructure in the moments your code actually executes (that is, each time a request is processed). It’s the end of the server that just sits there. But with all these advantages, IT practitioners are also faced with an avalanche of complexity and new challenges.

The fundamental challenge of serverless computing is easy to imagine: if something is ephemeral, how do you collect standard infrastructure metrics on health, uptime and availability? While serverless removes some of the heavy lifting associated with infrastructure management, there is a new set of issues that IT infrastructure teams will need to address:

Efficient Code Is Now Business-Critical

Nothing ruins your potential serverless cost savings like instances spun up to retry failed executions or rerun inefficient code. You’ll want more visibility into error handling and resource usage to understand where your serverless costs can be trimmed.
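
As a rough illustration, here is a minimal sketch of the kind of per-invocation visibility that makes cost and error hotspots easier to spot, assuming an AWS Lambda-style Python handler. The wrapper name and log fields are illustrative, not part of any vendor’s tooling.

```python
import json
import logging
import time
from functools import wraps

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def instrumented(handler):
    """Log duration and outcome for every invocation so cost and error
    hotspots show up in the function's own logs."""
    @wraps(handler)
    def wrapper(event, context):
        start = time.perf_counter()
        status = "error"
        try:
            result = handler(event, context)
            status = "ok"
            return result
        finally:
            logger.info(json.dumps({
                "function": getattr(context, "function_name", "unknown"),
                "status": status,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                "memory_limit_mb": getattr(context, "memory_limit_in_mb", None),
            }))
    return wrapper


@instrumented
def handler(event, context):
    # Real business logic would go here; this is just a placeholder.
    return {"statusCode": 200, "body": "ok"}
```

Structured log lines like these can then be aggregated by whatever monitoring tool you use to show which functions are burning the most billed time.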

Resource Optimization Is Now The Responsibility Of The Serverless Customer

The built-in advantages of public cloud, like efficient resource use and fast service delivery, are as fleeting as the instances themselves. Even though the cloud provider keeps the serverless platform available, the overall responsibility for service availability and resource optimization shifts back to the IT operations team.

You Still Need To Invest In Infrastructure Monitoring

You still need visibility to ensure digital experiences and to prevent downtime and security breaches. Cloud providers offer some standard monitoring capabilities, but serverless computing makes end-to-end visibility more complex, and existing legacy (and many cloud-only) monitoring tools won’t cut it. You want a modern approach that can handle on-prem and multi-cloud environments as well as microservice architectures.

If there’s a different instance every single second (or less), imagine the challenges of monitoring these instances for uptime, availability, performance, or configuration. DevOps teams call this observability: tracking application performance (metrics, traces, and logs) from moment to moment. Monitoring also needs to be latency-sensitive and account for cold starts, the lag of spinning up a new instance. Your modern infrastructure monitoring solution should not discriminate between a start-up lag and an actual disruption; to the user, both are a degraded experience.
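
To make the cold-start point concrete, here is a hedged sketch of one common pattern: code outside the handler runs once per container initialization, so a module-level flag lets each invocation record whether it was a cold start and how long it took. The handler and field names are illustrative.

```python
import json
import logging
import time

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Module-level code executes once per container initialization,
# so this flag is True only for the first invocation on a new instance.
COLD_START = True
INIT_TIME = time.time()


def handler(event, context):
    global COLD_START
    start = time.perf_counter()
    was_cold = COLD_START
    COLD_START = False

    # ... business logic would go here ...

    logger.info(json.dumps({
        "cold_start": was_cold,
        "container_age_s": round(time.time() - INIT_TIME, 1),
        "duration_ms": round((time.perf_counter() - start) * 1000, 2),
    }))
    return {"statusCode": 200}
```

Tagging every invocation this way lets your monitoring distinguish latency that comes from start-up lag from latency that signals a genuine problem, while still surfacing both.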

Welcome To Application Hyper-Awareness

IT operations teams will need to know more than ever about application usage and understand the specific limitations of their FaaS applications. You’ll want to track exactly how the cloud provider executes and charges for your application code, and to know what to fix when things go awry. All of these demands will require more modern solutions for serverless infrastructure management.
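
For a sense of what tracking how the provider charges can look like, here is a back-of-the-envelope sketch using the common GB-second billing model. The rates below are illustrative placeholders, not current list prices, so check your provider’s pricing page.

```python
def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb,
                          price_per_gb_second=0.0000166667,
                          price_per_million_requests=0.20):
    """Rough monthly estimate for a GB-second-billed FaaS workload.

    The default rates are illustrative; actual pricing varies by
    provider, region, architecture and free-tier allowances.
    """
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    compute_cost = gb_seconds * price_per_gb_second
    request_cost = (invocations / 1_000_000) * price_per_million_requests
    return round(compute_cost + request_cost, 2)


# Example: 10 million requests a month, 200 ms average duration, 512 MB memory.
print(estimate_monthly_cost(10_000_000, 200, 512))  # ~18.67 at the rates above
```

A model like this makes it obvious why shaving average duration or right-sizing memory shows up directly on the bill.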

Automation Is The Rule, Not The Exception

You’ll also need to build the right automation for effective monitoring of dynamic serverless workloads and to couple this with the insights of your experienced cloud engineers. There’s just too much risk in leaving serverless monitoring completely to your cloud provider, but it’s not something that can be done manually anymore.
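
As one hedged example of what that automation can look like, the sketch below uses boto3 to attach a basic error alarm to every Lambda function in an account, so new workloads are never deployed unmonitored. The threshold, period and SNS topic ARN are placeholders you would tune, and equivalent patterns exist in Terraform or CloudFormation.

```python
import boto3

ALARM_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"  # placeholder

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")


def ensure_error_alarms():
    """Create (or update) a simple error alarm for every function."""
    paginator = lambda_client.get_paginator("list_functions")
    for page in paginator.paginate():
        for fn in page["Functions"]:
            name = fn["FunctionName"]
            cloudwatch.put_metric_alarm(
                AlarmName=f"{name}-errors",
                Namespace="AWS/Lambda",
                MetricName="Errors",
                Dimensions=[{"Name": "FunctionName", "Value": name}],
                Statistic="Sum",
                Period=300,                # 5-minute window; tune to taste
                EvaluationPeriods=1,
                Threshold=1,
                ComparisonOperator="GreaterThanOrEqualToThreshold",
                TreatMissingData="notBreaching",
                AlarmActions=[ALARM_TOPIC_ARN],
            )


if __name__ == "__main__":
    ensure_error_alarms()
```

Run on a schedule, a script like this keeps baseline monitoring in step with workloads that appear and disappear faster than humans can configure dashboards.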

The Opportunity For Modernization: Serverless Can’t Be Ops-Less

All of these demands refocus the challenges facing modern IT operations teams. Pivoting toward serverless requires new strategies to optimize infrastructure for serverless development. It also requires a new framework for incident response (when incidents are as ephemeral as the instances) and for log management and analytics that can keep up with the speed of change. These new serverless demands are opportunities for ops teams to support efficient development while tracking and managing infrastructure that’s effectively invisible. And when resources need to be continuously tuned for efficient, cost-effective development, a flexible, agile digital operations framework is as important as ever.

The best way for IT operations to prepare for this change? Develop the skills. Find ways to consolidate tools. Simplify. Adopt a cloud-centric, integrated approach to monitoring and management to move with high velocity at scale. Gartner Research Vice President Andrew Lerner said it well: “While Serverless is hailed as the holy grail of ‘NoOps’, the reality is there is plenty of cloud centric operational know-how as well as security, monitoring, debugging skills that will be required to operate these in a production environment and/or at a scale.”

Serverless is a tremendous leap forward for infrastructure management. But, like any leap forward, it requires a re-assessment of how things are done. Infrastructure teams need a different type of governance and an extreme level of flexibility. They need to continue to embrace multi-cloud strategies, while taking on new responsibilities for optimization, utilization and governance. IT Ops won’t become obsolete. But the sooner it can adopt a framework to support the evolution of serverless technology in the face of digital transformation, the sooner it can help the business realize even greater serverless value.

This article was first published on the Network World IDG Contributor Network.

Next Steps:

Unified Service Intelligence: The Answer To Your Hybrid Challenges

