Understanding Serverless Cloud and Clear
By Martijn van Dongen

Serverless is widely seen as the successor to containers. But while it is heavily promoted as the next great thing, it is not the best fit for every use case. Understanding the pitfalls and disadvantages of serverless makes it much easier to identify the use cases that are a good fit. This post offers some technology perspectives on the maturity of serverless today.

First, a note on how we use the word serverless here. Serverless covers both "Function as a Service" (FaaS) and "Platform as a Service" (PaaS): the categories of cloud services where you never see or manage the underlying servers. For example, RDS and Beanstalk are "servers managed by AWS," where you still deal with the notion of server(s). DynamoDB and S3, in contrast, are simply a NoSQL database and a storage service behind an API, with no visible servers at all. Not seeing the servers means there is no provisioning, hardening or maintenance involved, hence "server-less." A serverless platform is driven by events: the action behind a button on a website, the backend processing of a mobile app, or the moment a picture is uploaded to cloud storage and the storage service triggers a function.
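
To make the "event" idea concrete, here is a minimal sketch of the picture-upload example as an AWS Lambda function in Python. The handler name and the processing step are illustrative; the event shape follows the standard S3 notification format.

    # Minimal sketch of the "picture uploaded" event example (assumes AWS Lambda + S3).
    import urllib.parse


    def lambda_handler(event, context):
        """Invoked by the storage service when an object is uploaded."""
        records = event.get("Records", [])
        for record in records:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            # Real work (thumbnailing, indexing, virus scanning, ...) would go here.
            print(f"New upload: s3://{bucket}/{key}")
        return {"processed": len(records)}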

Performance
All services involved in a serverless architecture can scale virtually without limit. When something triggers a function, say, 1,000 times in one second, all 1,000 executions run in parallel and finish roughly as quickly as a single one would. In the old container world, you would have to provision and tune enough container instances to absorb that burst of requests. Sounds like serverless wins this performance challenge, right? Not always: sometimes the container that runs your function is not yet running and has to start first. This "cold start" adds overhead to the total execution time, which is undesirable if you need to guarantee your users (or "things") a consistently fast response. To get fully predictable response times, you would have to provision a container platform, leaving you to wonder whether that is worth the cost, not just for running the containers, but also for the related investment in time, complexity and risk.
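
A common way to soften the cold-start penalty is to pay initialization costs once per container rather than once per invocation. A minimal sketch, assuming Python on AWS Lambda (where boto3 is available in the runtime); the DynamoDB table name is illustrative:

    # Sketch: heavy initialization happens at module load time, i.e. only on a cold start.
    import boto3

    _dynamodb = boto3.resource("dynamodb")    # created once per container
    _table = _dynamodb.Table("example-table") # reused by every warm invocation
    _is_cold = True                           # flips to False after the first call


    def lambda_handler(event, context):
        global _is_cold
        was_cold, _is_cold = _is_cold, False
        # Warm invocations skip the module-level setup above entirely.
        return {"cold_start": was_cold}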

Cost Predictability
With container platforms or servers, you are billed per running hour or, in exceptional cases, per minute or second. Even with a very predictable, steady workload you might reach around 70% utilization, which still leaves a lot of waste. At the same time, you always need to over-provision to absorb sudden spikes in traffic. You could push utilization higher to cut costs, but that also increases risk. With serverless, in contrast, you pay per code execution, rounded to the nearest 100 milliseconds, which is far more granular and gets you close to 100% utilization. That makes serverless a great choice for unpredictable, spiky traffic, because you pay only for what you actually use.
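
A back-of-the-envelope comparison makes the billing difference tangible. The prices below are hypothetical placeholders, not actual list prices; the point is the shape of the calculation:

    # Illustrative cost comparison; all prices are made-up placeholders.
    HOURS_PER_MONTH = 730

    # Always-on container/VM: billed per running hour, whatever the utilization.
    container_hourly_price = 0.10                  # hypothetical $/hour
    container_cost = container_hourly_price * HOURS_PER_MONTH

    # Serverless: billed per request plus per 100 ms of execution time and memory.
    requests = 3_000_000                           # spiky, but only 3M calls/month
    price_per_million_requests = 0.20              # hypothetical
    price_per_gb_second = 0.0000167                # hypothetical
    memory_gb = 0.5
    avg_billed_seconds = 0.2                       # rounded up to 100 ms increments

    serverless_cost = (requests / 1_000_000) * price_per_million_requests \
        + requests * avg_billed_seconds * memory_gb * price_per_gb_second

    print(f"container: ~${container_cost:.2f}/month, "
          f"serverless: ~${serverless_cost:.2f}/month")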

Security
You would expect cloud services to be fully secure. Unfortunately, this isn't yet the case for functions. With most cloud services, the "attack surface" is limited and therefore possible to protect fully. With serverless, however, the surface is thin but broad: your code runs on shared servers with less isolation than, for instance, EC2 or DynamoDB. For that reason, data such as credit card details is typically not permitted in functions. That does not mean serverless is insecure, but it does mean it can't pass a strict, mandatory audit…yet. Given the high expectations for serverless, security will likely improve, so it's good to get some experience with it now so you're ready for the future.

Start with backend systems that hold less sensitive data, such as gaming progress, shopping lists or analytics. Or process grocery orders, but outsource the payment itself to a provider. The distinction matters if data in memory ever leaks to other users of the same underlying server: an exposed credit card number can be exploited, but an identifier like "id: 3h7L8r bought tomatoes" cannot.
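
A hypothetical sketch of that split: the function only ever sees an opaque order identifier and a token issued by the external payment provider, so even a leak to a co-tenant exposes nothing directly exploitable. All names below are illustrative.

    # Sketch: the raw card number never enters this function; only opaque identifiers do.
    import uuid


    def place_order(event, context):
        order = {
            "order_id": str(uuid.uuid4()),            # meaningless to anyone else if leaked
            "items": event.get("items", []),          # e.g. ["tomatoes"]
            "payment_token": event["payment_token"],  # issued by the payment provider
        }
        # save_order(order)  # hypothetical persistence step, e.g. a DynamoDB put
        return {"status": "accepted", "order_id": order["order_id"]}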

Reliability
Closely related to security is the availability of your services. A relatively "slow" service that never goes down is generally better than a fast service that is sometimes unavailable. In a typical disaster-recovery setup, all on-premises servers are replicated to the cloud, which adds a lot of complexity. In most cases it is better to switch off on-premises entirely and go all-in on cloud. If you are not ready for that step, you can also use serverless as a failover platform to keep particular functionality highly available: not everything, of course, but the parts that are mission critical, or that can write to temporary storage and be processed in a batch after recovery. It is inexpensive and very reliable.
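
A sketch of the "temporary storage, batch after recovery" idea, assuming AWS Lambda and SQS; the queue-URL environment variable is an assumption of this example:

    # Sketch: during an outage, park incoming work on a queue; a batch job drains it later.
    import json
    import os

    import boto3

    _sqs = boto3.client("sqs")
    _QUEUE_URL = os.environ["FAILOVER_QUEUE_URL"]  # hypothetical configuration


    def failover_handler(event, context):
        """Accepts requests while the primary system is down."""
        _sqs.send_message(QueueUrl=_QUEUE_URL, MessageBody=json.dumps(event))
        return {"status": "queued"}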

Cloud and Clear
Until recently, it was quite tricky to launch and update a live function. More and more frameworks, such as Serverless.com and AWS SAM, are solving the main issues. Combined with automated CI/CD, it is easy to deploy and test your serverless platform in a secured environment, so that deployments to production succeed consistently and without downtime. With CloudFormation or Terraform you "develop" the cloud-native services and configure the functions; with programming languages like Node.js, Python, Java or C# you develop the functions themselves. Even logging and monitoring have become quite mature over the last few months. Taken together, the source gives you a "cloud and clear" overview of what's under the hood of your serverless application: how it is provisioned, built, deployed, tested and monitored, and how it runs.
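
For a flavor of what that source looks like, here is a minimal AWS SAM template sketch that wires a Python function to the picture-upload event from earlier; the resource names, runtime version and paths are illustrative:

    # Minimal SAM sketch (illustrative names and paths).
    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Resources:
      PhotoBucket:
        Type: AWS::S3::Bucket
      ProcessUploadFunction:
        Type: AWS::Serverless::Function
        Properties:
          Runtime: python3.9
          Handler: handler.lambda_handler
          CodeUri: src/
          Events:
            PhotoUploaded:
              Type: S3
              Properties:
                Bucket: !Ref PhotoBucket
                Events: s3:ObjectCreated:*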

Conclusion
AWS started in 2014 with the launch of Lambda, and although this post is mainly about AWS, Google and Microsoft are investing heavily in their function offerings and in the serverless approach as well. Over the last couple of months they have shown very promising offerings and demos. The world is not ready to go all-in on serverless, but we are already seeing increasing interest from developers and startups, who are building secure, reliable, high-performing and cost-effective solutions and mitigating the issues mentioned earlier. You may well wake up one day to find that serverless is fully secured, delivers reliable (pre-warmed) performance, and has been adopted by many of your competitors. So be prepared and start investing in this technology today.

This blog was originally published by Xebia at Understanding Serverless Cloud and Clear.
