AWS re:Invent 2016: The State of Serverless Computing (SVR311)



I'm Tim Wagner, I'm the general manager for AWS Lambda and Amazon API Gateway, and we're really delighted to be here with you today sharing all these amazing customer stories about this topic that's so near and dear to my heart: serverless architectures. And, huh, it looks like there are a few servers up here on stage. Now, I don't think it's right to have a serverless State of the Union address if there are going to be servers on stage. This happened in New York and we had to get out a baseball bat to take care of them, and then it happened again in Tokyo and we took a sword to them. But here we are in Vegas, at the Mirage, so I think what we need to get rid of these servers is a little bit of magic. Always good to get rid of some servers while you're at it. Of course, the real magic is the amazing power that serverless brings to developer productivity and cost savings, making it really easy for customers to deliver more value to their businesses, and to their customers, faster than ever.

So I'm going to talk about a few things here this morning. For those of you who may not yet have joined us in the serverless revolution, I'm going to bring you up to speed a little bit on Lambda by talking about why we developed it and how we think of it in the evolution of compute. Then we're going to take a look at how customers are achieving faster time to market, lower costs, and more business results by adopting serverless solutions, and I'm going to be joined up here on stage by Tim Griesbach from FINRA, who's going to tell you a little bit about their serverless journey and some of the things they've discovered along the way. And then finally we're going to dig in a little bit to some of the great new features that you've heard about being launched this week, and we'll see how Lambda fits into an entire suite of serverless pieces from AWS, a whole portfolio designed to keep infrastructure out of your way while making your job as a developer easy, fast, and fun.

So what is serverless? Serverless computing lets you build and run applications without thinking about servers. Of course your application still runs on servers, but the server management is done by AWS. You no longer have to provision, scale, or maintain those servers. You don't have to install and operate databases, you don't have to think about storage systems, and the complexity of managing authentication and connections for mobile and IoT apps goes away too: all of that infrastructure complexity disappears with the serverless approach. The care and feeding of the infrastructure, everything from security updates to new hardware migration, becomes our responsibility instead of yours. And it's not just the hardware infrastructure you get to stop worrying about; it's also the software servers, the responsibility of receiving all those requests and then having to wait around and figure out what to do with them. With this approach, with Lambda, we instead call your code only when it's needed, only when there's something for it to do, which can also save you a lot of money.

Now let's take a look at what led up to the creation of Lambda. The modern era of computing has gone through several phases, with each era shaping the ecosystem and the software that was dictated by that hardware. Go back fifteen or twenty years to the realm of physical servers in data centers: an expensive data center proposition was required in order to get started building a business, and sometimes even just building a new application. As an industry we made that a little more efficient in the virtualization era, because we could pack more work onto an individual box, and it was also the beginning of a journey to separate the software and the code from some of the ties to the particular pieces of silicon on which it ran. Of course, it was still sitting in a data center that you had to own and operate. Moving to the cloud was our next major revolution: it freed companies from the physical limitations of operating their own data centers, and it achieved
that economy of scale, so that one HVAC system and power plant could service many different companies, and it continued that journey of moving even more of the management away from individual companies and over to a service provider. But still, that server-by-server programming model and all the operational deployment mechanisms associated with it were inherited from previous generations.

Now, each progressive step here got better. Virtualization improved utilization and gave us faster provisioning, speedier disaster recovery, and some of that hardware independence. Then the move to the cloud capitalized on that trend, turning those lengthy and expensive data center investments into simple, fast, and flexible cloud provisioning of infrastructure, and furthering that important process of shifting the maintenance burden from the application owner to the cloud service provider: more scale, more elastic resources, and of course much more agility as an organization. But even in the cloud there are still limitations to infrastructure. You still have to administer those servers, even if they're virtual, even if they're in the cloud. You still need to manage capacity and utilization and size them to your workloads: you need enough capacity provisioned for your peak while worrying about carrying too much when you might be idle, and under-provisioning can of course lead to outages. And then there's the challenge of managing all those fleets of servers, ensuring that you have the right bits on the right machines at the right time.

This is what led to the evolution that brought us to Lambda. We wanted to meet not just current but future IT needs, to help companies be as agile as possible, and to take the parts that they told us were the most challenging, and often the least differentiated and least interesting for them, out of this particular puzzle. The answer to that was Lambda, our serverless compute solution that has all the benefits and compute power of existing cloud infrastructure, but without the operational complexities of dealing with the servers themselves or of getting applications onto those fleets of servers. Both the hardware and the software models get easier in this next generation of compute, and the challenges and responsibilities that were endemic to owning the infrastructure, even virtualized cloud infrastructure, go away. Worrying about whether you're using servers effectively, building distributed infrastructure to handle fault tolerance, dealing with scaling, getting code onto the machines: all of that disappears by virtue of transferring it to AWS, and with that you can focus on your business logic instead of where and how that logic runs. For example, recent security patches like Dirty COW, hardware migrations to newer generations, and routine machine maintenance and cycling are all handled by the Lambda team; Lambda developers, on the other hand, don't have to think about those problems.

So we've talked a little bit about what serverless isn't; let's talk a little bit about what serverless is. Lambda helps you respond to events in real time. Now, there are a lot of ways you can build event-based systems, of course, but this particular approach offers the simplest and fastest way to construct reactive systems, webhooks, and other kinds of asynchronous processing models, because it alleviates the conventional hassles associated with trying to match and scale capacity to an unpredictable workload. When there's nothing to do, you typically have too many machines; when there's a lot to do, you often have too few. With Lambda, that burst capacity is built right into the system, so it can react quickly and efficiently and is able to monitor along the way. That continuous scaling is super important; it's a key piece of what makes Lambda Lambda and differentiates it from infrastructure mechanisms. Because scaling happens per request, Lambda accepts and monitors each request on your behalf, so it can scale up and down and place your code in the right place, and as a result you get some really interesting benefits: it lets us replace hardware as we need to, it lets us do fault tolerance on your behalf because we can place your code into the right Availability Zones, and it helps us monitor your application, so that the annoying grunt work of building basic monitoring and logging isn't a challenge your application developers have to deal with. And then finally, you don't have to think about deployments and code runtimes at the machine level. You're not renting the machine: you only pay when your code has something to do, when we give it some work to do, and importantly, when you're not running, you're not paying. That whole concept of being over-provisioned goes away.

Now, Lambda is the compute portion of what is actually a large portfolio of serverless offerings from AWS. All of these services together help you build application architectures that enjoy the same kinds of benefits Lambda provides for compute: object storage in S3, where you don't have to think about where the capacity comes from; NoSQL data storage in DynamoDB; real-time streaming with Kinesis; and, as we'll discuss, a couple of important newcomers as well that you heard about this morning. All of these services have a common goal: speeding up the innovation and delivery flywheel for your application and your business. Removing the friction of thinking about compute, storage, and networking infrastructure takes away whole categories of operational complexity. For example, with Lambda, code deployments to the fleet are the responsibility of the service; with S3, allocating and provisioning additional storage for your files is the responsibility of the service. And this operational benefit creates a developer productivity effect, because you don't have to write the code to do those things yourself.
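The per-request model described above comes down to supplying a single function that the service invokes once per event. A minimal sketch in Python (the event shape here is made up for illustration; real event formats depend on the event source):

```python
import json

def handler(event, context=None):
    """A Lambda-style entry point: invoked once per event, with no server
    to provision, patch, or scale; you pay only while this runs."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Scaling, placement, retries, and logging around this function are the service's job, not the developer's.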
The code you might otherwise have had to supply as part of your application never appears, and your developers, freed from the responsibility of doing all of that grunt work for monitoring and logging and capacity management and fleet deployments, can instead spend their time on the business logic, speeding up the delivery of the things you actually care about, the things that actually differentiate your business. And of course the idea is to give you more time back for innovating, differentiating, and competing in an increasingly agile and competitive market. It's a powerful cycle, and one that we've seen many times as companies start to adopt serverless architectures.

So we've talked a little bit about what serverless is, how it came to be, and how it can help improve productivity, lower costs, and decrease some of your operational burdens. Now I want to take a look at how actual customers are using Lambda today, and how they're using the entire AWS serverless portfolio to out-innovate their competitors and deliver business results. I want to start by looking at some of the common categories that we see people using serverless for, and we'll start with web applications. From simple static websites hosted on S3 to fully dynamic web apps that utilize Lambda, Cognito, DynamoDB, and others, customers are increasingly choosing serverless approaches for this job, and the reason is really simple: zero cost when you don't have customers using the site, and then almost instantaneous scale-up when you've got a flash sale, or a sudden burst, or you've released a new product, or whatever it is that triggers usage of your website. That can be incredibly difficult to plan for, and incredibly difficult to pay for, with a conventional mechanism, and serverless is ideally tuned to it. And to make building these serverless web apps even easier, we've also recently released open-source implementations for both Express and Flask, the latter with our Chalice support, that make it even easier to use API Gateway and Lambda with these common web application frameworks.

Now of course it's not just web apps you can build this way. There are backends for mobile applications, using the Mobile SDK along with Cognito for authorization, and, by combining with the AWS IoT service, backends for connected devices as well, including being able to run native code in the cloud, which is often required to make that work. That same paradigm and those same benefits, developer agility, productivity, easy scaling, all apply regardless of what type of backend you're creating. Now, in the middle, kind of the linchpin here, is data processing. This is what we first launched Lambda supporting, and it is still one of the largest parts of our business today: hooking up to these incredibly big data pipes that come from S3, so you can have a Lambda function react when an object is uploaded; to DynamoDB, so you can react when database records change; to Kinesis, so you can do clickstream analytics and other kinds of real-time streaming. These are all amazingly dense compute workloads that are ideally suited to a reactive and serverless approach, and they work really easily, simply by combining Lambda with these other parts of the AWS portfolio. And then finally, some of our newer services: Amazon Alexa, and chatbots, with Amazon Lex, which you heard about today, powering voice-enabled and language-understanding apps. In these cases Lambda acts as essentially a universal webhook receiver, because the easiest way to build a webhook is with Lambda, and it's the perfect way to run your business logic in response to these services that can have incredibly varied UIs on the front end.
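As a concrete sketch of the S3-to-Lambda data processing pattern just described, here is a hypothetical Python handler that pulls the bucket and object key out of each record of an S3 notification event (the `Records`/`s3` layout follows S3's documented event format; the processing step is a placeholder):

```python
import urllib.parse

def handle_s3_upload(event, context=None):
    """Invoked when objects land in S3; returns the (bucket, key) pairs seen."""
    seen = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in the notification payload.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Real work (validation, transcoding, analytics) would happen here.
        seen.append((bucket, key))
    return seen
```

The same reactive shape applies to DynamoDB Streams and Kinesis triggers; only the record layout differs.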
Now, lots of different customers have already joined us on this serverless journey, using Lambda and API Gateway for cost reduction, increased agility, developer productivity, or in many cases getting things to market faster and easier: successful startups like AdRoll and Localytics, Fortune 500 companies like Coca-Cola, and global enterprises like Thomson Reuters. Companies of all sizes and shapes, across all industries, have adopted serverless approaches. Let's take an in-depth look at a few of these. Thomson Reuters processes 4,000 requests per second on Lambda, using that to drive information exchange and analytics within their systems. You'll hear from another Tim, who'll be joining me on stage here in a little bit, about FINRA, which processes half a trillion validations of stock trades daily. Hearst, who will be joining us later this afternoon, reduced their time to ingest and process data for their analytics pipeline by 97 percent, going from processes that took hours to do in batch down to minutes with a serverless, near-real-time approach. Vevo has used the capability of Lambda to handle massive spikes in their usage that arise when they launch new content, spikes that would otherwise require them to hold on to huge amounts of capacity they don't need in slack times. And Expedia, which uses Lambda for both their DevOps solutions and the generation of their traveler profiles, performs more than 1.2 billion Lambda requests every month; they also do more than 3 million code deployments every month, at a cost of less than a dollar.

Now I'm going to take a minute and look a little deeper at Vevo's use case. Vevo is a great example of an all-in serverless approach. Their content services are a core part of their business; it's what allows their music artists to deliver video to their audiences and to their partners. That includes artist and video metadata, video ingestion, video encoding, publishing to their own platforms as well as their partners' platforms, and providing APIs for a variety of client applications. Managing all of that the old-fashioned way made it much harder for them to deliver and innovate quickly; they were getting stuck in the IT quagmire. So they opted for a microservices approach: they completely re-architected from the ground up to give themselves the agility, the cost savings, and the burst capacity they needed to make their business a success, and they completely recreated their data and content platform, building it on top of Lambda, Kinesis, DynamoDB, S3, and Redshift. Now that they don't have to manage that infrastructure any longer, they're free to focus on their business, their customer deliveries, and their partner execution.

All right, instead of telling you the story of FINRA, I'm going to invite Tim Griesbach up on stage with me. Tim's a senior director at FINRA, and he's going to tell you a little bit about their journey to a serverless architecture. Thanks.

Thank you, Tim. So we're here today to talk about how we use serverless to perform half a trillion data validations a day. It's a big problem we have to solve, and Lambda has helped us a lot. First, just to understand a little bit about who we are: FINRA is the Financial Industry Regulatory Authority, and we monitor 99 percent of all trading in U.S.
listed equities, to provide investor protection and identify fraudulent activity. Some of you may have heard of things like insider trading; these are the types of things we're trying to look out for, to try to help you. But this requires data, lots and lots of data. We receive the market data from the exchanges and the broker-dealers every day, sometimes up to 75 billion records a day. The first thing we have to do with that data is validate it. It's critical: they're required to provide us data, and so we have an obligation to make sure the data is correct and provide feedback to them so that they can correct it and get it back to us. It's really critical that we have clean data we can do analytics on. Once we get the data and validate it, we do what we call market reconstruction: we stitch the data together to recreate the life cycle of these trades, so that we can perform various surveillance algorithms and analytics. So that's where it starts.

Half a trillion validations is a lot of data, and that's on an average day. We are impacted heavily by the financial markets, by things going on around the world, and sometimes this can spike to two or three times that number. Just to put it in perspective, what's half a trillion? We talk millions and billions; I think a lot of people out here probably grew up with Legos, and if you took half a trillion Legos and stacked them end to end, they would go around the world 500 times. That gives you a sense of how much data we're dealing with. But it's more than just the half a trillion validations. Our volume varies daily, and it can even vary hourly: if something happens like Brexit, or something with Greece, that can drive a lot of volatility in the market, and we have to be prepared for it. We won't know it in advance, and we have to be prepared to handle it. Also, the rules continue to change, the things that we're looking for, so as the data comes in we have to be able to keep expanding the validations. Today we perform over 200 individual validations and generate about a hundred different derivations on the data, and this happens as the data is coming in, before it's even handed off. And yet the SLA expectations never change. What that means is, we require firms to give us the data, and we have an obligation to give them back feedback so they can correct it and return it to us; otherwise there can be penalties for these firms, so meeting that obligation is very important.

Originally we did this on-prem, with a big Hadoop cluster like many of you are familiar with: a static cluster, processing the files in batches, required to be up and running 24 hours a day, five days a week. We leveraged a map-only job: the files would come in through FTP, they would land on a NAS device, we would pull the files in, perform the validations, generate various error files, things of that nature, and then hand them off downstream. I'm sure many of you have seen a picture like this before. This created a number of challenges. Not easily scalable: this is a big one. If you're going to have a static cluster, you have to size it for peak, so we had a cluster sitting idle over 50 percent of the time, and that was very expensive. In addition, there's the ongoing server and software maintenance. As Tim just mentioned, we would have cases where all of a sudden there's an emergency issue with an operating system and we need to roll out patches across hundreds or thousands of servers; that's very costly when you're dealing with a system running 24 hours a day. And it was also originally designed for batch processing.

So when thinking about a new architecture, we set out with some goals we thought were very lofty. Number one, we want to support any volume spike on demand: we can't predict what's going to happen or when, so we have to be ready for it. We want to get out of infrastructure management; as I like to tell our colleagues, managing infrastructure should be a thing of the past for those of us building software. Why should I have to worry about that? We don't want to pay for peak volume all the time; we want to pay for what we're using, when we're using it. And lastly, as part of our regulatory obligation, we want to improve the availability of the data internally: our goal is to get the data and hand it off as fast as we can, so that folks can do their analysis.

So we set out to look at various technologies, and it was kind of a standard process. We defined the criteria we cared about: scalability, security, data partitioning, how well it could be monitored, performance, cost, and maintenance. These were the factors that were important to us. We looked at several technologies, Lambda, Apache Ignite on EC2, and Spark on EMR, and we performed actual POCs on each of them, because we needed to understand how well they would perform. Given the output of that, we found that Lambda was going to provide the best solution across all of those criteria for this serverless cloud solution.

So let's look at what it looked like. Our new architecture is what we call a Lambda-centered AWS solution. The validator itself is Java code that runs in Lambda functions, and that's where we actually do the various validations. We leverage SQS queues between the components, so notifications, status messages, things of that nature, flow between them. We also built a controller that manages the data feeds, because this becomes a little ecosystem of its own: there are times, maybe when reference data changes, where we need to be able to pause what we're doing, correct something, and then continue processing. So we did have to build that, and we'll talk about it in a minute; ideally we'll be able to get rid of it soon, too. And then all of our data: if you've heard some of our earlier talks, we like to separate compute and storage, so all of our data resides on S3, and we manage it in our open-source Herd data management system, so this interacts with that system as well.

So how did it work out for us? This is the neat part: faster, cheaper, and more scalable. At the end of the day we've reduced our cost over 50 percent. That has been a big savings, and we can track it daily or even hourly; we know exactly how much we're spending on processing. On processing times, the ask from our management was: we want the data available in less than a minute, no matter when it comes in and what the volume is. Errors are handled on a finer-grained basis: when you're building batch systems, generally if an error occurs the entire batch has to be set aside and reprocessed. We've had several days in the past year, Brexit was one, and there have been others, where the volume just spikes, and a spike for us is a big spike, and yet we were able to handle that processing without any issues; that's been very successful. And last, no infrastructure to manage. I'm excited for the day I no longer have to hear from security saying, hey, you need to rush this patch out onto the operating system; I think many of you can appreciate that.

But there's another thing that's changed here, and it ties back to what Werner said this morning about agility in development. Suppose we're dealing with half a trillion validations, and suppose we want to test ten times that: five trillion validations. How do you do that if you're not using serverless computing? You've got to go acquire 10,000 servers, or whatever the number may be. Here, we can have our engineers focus on the actual value in the software, building data validation tools.
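FINRA's production validator is Java running in Lambda; purely as an illustration of the validate-and-report pattern Tim describes (run every registered check on each incoming record, then separate clean records from error records so feedback can go back to the reporting firm), here is a toy Python sketch with made-up rules:

```python
# Hypothetical validation rules: each returns True or an error message.
def positive_quantity(rec):
    return rec.get("quantity", 0) > 0 or "quantity must be positive"

def known_symbol(rec):
    return rec.get("symbol") in {"AMZN", "IBM"} or "unknown symbol"

VALIDATIONS = [positive_quantity, known_symbol]

def validate_batch(records):
    """Split a batch of trade records into clean rows and error rows."""
    clean, errors = [], []
    for rec in records:
        failures = [msg for check in VALIDATIONS
                    if (msg := check(rec)) is not True]
        if failures:
            errors.append({"record": rec, "failures": failures})
        else:
            clean.append(rec)
    return clean, errors
```

In the serverless version of this pattern, each Lambda invocation handles one batch, so per-record error handling replaces reprocessing an entire failed batch.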
With data generation tools, they can just pump test traffic through the system; the Lambda framework scales up, and we can test and see how we did. So we've had a great deal of success with this, and it's really allowed us to focus more on building the software and not worrying about infrastructure management.

So where do we go from here? The first bullet: you probably noticed when I said I want the controller gone. Ideally, with some of the new things being announced, we'll be able to eliminate that in the next year, so we'll be a hundred percent serverless. And now that we understand not only Lambda but also the programming model, because this is a different model, we're in kind of a revolution in the software space right now, and this changes how you think about software, we're looking at how we can do this for our ETL processes, how we can even replace web apps, as Tim mentioned, really trying to find new ways to build software using Lambda. And the last one is to achieve our goal of continuing the push toward near-real-time throughput. That's really what our mission is about: how can we better protect investors? In order to do that, we need to be able to handle the capacity and validate the data as quickly as possible. So that's it; that's what we're doing with Lambda. Thank you.

Thank you so much, Tim; really appreciate it. [Applause] It's incredibly exciting to see a company like this getting so much value out of a serverless architecture, and really experiencing what it means to inject it into one of the core, critical parts of their business. Thank you so much for sharing your stories with us today.

So, two years ago we introduced you to Lambda. Last year we told you how we'd made it ready for enterprises to start adopting. This year, increasingly, you are telling us about Lambda, how it has become a core part of the way you build modern, responsive applications and drive value through your businesses. We're humbled and very excited at the incredibly fast adoption by companies of all sizes, startups and enterprises alike, and at how quickly this has gone from a new idea to a standard part of the cloud toolbox for so many of you.

Now, having considered some of the history of serverless and seen how companies are adopting it, I want to turn and take a look at the broader portfolio of serverless capabilities within AWS. I want to talk a little bit about some of the new launches you've heard about over the last couple of weeks, up to and including this morning, and see how they fit into this broader space of serverless offerings from AWS. The platform we provide has a whole bunch of serverless pieces that I showed you earlier, but there's also the idea of capabilities you need to make a serverless platform successful. Here at Amazon we're incredibly focused on customer feedback, and we use that to drive our product roadmaps; in fact, over 90 percent of the features we work on in Lambda come from discussions and meetings and presentations just like this, where you tell us the things you most want to see. But we also categorize these investments for serverless, because we're trying to build a new way of doing things, and these nine pieces are how we think about the investments we're making. I want to step through each one of them and talk a little bit, as we dive in, about what we're doing in that area.

I'm going to start off with the cloud logic layer. Compute is the keystone of the serverless platform, and for AWS, Lambda is the central hub that receives events, processes them, and makes calls to other AWS services, including third-party services. And if Lambda is the implementation, API Gateway is the interface, offering fully managed APIs with fine-grained usage and throttling access controls. Now I want to talk about some of the new features we're building into these components.

As you heard earlier this morning, today we're launching support for C#. We're really excited to be able to add another language to the Lambda platform. We've heard from a lot of enterprises who use C# in their day-to-day business and really want to keep working in a language they love. So what we've done is taken the new .NET Core runtime, brought it to Amazon Linux, and exposed it in Lambda as a first-class language, with all the same logging, metrics support, and serialization of common AWS event objects that you've come to expect from the languages inside Lambda. But we've also tried to make it feel very idiomatic for C# programmers, with support for NuGet and integration with Visual Studio, so you get all of the development experiences you're used to.

Now, over on the Gateway side, one of the things we did as we started releasing features in the run-up to re:Invent was to add support for binary encoding. We've heard from lots of customers that they want to be able to process audio files, visual files, and other kinds of binary formats, and now they have a simple and natural way to do that inside API Gateway. We also made it easy to automatically convert to and from JSON, so if you're using Lambda on the backend for your API, it integrates seamlessly with this binary encoding support at the API level.

Now, regardless of whether the consumer of an API is external or internal, high-quality documentation is essential for helping that API get used appropriately, correctly, and successfully, so we're really excited today to be adding documentation support to API Gateway. We've always supported Swagger as a portable way to import and export APIs, but documentation had been a missing piece of that puzzle; with today's launch we have full-fidelity Swagger documentation support as well. One common pain point customers told us about was that they often had repetitive pieces of documentation in their APIs: many copies of error strings, repeated parameters, and so forth. So we've also introduced a very simple inheritance model that lets you write a documentation string once and have it percolate to all the different parts of the API where it applies, so there's less work to do. You can still export it as Swagger and round-trip it back in as an extension, or you can choose to convert it to a standard format on export.

Now, you heard in the keynote about Amazon Lex, this exciting new introduction for natural language recognition and understanding. Lex is a great example of how we continue to add core capabilities in the form of new services that are integrated with Lambda, in this case a fully serverless mechanism for text and speech understanding. Similar to an Alexa skill, you give sample utterances and indicate the parts that need to be filled out; Lex will build a model for those, and when it's actually supplied with speech from a human actor, it'll interpret it, fill in the missing parts, potentially query back for pieces of information that haven't yet been supplied, and once it has everything it needs, similar to an Alexa skill, it'll turn that into a JSON package and call your Lambda function. Lambda is essentially acting as a business-logic webhook receiver for Lex, making it really easy to take that JSON package and call out to enterprise connectors, or do whatever you need to do to make a chatbot, speech recognition system, or some other kind of natural language platform take the appropriate action. So with Lex, your Lambda functions now have access to the full power of natural language understanding, and you can add speech and text recognition to your Lambda functions.
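As a hedged sketch of that webhook-receiver role, here is a Python Lambda handler for a Lex-style fulfillment event. The field names (`currentIntent`, `dialogAction`, and so on) follow the original Lex event/response contract as I understand it; treat the exact shape as an assumption and check the service documentation before relying on it:

```python
def fulfill(event, context=None):
    """Receive the JSON package Lex builds once all slots are filled,
    run the business logic, and hand back a closing message."""
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]
    # Business logic (enterprise connectors, databases, etc.) goes here.
    summary = ", ".join(f"{k}={v}" for k, v in sorted(slots.items()))
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": f"Done with {intent}: {summary}"},
        }
    }
```

The same handler shape serves Alexa skills and other webhook-style callers: varied front ends, one business-logic function.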
Now, part of what makes the Lambda programming model so easy is that it fits really naturally into both event processing and other kinds of extensibility roles, both for AWS services like Lex and for third-party services, and we continue to invest in tying the platform together and making that integration super easy. We've made the common patterns of computing especially easy to set up: put an object into S3, trigger a function; put a record into DynamoDB, trigger a function; put a record into Kinesis, trigger a function. Without the need to manage infrastructure, these cloud-native, fully serverless design patterns are super easy to discover, develop, and own. Recently we added support for SQL database triggers as well, so now with Aurora you can use a SQL database to kick off Lambda functions as you need to, which gives you the ability to do stored procedures in the cloud. And this week we also announced that Amazon Kinesis Firehose is going to have support for Lambda, bringing one of our most scalable data ingestion services and our most scalable compute service together to form a seamless package that can transform, aggregate, audit, and do other kinds of data analysis as you stream data in and route it to S3, Elasticsearch, or other backends.

Now, in addition to event processing, we also use Lambda in a couple of other ways. We use it as a customization engine for services like Amazon Cognito and AWS Config.
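From the function's point of view those event-source patterns all look alike; a minimal sketch of the S3 case (the bucket name and the processing step are illustrative):

```python
import urllib.parse

def lambda_handler(event, context):
    """Invoked by S3 when an object lands in a bucket with a notification configured.

    The event carries one or more Records, each identifying the bucket
    and the (URL-encoded) key of the object that triggered the function.
    """
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Real code would fetch the object with boto3 and process it here,
        # e.g. generate a thumbnail; we just collect the identifiers.
        results.append((bucket, key))
    return results
```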
We also use it in that webhook-receiver style for Lex, Alexa, and others, and that pattern is super popular with third parties too: integrating with Slack for commands, integrating with Twilio for message commands. The easiest webhook receiver is a Lambda function.

Now, serverless architectures help you simplify development, deliver more quickly, and lower costs, but in addition to creating and managing those applications, many of you have asked us for a mechanism to monetize those APIs and Lambda functions as well. So today we are announcing that Amazon API Gateway has been integrated into the AWS Marketplace. The AWS Marketplace lists over 3,500 software listings today across 35 product categories, and we recently announced support for API products. With today's launch, sellers can use API Gateway, the easiest way to host and manage their APIs, and publish those APIs directly into the AWS Marketplace. This is a win in both directions. The Marketplace gives you an easy way to help your consumers, the people who are going to go buy those APIs, discover and purchase them. And as a seller, we've done the hard work for you of integrating billing on the backend: as consumers use your APIs, we automatically detect that usage, do all the metering, package up the billing information, and deliver it to the buyer as part of their monthly AWS bill, so you don't have to write a single line of code to make that happen. We're really excited that customers are already adopting this. With this week's launch, NTT DoCoMo and F-Secure have both taken their APIs on API Gateway and added them to the AWS Marketplace, providing secure URL checking in the case of F-Secure and speech recognition APIs from DoCoMo, and we know these are going to be just the first of many. And of course, if you've got a Lambda function and you've been thinking, gee, it'd be really nice to make some money from this thing that I built, add an API to it and now you can monetize it as part of the Marketplace integration.
Now, with serverless approaches like Lambda, customers have a deeper relationship with us when it comes to handling the care and feeding of their events, and that means those events might be really important even in the presence of a coding error or some other kind of problem, and it might be necessary to ensure that the event gets preserved one way or another. So today we're launching a new capability in Lambda to make it even easier to build reliable end-to-end solutions: a dead letter queue, or DLQ, for events. When you turn this feature on, any problem that happens while processing your asynchronous events, whether it's a misconfigured IAM permission, a bug in your code, or just an input you weren't expecting and couldn't process correctly (maybe it threw an exception), will be automatically detected by Lambda. Instead of giving up after we've tried your code three times, we'll take that event and send it to your choice of either an SQS queue or an SNS topic. And probably a few of you out there are thinking, wouldn't it be nice if I could actually react to it with another Lambda function? Of course you can: take that SNS topic and immediately direct it back to another Lambda function that you use as an error handler, and from there you can build any code you want to take action on that failed event. Full fidelity on the payload allows you to capture it with no bits lost, and you get many nines of reliability in terms of preserving and protecting those events once they've entered our system. So if you've got data coming through S3, SNS, or other event sources, you can now have very high reliability in the entire end-to-end system with nothing more complicated than giving us the ARN of an SQS queue.
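A sketch of that SNS-backed error-handler pattern, assuming the failed event arrives wrapped in a standard SNS notification record; the handler name and the recovery step are illustrative:

```python
import json

def dlq_error_handler(event, context):
    """A Lambda function subscribed to the SNS topic configured as another
    function's dead letter queue.

    Each record's Message field carries the original event payload that
    failed processing, delivered with full fidelity.
    """
    recovered = []
    for record in event["Records"]:
        failed_payload = json.loads(record["Sns"]["Message"])
        # Real code might log the failure, alert an operator, or re-drive
        # the event into a repaired function; here we just collect it.
        recovered.append(failed_payload)
    return recovered
```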
Now, customers have increasingly adopted Lambda functions as a way to create decoupled, independently scaled microservices. In fact, one of the great things about Lambda is that it makes it hard not to adopt some of the best design patterns in building microservices: smaller, separate components, single-purpose solutions for each individual function. It's a really nice way to think about application development, and that's great in so many ways, but it does mean that as you start producing more functions that make up a conceptual application, as you start having more microservice applications, you've got more and more of these little pieces. How do we bring some organization back to that approach? How do we keep all those small pieces organized without going back to the problems that monolithic applications were plagued with in the first place? Our answer to that is a squirrel. This is SAM. In addition to being our awesome mascot, SAM stands for the Serverless Application Model, and what SAM is is a standard way of talking about and describing a serverless application: all the different pieces, the functions, the APIs, the event sources, the data stores that go into making it up. It doesn't change what those pieces are, so you can still version, update, and change an individual function, but now you have a way of talking about the group, the collection, in a sensible way, and in a way that we can also use, as we'll see here, for programmatic purposes. Our chief goal with this was to simplify development and management for serverless applications, not just within AWS but also more broadly. SAM brings order to the different parts of the microservice, and now we can take that representation and use it to inform things like building, packaging, deployment, and more. SAM is, as of today, natively supported by AWS CloudFormation. Those of you who are language wonks can think of this as essentially a new grammar that CloudFormation speaks, so that you can write in a highly tailored, very customized pattern for serverless applications that makes it easy for both humans and machines to understand the different parts without getting bogged down in infrastructure detail.
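That grammar looks something like this: a minimal SAM template, where the `Transform` line is what tells CloudFormation to speak SAM, and the function name, runtime, code location, and bucket are all illustrative:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ThumbnailFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.lambda_handler
      Runtime: python2.7
      CodeUri: s3://my-deploy-bucket/thumbnail.zip
      Events:
        PhotoUpload:
          Type: S3
          Properties:
            Bucket: !Ref PhotoBucket
            Events: s3:ObjectCreated:*
  PhotoBucket:
    Type: AWS::S3::Bucket
```

A handful of lines describes the function, its event source, and the bucket it watches; CloudFormation expands that into the underlying resources for you.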
So we're doing the same thing at the configuration and modeling level that we think we've achieved with Lambda as a compute service. You can also export any function as a SAM template, including the blueprints in the Lambda console, making it really easy to get started, and we're offering command-line tools to package and deploy Lambda functions, and broader applications, using SAM. Finally, and maybe most importantly of all, we've made SAM available as an open specification, so you can go to GitHub, download SAM yourself, and take a look at it. Our goal is to broaden the idea of modeling serverless apps to the entire ecosystem, so that instead of every individual open source project or partner having a different way of talking about the bits and pieces that go together, we have one common, universal way of representing them, similar to the way Swagger gives us a common language and mechanism for talking about APIs.

Now let's talk about what SAM can do for us when it comes to building serverless applications. Many of you have told us, many of you have told me, that one of the biggest sources of friction in trying to adopt Lambda and serverless approaches has been the absence of a CI/CD tool chain from AWS, particularly one tailored well for serverless apps. So today I'm happy to announce that we've got all the pieces in place to make this easy. Using SAM, you can describe the different parts of your serverless application. You can commit your code to GitHub or CodeCommit, and with the help of CodePipeline we can automatically detect that push, extract your code, and insert it into the pipeline to kick off the process. From there, our newly announced service CodeBuild can do the building and packaging of your serverless application, and for those of you who've been frustrated by the need to include all of your third-party Python and Node libraries, CodeBuild can now do the pip and npm installs for you, so there's no need to manually package those libraries anymore.
Of course you can test, and you can add as many gates as you want to determine when the pipeline should proceed. When you're ready to deploy, whether to test, staging, or production, CodePipeline will automatically integrate with CloudFormation and use that SAM model either to create your resources for the first time or, if they already exist, to go out and update them. So CodePipeline gives you the mechanism to orchestrate and automate all of those steps, providing a complete end-to-end solution for building, packaging, and deploying serverless applications based on SAM. He's quite the useful squirrel.

So now you've built and deployed. The other question we hear a lot is: how do I diagnose it? You've given me this thing that's got lots of little pieces; it spans lots of services, maybe lots of APIs; I've got events flowing all over the place. How do I understand that topology? How do I get my head wrapped around what's going wrong when something doesn't work? And since I can't log into the boxes, how do I know what's happening when I've sent an asynchronous event and I'm not sure why it's not popping out the other side? Our answer for gaining insight into how these functions are behaving is AWS X-Ray. Today we announced this new service, which is designed to help you profile, trace, and understand the topology of your services running in the cloud. While X-Ray has a lot of uses, I'm going to focus here on its value for serverless applications, which I think has been one of the big missing pieces in making serverless really easy to use and serverless applications really easy to diagnose. With X-Ray you can visualize the service call graph of your app, and you can see a dynamic service map of what's calling what, allowing you to discover where your dependencies are and understand what's actually been built in the system.
With that, you can detect hotspots: places where events aren't moving from place to place as you expected, where throttles are occurring, or where other things are happening that might represent problems you need to act on. Once you've decided where in your broad application topology the problem exists, you can drill in, pinpoint service-specific issues, get timing breakdowns, and get all kinds of information about the specific pieces of that dynamic profile, including parts that have previously been somewhat opaque, like dwell times for asynchronous invokes in Lambda, for example.

Now, we're really excited by all the improvements we've been making to developing and diagnosing serverless apps, but we're even more excited by the growing ecosystem around us. We love that so many companies and individuals have joined us on the serverless journey, have gotten equally passionate and excited about it, and are offering all kinds of commercial and open source solutions, from CI/CD approaches to performance monitoring to different kinds of application frameworks. There's a large and growing list of partner offerings, so I want to take a quick look at that. Companies like Twilio have brought their messaging system to Lambda, making it really easy to integrate business logic running on us with their modern mechanisms for communicating and conversing with both people and machines. Companies like Algorithmia have made the many algorithms they've gathered from universities and other places available through a Lambda integration. Zapier offers a way to take enterprise workflows and events from lots of different companies and bring them together, and uses Lambda in that webhook mode to do the processing and customization. CloudBees, Codeship, and others provide build and deploy mechanisms for serverless apps.
Datadog, Loggly, Splunk, and Sumo Logic give you great ways to do logging, auditing, performance analysis, and monitoring for serverless applications. All of these are available, with more information, through our partner network, and of course many of them are also available to get started with immediately in blueprints right on the AWS Lambda console.

Now, over in the open source world, SAM is in great company. We've got other frameworks that come from AWS itself, such as Chalice, but there are also some really great projects out there, like Apex, Claudia.js, and Gordon, that are making it much easier for people to build different kinds of web application frameworks on Lambda and API Gateway, and the Serverless Framework offers a fantastic way to do local testing, configuration, and deployment of Lambda applications. We're excited to see this list constantly growing, and some of the new projects just getting started in domain-specific areas like big data and scientific and numeric computing are really exciting to us as well. This is definitely a watch-this-space: we think that over the next year, open source is just going to blossom with lots and lots of domain-rich solutions built on a serverless foundation.

Now, we talked about how to bring order to the different spatial pieces of a serverless app using SAM, but there's also a potential temporal component. We've talked to you in the past about how important it is to keep Lambda functions stateless. That's part of the core design: it helps you follow good design practice, and it enforces the separation between code and storage, so there are lots of advantages to it. But sometimes you need a little bit of state. For example, you might want to retry a failing function more than the standard number of times. Or maybe you've uploaded an object to S3 and you want to do many different things with it: you've got an image and you want to create not just one thumbnail but ten different kinds of representations and formats.
Up to now, it's been a little challenging to bring that kind of execution order to what is an inherently stateless function. So, everybody remember that class we all took in the computer science undergrad, maybe sophomore year, where you learned about state machines? Well, don't worry, we're not going to bore you with math and theory here today. All you need to remember is that we're now providing you with a new service that can help you organize multiple Lambda functions, or different steps, to do your bidding when you care about the order in which they run. What would you use this for? Well, sometimes you want to control the sequence of two functions: you need to know with certainty that the first function finishes before the next one starts. Or you want to do a fan-out, like my S3 example, where you need to have multiple functions run and you might also need to wait until they're done before taking a finalization step, so not just scatter out but also gather back in. Sometimes you need to retry a function multiple times; maybe you're even polling another system that hasn't yet been converted to this event processing style, and you just have to keep trying over and over until some particular piece of information or some user input is finally received. And more generally, even though Lambda functions can only run for five minutes each, you might have a broader workflow that spans hours, days, or maybe even weeks, even though the individual pieces of it run as Lambda functions. This is where AWS Step Functions comes in. It runs state machines in the cloud, and it lets you keep that clean separation so that your code stays in the Lambda function while the choreography, the workflow part, is held inside AWS Step Functions. There's a really nice, easy-to-use visual designer you can get to in the console, you can express it in JSON, and of course you can do it all programmatically as well.
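As a rough sketch, a state machine definition in Step Functions' JSON-based language chains Lambda functions together and can express retries; the ARNs, state names, and retry settings below are illustrative:

```python
import json

# A two-step workflow, expressed as a Python dict and serialized to the
# JSON that Step Functions accepts: run CreateThumbnail, retrying on
# failure, then run Finalize once it has succeeded.
state_machine = {
    "Comment": "Sequence two Lambda functions with retries (illustrative ARNs).",
    "StartAt": "CreateThumbnail",
    "States": {
        "CreateThumbnail": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CreateThumbnail",
            "Retry": [
                {"ErrorEquals": ["States.ALL"], "MaxAttempts": 5, "IntervalSeconds": 10}
            ],
            "Next": "Finalize",
        },
        "Finalize": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Finalize",
            "End": True,
        },
    },
}

definition_json = json.dumps(state_machine, indent=2)
```

That JSON string is what you would hand to the Step Functions API alongside an execution role; your business logic stays in the two Lambda functions, while the ordering, retries, and waiting live entirely in the state machine.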
This makes it really simple to build, run, and monitor multi-step applications, even when they outlive any individual Lambda function's execution time, supporting thousands of workflows and millions of simultaneous steps.

Last year at this time, Lambda was in just four regions. Today Lambda is in ten regions worldwide, with plans to take it into many more data centers as well, offering you the ability to create applications that live wherever your clients and data need to be. But we don't think that's enough. In fact, we want to bring Lambda, and Lambda's ease of use and serverless model, to every place our customers need compute power. So today Werner announced Lambda@Edge, a preview of a new capability where we take Lambda functions and push them out into our global points of presence, our 68 POPs today and counting, where you can now run Lambda functions. In the Lambda@Edge preview you'll be able to target CloudFront distribution events, so you can react to both content retrievals and origin fetches as they occur in those POPs. Now, we've got some restrictions on the amount of capacity during the preview, but you should expect us to lift those over time and to broaden the set of use cases and scenarios that can run in the POPs. We're really excited to see what customers are going to do with this, and to see Lambda going into newer and broader use cases.

Now, the POPs are great, but they're still part of our cloud, and as you heard in Andy's keynote yesterday, we're also taking Lambda functions outside the AWS cloud by enabling them to run on devices. When we created Lambda, we intentionally chose a programming model that could be separated from the underlying infrastructure, and that abstraction has proven super valuable to us, not just in making Lambda itself successful, easy to use, and easy to scale, but also because it means we can apply Lambda functions in these new scenarios.
Take the newest generation of the AWS Snowball, where you can now run Lambda functions to transform or audit data as you ingest it onto the device en route to the AWS cloud. We thought that was such a good idea that we said, let's take the framework we built for the Snowball, expose it, and give it out to anybody who wants to use it on their own devices. That was the origin of Greengrass, the framework that powers Snowball's compute, now available to hardware manufacturers who want to add Lambda and messaging capability to their devices. Because sometimes you've got something, whether it's on a boat, on a tractor, or in some other place, where either the need for responsiveness and low latency or the lack of connectivity to the cloud requires you to do that compute without the benefit of a data center connection. So look for Lambda functions to start appearing inside your cameras, appliances, thermostats, and everywhere else.

You know, two years ago I was on the stage at the Palazzo introducing a preview of this new thing called Lambda, and my team and I were incredibly passionate and incredibly excited about its potential, but of course we didn't know if everybody else would be. In the two years since then, a huge number of companies and enterprises have adopted serverless approaches as a core part of their application development, and they've come to depend on Lambda and API Gateway for their most critical business functions. With Lambda and API Gateway, with the growing set of other AWS serverless offerings, and with our incredible ecosystem and partners, we're enabling customers to innovate more, deliver faster, and lower costs. It's been an amazing ride, but as we like to say at AWS, it is absolutely still day one for serverless. We are only just getting started. Thank you, enjoy the rest of this incredible program we have for you here at the Mirage today, and go serverless! [Applause]

5 Comments

  1. Sw yx said:

    no comments about the magic trick?!? that was fun

    June 29, 2019
  2. Jay P said:

    Thomson Reuters processes 4,000 req/s on lambda? Isn't this VASTLY more expensive than running an EC2 fleet?

    June 29, 2019
  3. Nicholas Shook said:

    this is super awesome stuff! I wish I made it to this talk at Re:Invent

    June 29, 2019
  4. Leslie-Alexandre DENIS said:

    An IT world, without ops. That's a smart move to gain new market share but it's far from doable. Let's find out the new "Knight Capital" case in months.

    June 29, 2019
  5. Makram Kamaleddine said:

    This is incredible. Thanks for posting!

    June 29, 2019
