February 10, 2016

New York

Containers & PaaS: Complementary or Mutually Exclusive?

It's always tough to be the one standing between you and happy hour and still have to talk about something, right? So hopefully I can keep this short. I originally developed this as a kind of follow-along, hands-on lab. The idea is that I want to give you something tangible so you can compare a platform as a service and containers side by side. Specifically, for platform as a service we are going to look at Cloud Foundry, and then we are gonna look at Docker in terms of containers, right?

So the idea is that you can try some of the usual things that you're gonna do, like scale, connect to logs, kill a container and see how it recovers, things like that, right? How do you do this in a PaaS environment versus how would you do this in a Docker environment, and which of those things you may have to roll yourself, things like that, right? So it is structured as a hands-on lab. I will give you a pointer to the hands-on lab, but I'm going to tell you that it's going to be a little bit hard to follow along with me, so what I would suggest is you take this link and treat it as homework, okay? And of course I have been a teacher for a long time. The prerogative of the teacher is to do the simplest example in the class and let you guys do the tough part.

So I'm gonna go through a very simple example; the idea is that it at least drives home the comparison between the two, platform as a service and containers in general. And again, containers aren't anything new. I worked at Sun myself, and Bryan worked at Sun as well. A lot of you might have heard of Solaris Zones, and before that jails and all that, but I think Docker is really cool, because you can just take an image and run it.

So before I get started I wanted to get a level set. How many of you have used, well, I'm not gonna ask about Docker, cuz I assume that all of you are using Docker. How many of you have not used Docker before? That's probably the better question. Don't be shy, okay? One person, okay. How many of you have used Cloud Foundry before, some kind of PaaS kind of thing? Okay, a few people, okay, cool. So that's kind of what I was expecting as well: most of you use Docker, and some of you might be kind of familiar with Cloud Foundry, and that's the level that I'm going to start off with, okay? So what I'm going to show here, which is, I have no idea what that is, if somebody can help me fix that, you know, I had to reboot my laptop just now because sometimes the Mac doesn't like the external projector, okay.

Sorry. Let's move on. So PaaS and containers: are they complementary or mutually exclusive? I'll try to summarize toward the end of the talk. Okay, my name is Raghavan Srinivas and I work as a cloud architect and evangelist at IBM. The easiest way is to just call me Rags. A lot of people have a tough time pronouncing my name, but Rags is easy.

Those are the two ways to reach me. You can tweet at me, RagsS, okay? If you like the talk, tweet. If you don't like the talk, don't do anything. Okay? [LAUGH] This is the URL of the hands-on lab that you can follow along with. So I'll leave it there for a little bit, just in case you wanna copy this down, okay.

ibm.biz/containairesnychol. If you don't want to write down this link, just look for github and ragsns; it's basically a pointer to that. So github and ragsns, okay, the "ns" is from my initials, all right? Or just send me an email and I can give it to you, all right. How many of you really wanna do this? How many of you set up all the prerequisites that I had asked for? Anybody, anybody, at least one? No? Okay.

Okay. That's kinda what I expected. I am not completely surprised by this, but like I said, the prerequisites are pretty easy to set up. It requires the Cloud Foundry CLI and then it requires the ic plugin, which is the IBM Containers plugin, and then you can actually try both sets of commands side by side, and I'll show the structure of the hands-on lab so that you get an idea of how to do this, okay? So the agenda is pretty straightforward. I'm going to look at monoliths, microservices, all that crap, you know, all of the things that we are hearing about, right: immutable infrastructure, dot dot dot dot dot.
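
For reference, the prerequisites he's talking about come down to roughly this; the plugin location is a placeholder, since the IBM Containers plugin was distributed from an IBM-hosted URL that isn't given in the talk:
    cf --version                                   # Cloud Foundry CLI, installed from the cloudfoundry/cli releases
    cf install-plugin <URL-or-path-to-IBM-Containers-plugin>
    cf plugins                                     # verify the IBM Containers (ic) plugin is listed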

You know, all of those. We will take a look at Cloud Foundry, and we'll take a very quick look at containers, Docker, etc., and then we'll look at IBM Bluemix, because that's a platform where you can evaluate these two side by side very easily, and then do a summary, and that's pretty much it.

Okay, everybody on board with this? Anything that you'd like to see, anything you'd like to eliminate? If you don't want to talk Cloud Foundry, I can do that too, but hopefully it's gonna give us some perspective. So you've heard so many things about what developers need to do; developers are kingmakers, but developers are also dealing with so much stuff out there.

Okay? You have to be able to deploy at least ten times a second or ten times a minute or whatever the case may be, right? Continuous, seconds to deploy, it has to be continuous integration, it has to be mobile ready, it has to be able to fail fast. It should be any language, ...

So how do I deal with this? It's really not easy to be able to do this from a traditional perspective, that is, from a monolithic perspective where you had six-month, three-month, or sometimes even one-year development life cycles. Those are not acceptable any more, because you just can't do this.

Take just one very simple aspect from the distributed systems perspective, like for example scaling. How do I scale a monolithic application? If I want three instances of the yellow thing, the pentagon or whatever that is, there is really no way I can do that in a monolithic app if I only want three of those.

I really have to scale all the others with it as well, right? I get the orange entity, I get the, whatever those are. So there's really no easy way of scaling just one particular service. On the other hand, if you're developing it as microservices, you can scale it in whatever fashion you want.

So you want three of the yellow, three of the red, two of the blue, three of the purple, whatever; you can do that very easily, right? So microservices make a lot of things easier. I'm not gonna say it's just a no-brainer, just do it; there are a number of things that you have to deal with, and we'll see some of those aspects today. But in general, if you want to compare a monolith with a microservice, there are a number of aspects you're gonna look at, right? One of them is the code base.

Typically you had a single code base for a monolithic application, right? On the other hand, with microservices you have multiple code bases; each microservice has its own code. And going back to two-pizza teams, those teams work on each aspect of the business and they develop in separate ways, with the API, the handshake, being very clear between the different services.

Understandability: a monolith is confusing and hard to maintain. In a typical microservice model you get much better readability and it's easier to maintain. That's not to say that you can't write horrible microservices; if you ask me to write microservices, they may be very confusing and hard to maintain anyway. But the point is, in general, if you have a microservice and you have a very clear handshake, it's much easier to read and much easier to maintain.

From a deployment perspective, with a monolith you really have much longer deployment times, complex deployments, with maintenance windows being hours, maybe even days sometimes. But with microservices, if one piece isn't working the way it's supposed to, you may be functionally degraded, but it doesn't mean that the whole system comes down, plop, right? Some of those aspects may not be working, and typically you can deploy with very minimal or sometimes even zero downtime, okay? Language, you've heard this one: typically in a monolith you have to stick to one language and keep it simple, right? On the other hand, with microservices you are typically developing each one in a different programming language depending, again, on the comfort of the two-pizza teams, right? Somebody might like Java; I'm a Java guy myself, right, but, how many of you are Java guys, by the way?

Okay, all right, nobody shot me down, that's nice. And scaling: a monolith requires you to scale the entire application even though the bottlenecks are localized, right? So really there is only one service that you really wanna scale, but your hands are kind of tied behind your back because you have to scale the whole thing, right? And really we don't care about that. The example that I give is, let's say I have a queuing application, and for some reason

You are not able to handle the queue quickly. All that you need to do now is to scale the consumer. Once you scale the consumer, you will be able to consume at a faster pace than before, so it enables you to scale the bottlenecked services without scaling the entire application.

So you've heard a whole lot about things like designing for failure, service discovery, dot dot dot dot dot, a whole bunch of things. A great example of this is the Netflix application architecture, and it goes back to the question that I asked as well; they do a lot of things, you know, service discovery and so on and so forth.

But a lot of this may actually sound very similar to what we heard about ten years back. How many of you have heard of service-oriented architecture? Yeah, we all lived through that. And it seems very similar, but there are some subtle differences. I don't know if any of you have heard of UDDI; he's shaking his head.

UDDI was just too heavyweight. That kind of service discovery was too heavyweight; it really attempted to boil the ocean, and typically those kinds of approaches don't work. In microservices, the service discovery is more lightweight, and the idea behind it is to just get it functional and really not worry about trying to get everything going.

So some of the foundational services associated with the microservices model, and I can go down the list but I'm not going to go through all of them, are to be able to do elastic load balancing, to have canary testing, A/B testing, red/black deployments, blue/green deployments, to be able to do all of this in an easy way, okay, plus metrics, monitoring and logging.

Absolutely necessary. To be able to stream my logs into a common syslog interface like a Splunk, a Papertrail or something like that, how easy is that to do? How about message queues, how about event notifications, dot dot dot. I also talked about mobile services, so on and so forth.

And this is only the tip of the iceberg, if you will. There are a lot more. Having said this, this is a slide that I borrowed from [UNCLEAR], one of my former colleagues who works on this, and you look at these different technologies: you have to put all of this together in order to make it happen.

I mean, this is absolutely not everything. I'm missing a lot of technologies here, and it was a little bit old even when it was done, and it's definitely old now. But the point is, if you had to do orchestration, you had to think about: do I use Docker Swarm, do I use something else?

What about Mesos and Kubernetes? Where does Marathon fit in? Where does Chronos fit in? There is a whole bunch of stuff that you have to worry about. And then of course how do you deploy them, and so on and so forth. What a lot of PaaS platforms typically do is take an opinionated implementation.

It's a single implementation. It covers a lot of these things that you use in everyday life. I've been a developer for about, gosh, more than 20 years, okay, let's put it that way. And I have a lot of grey hair, and I really am tired of having to do everything myself. Roll this, connect rkt here, connect Docker Swarm and then put Consul on top.

All right, I would rather the platform take care of that, and that was my philosophy when I grew up with Java EE as well. When Java EE started, I actually talked to a lot of companies and they were like, I don't think Java EE is for us. We will do our own application server, because we have this requirement, this requirement, this requirement, this requirement, which is not met by the platform yet.

Fast forward about two or three years, and pretty much all of them consolidated around some of the application servers; there was no need to write their own application server. And that's kind of how I feel here: there is gonna be an opinionated implementation. Granted, it may not work in all cases, it may not work in a lot of cases, but I think it will be fine in most cases.

My premise is that the platform matters. A platform that rolls all these different technologies together makes it a lot easier from a developer perspective, and for most of you who still have black hair and are much younger than me, you can stay young and not have too much grey hair. I'm not gonna say there will be none, but that's the way it is.

Okay, it's based on Cloud Foundry and Docker and a number of different open source technologies, so there is an opinionated implementation, like I said, and we'll look at this as we go along. And having said that, Cloud Foundry is a foundation and a technology. So it has some bits, but it's also a foundation of different companies; I shouldn't say an amalgamation.

But essentially IBM is one of the sponsors as well, and there are a whole bunch of companies; Pivotal is one of them, they're the project lead for Cloud Foundry. And really it's platform as a service, and it makes use of containers. Actually, those containers precede Docker, or came about at around the same time.

For those of you who know, Docker started off as a platform-as-a-service company, and dotCloud is gonna go defunct on February 29th, or something like that. The point is that containers were already in vogue even before Docker was there. So containers are there as plumbing, and as a developer I really shouldn't care too much about how that plumbing is implemented, whether it's an rkt container, whether it's a Lattice container, whether it's a Docker container, or whatever container it is.

So that's kind of where I'm going with this. There is a whole lot of things as a developer that I need to worry about besides what infrastructure as a service I need to run on. Like for example, middleware configuration, application containerization, and so on and so forth.

So as a developer, again, developing my application is very simple, but these are the things I need to do. I need to provision the VM, install the application runtime, deploy the application. Probably one, two and three might be a lot easier with Docker, but I still have to worry about some of the other things.

I still have to worry about configuring the firewall, configuring the SSL termination, and so on and so forth. With the Cloud Foundry platform, you just do a push, you do a bind-service, and you do a scale, and whatever it is that you wanted to do. Just a lot simpler than having to handle it yourself.
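
A minimal sketch of that three-step workflow with the standard cf CLI; the app and service instance names here are placeholders, not from the talk:
    cf push myapp                       # upload the app and let the platform pick a buildpack
    cf bind-service myapp my-rabbit     # attach an already-provisioned service instance
    cf restage myapp                    # restage so the app picks up the injected credentials
    cf scale myapp -i 3                 # run three instances behind the router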

So at the heart of it you have what is referred to as a DEA pool, the droplet execution agents. Essentially what happens is every one of your applications is compiled into what is referred to as a droplet, and the droplet is what executes in the droplet execution agent pool.

And you scale your application by just creating more droplets, and whatever it is that you wanna do. A droplet runs in a container, and we don't need to worry about that; all of that is handled by the platform itself. So this is kinda how it works. Oh, one more thing about this: there is a whole bunch of other components that I really don't need to worry about; there's a health manager.

Again, if you go back to doing it all yourself, you have to have some framework that does that, or you have to roll it on your own. So if an application goes down, you have to automatically be able to re-spin it. Cloud Foundry does this automatically via the health manager.

So if something is down, it just brings it up; if one of the components is down, it can spin that up as well, which is kinda cool. So there are a lot of things that happen automatically, if you will. So how does this work? Essentially I push my application, and what it does is it looks through the different buildpacks.

Buildpack is a concept that was actually borrowed from Heroku; if you use a Heroku buildpack, you can use it exactly like that in Cloud Foundry. In fact, recently I wrote a blog about deploying a Swift application. I just took the buildpack from Heroku, put it in Cloud Foundry, and you are able to run it.
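
A sketch of what that looks like, assuming a custom (for example Heroku) buildpack hosted in a git repo; the URL is a placeholder:
    cf push my-swift-app -b https://github.com/<org>/<swift-buildpack>.git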

So essentially what happens is it detects. It finds out, oh, it's a Swift application, so this is the right buildpack that I need to use, and then it injects everything associated with that. The nice thing about this is, if you have a different framework, for example, in a Java situation, if you're using [INAUDIBLE] and JSP, it injects the appropriate environment with it as well.

If you are using Rails and Ruby it does the same thing, or Flask and Python, whatever it is, it does the same thing as well, so you don't need to really worry about any of that. It just compiles the application with the appropriate buildpack, creates a droplet, and the droplet is what executes in the droplet execution agent, and that's pretty much it.

So that's it for Cloud Foundry, and I'm going to come to the demo in a little bit. The presentation is just about over. I guess you've seen these slides so many times that I'm not even going to spend too much time on this, and these are the slides again. Why Docker? I think it's the right balance of simplicity and function.

And one of the key things about Docker is the number of images on the Docker public registry, which is really cool. If I have to do my big data application: I remember the first time, when I was doing a demo for SCALE or anyway some big data conference, it took me about three days to get everything going, and even after three days I wasn't sure if everything was running or not.

The last time we did a big data conference, it took me about 15 seconds to download the SequenceIQ image and be able to run it. So it makes it a lot easier; I think one of the cool things is the public registry that's available, where you don't have to reinvent the wheel. So, again, it's a very similar client-server architecture where you have a Docker host, your client, and a registry.

You're all Docker fans, so I'm not gonna spend any time on this, okay? But how does IBM Bluemix come into the picture? Again, infrastructure as a service was a great thing; Amazon was cool. It used to take, I don't know, 15 days to a month to get a server up and running. Now I could do it in, not 15 seconds, but maybe 2 minutes.

But you still have to do a whole lot of stuff, and that is where platform as a service can help you. You don't have to worry about the runtime, you don't have to worry whether it's Java 1.7 or 1.8, whether it's this particular version of Python, or about the different services that you are going to use; all of that is taken care of for you automatically. So that's platform as a service, and Bluemix is basically both Cloud Foundry and containers rolled into one; we'll see this in a moment.

So how do I use it? It automates the build of Docker images, it does a whole lot of stuff including checking your images for vulnerabilities. It gives you a warning and all that, and you can scale and auto-recover your containers. So if one of the containers is down, you put it in what is referred to as a container recovery group, and if that one particular instance is down, no big deal.

It's gonna recover it. It also puts a load balancer in front of it, which is HAProxy-based, and it balances the load; we'll see demos of that as well. And logging and monitoring come built in. You can just go to the console and everything is available for you. So the point I'm trying to make again is, if you are somebody like me who has lived through all these travails, you really don't need to go do this all yourself.

Just use a platform that works. The ic plugin is what extends Docker, and we'll see that in a second. Docker Machine, Docker Compose and Docker Swarm are in the future, but right now it's limited to Docker linking, Docker volumes and so on. So you can use those. And one of the cool things that a lot of enterprises like is that it has a private registry.

So if you just sign up for a Bluemix account you get your own registry. Private meaning based on your namespace, not completely private; it's still shared with other users. Cloud-based deployment: it can push images to the private registry and deploy to the cloud. With that said, I will do the summary later, but let me go through the demo.

I have about 20 minutes, so I think I should be able to do this in 10 minutes. So here's an example. What I'm gonna do is start right from the beginning. IBM Bluemix, I'm gonna set up; I'm logging in to US-SOUTH, so I'm gonna do a cf login. Can everybody see this or should I increase the font? Especially in the back, don't care? It's good, increase the font.

Did I hear anybody? So let me know; if it's bad I will increase the font. So I just put in my name and my password and I'm ready to go. And then, if I want to log in to the container environment, I just do cf ic login. I'm totally a command-line type of guy, I'm not gonna bring up the GUI at all in this session, but if you're interested, the graphical user interface is there as well and you can pretty much do whatever you want there, too.
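
Roughly, the login sequence he's typing; api.ng.bluemix.net was the Bluemix US-South endpoint at the time, and the credentials are prompted for interactively:
    cf login -a https://api.ng.bluemix.net    # Cloud Foundry login to US-South
    cf ic login                               # authenticate the IBM Containers (ic) plugin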

So notice here I can basically export this environment and use IBM Containers as my default Docker environment. So even if you don't have Docker installed on your laptop, at least you can get started immediately with everything on the cloud; you don't even have to get Docker installed on your laptop.

And then you can do whatever you want with it, but I still like to have Docker on my local laptop so that I can do some local testing. So now what I'm gonna do is show you some of the commands. All you need to do is cf ic help, and you see here that you can basically build an image from a Dockerfile, you can pause, you can ps, things like that.

Exactly like Docker: most of the Docker commands are available, so if I do a cf ic ps -a, which is kind of like docker ps -a, right, it shows me all the containers that are running. Does that make sense? And you'll see here that I have something called spring-boot, and basically it's running the three instances that I want, and so on.
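
The container-side commands mirror the Docker CLI almost one for one; a minimal sketch of the ones he runs here:
    cf ic help      # list the supported subcommands
    cf ic ps -a     # like docker ps -a: show all containers, running or not
    cf ic images    # like docker images: list images in the registry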

Then I can do cf ic images, which is kind of like docker images, and you see a whole bunch of images associated with that. Makes sense? So now what I'm gonna do is compare Cloud Foundry with this, because that was the objective of the exercise. So here is my manifest.

And essentially what I'm doing is pushing an application called pcfdemo, and all it's doing is pushing a WAR file. And you will notice this is gonna take a little bit of time, and I'm gonna do a little bit of song and dance then, because I've gotta keep you guys distracted.
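
A manifest along the lines of the one he's showing might look like this; the memory size and WAR path are assumptions, not taken from the talk:
    ---
    applications:
    - name: pcfdemo
      memory: 512M
      path: target/pcfdemo.war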

But essentially what's gonna happen now is it's gonna take this particular application, try to figure out which buildpack applies, and inject the appropriate environment with it. So you will see here pretty soon, [BLANK_AUDIO] let me see if I can bring it up. [BLANK_AUDIO] So you see that it associated itself with a JRE, because it figured out that it's a Java application, injected the Java buildpack and a whole bunch of other things that it needed, and then uploaded the droplet, and you'll see pretty soon that this is gonna be deployed on the Cloud Foundry instance.

So let's give it a second. And you'll see zero, then one instance; you'll see here that it's running one instance because I really didn't ask for more than one. Running more than one instance is pretty straightforward and we'll see that in a second. All right, so my application started.

And it automatically provided me with an endpoint. So all I'm gonna do is go and take a look at that. You can see a lot of tabs; I'm gonna just create a new window, that might be the easiest way. I've had some issues, but hopefully that's gonna come up.

All right, so it is a simple example, okay, and you'll see here, well, you can't see, so let me increase that: do you see that there is no rabbit service bound? So essentially the rabbit service is not there, and if I wanna connect the rabbit service, it's pretty straightforward. But before that, if I have to scale this, all that I have to do is say cf scale -i and whatever number, and you give it the application, pcfdemo; pretty straightforward.

So what I'm gonna do at this point is create a service. [BLANK_AUDIO] And I'm gonna call it, let's see, I don't know, nyrabbit. [BLANK_AUDIO] So what I'm trying to do is connect a service to an application. It's very straightforward to create, and again it's kind of the microservices model where each of those is a distinct service; they may be running in different containers.

I don't really care too much about that. I'm gonna just connect these two together, and now what I'm gonna do is just show the environment variables. And you'll see, cf env pcfdemo, I should have probably shown it before, but actually it works now: the environment variables are just VCAP_APPLICATION, right? And you'll see that that is gonna change when I restage this particular application.
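
The two commands behind what he just did, sketched with placeholder service and plan names (the exact RabbitMQ service label and plan on Bluemix aren't given in the talk):
    cf create-service <rabbit-service> <plan> nyrabbit   # provision a RabbitMQ instance named nyrabbit
    cf env pcfdemo                                       # dump VCAP_APPLICATION / VCAP_SERVICES for the app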

[BLANK_AUDIO] So this is gonna take a little bit of time again, and essentially what is gonna happen now is you're connecting a service to this particular application. There are a number of optimizations it can do as well. So right now I'm just restaging this with the service connected.

So let's give it a little bit of time, and again it's gonna go through and create a droplet. [BLANK_AUDIO] All right. If it doesn't work I have something running already; all right, there you go. So again, it is gonna figure out everything that's needed for the application, but more importantly, it's injecting the service into the application, and you will see how it's done. That's the same concept that translates over to containers as well, and we'll see that in a second. I'm not going to go through all of this, because it's a little bit uninteresting to the discussion that we are having, but it gives you an idea of how exactly this happens.

So it is bringing the droplet up, and as soon as it's done we'll take a look at the environment variables, and you'll see that there is something injected into the environment which tells the application that it's connected to a RabbitMQ service. All right, so again, same thing.

Basically it's starting one instance of that. [BLANK_AUDIO] Come on. So you'll see here all the dependencies are here, including the Liberty buildpack, and sometimes if I'm using Spring Boot it pulls in the CLI as well, and so on. [BLANK_AUDIO] All right, so now the app has started. So what I'm going to do is just do env to check on the environment variables.

You see here that I don't have that particular... did I do the bind-service? I did not do a bind-service. [BLANK_AUDIO] So we'll come back to that in a second. So let me look at services. I just restaged without doing anything, which probably should have given me a clue. So what I'm going to do is bind to rforic.

[BLANK_AUDIO] And if I look at my env it should have changed. [BLANK_AUDIO] You see here it injects, everybody with me? Into VCAP_SERVICES, essentially, it injected one, and all that I have to do is restage pcfdemo, and at some point you will see that it'll have a rabbit service bound to it.
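
The binding step he's describing, roughly; the service instance name is a placeholder since the spelling wasn't clear in the talk:
    cf bind-service pcfdemo <rabbit-service-instance>   # attach the service instance to the app
    cf restage pcfdemo                                   # restage so VCAP_SERVICES gains the rabbit credentials
    cf env pcfdemo                                       # confirm the bound service now shows up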

So let's not go through the whole thing, let's try something else. Not a big deal. So while that's going on, I'm gonna just do this ps -a. And you see here I have the spring-boot application. Remember I talked about a container recovery group? Creating a container recovery group is pretty straightforward.

Actually the exercise walks you through it; let me show you how I did this, because I didn't want to create the whole thing from scratch. [BLANK_AUDIO] Spring-boot. [BLANK_AUDIO] So all that I'm doing is, can everybody see this? I have a host name and a domain, and I say a maximum of three and a desired of three.
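
A hedged sketch of creating a container group with the ic plugin; the flag names are from memory of the 2016 plugin, and the image, hostname and port are placeholders, so treat this as an illustration rather than exact syntax:
    cf ic group create --name spring-boot \
        --desired 3 --max 3 \
        -n <hostname> -d mybluemix.net -p 8080 \
        registry.ng.bluemix.net/<namespace>/<spring-boot-image>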

So in other words, if it's less than three, the platform itself basically makes sure that it reaches the desired state, which is equal to three. So what I can do at this point: I have a very simple endpoint set up, so if I look at this, if I just print this, call this particular endpoint,

It should give me all the different variables that are associated with it. So what I'm gonna do is take a look with grep -i hostname. And you'll see, it's 13C230, everybody see that? And it just changed to 13C-whatever, 22D, and you'll see it cycling through the three, because I have three instances up and running, so you'll see that it goes back around. Essentially it automatically put the load balancer in front of it, so I don't need to worry about that.
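
What he's doing at the terminal is roughly this; the route and endpoint path are placeholders for the group's hostname and domain:
    curl -s http://<hostname>.mybluemix.net/<env-endpoint> | grep -i hostname
    # run it a few times: the reported hostname changes as the HAProxy-based
    # load balancer cycles requests across the three container instances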

So now I'm gonna get a little bit ambitious and kill one of those instances. I'm just gonna kill off this one and see what happens. So cf ic rm -f, and I'm gonna kill this off. All right, and I'm gonna inspect. [BLANK_AUDIO] spring-boot is doing pretty good. [BLANK_AUDIO] If only I could type the command.

[BLANK_AUDIO] It's group inspect and not inspect group. [BLANK_AUDIO] And all of those are in your instructions anyway. So what I'm doing here is taking a look at this particular container recovery group called spring-boot. So at this point, it says the create is complete; it still doesn't know that one of them is down, but at some point it'll realize one of them is down, the desired state needs to be three whereas the current size is two, and it will bring it back up.
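
The kill-and-inspect sequence, sketched; the container ID is a placeholder:
    cf ic rm -f <container-id>        # force-remove one instance belonging to the group
    cf ic group inspect spring-boot   # desired size stays 3 while current size drops to 2
    # a little later the platform re-creates the missing instance to reach the desired state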

So we'll go back to that in a second. But let's go back and look at this particular URL; you know, what we did here is connect a service. So you see the rabbit service was already started. [BLANK_AUDIO] And you see data being streamed from RabbitMQ, and all that I needed to do was start the data stream. I don't know what the heck it does, but it produces some data. But essentially what it shows you is that connecting to a service is pretty straightforward.

Now, how can we do the same thing with a container? I have already done this; again, the idea is I don't wanna spend too much time on this. So cf ic ps -a, and you see here there's a pcfdemo running. Everybody see this? So cf ic group inspect, and look at pcfdemo. Do you guys see this? What am I supposed to do? [BLANK_AUDIO] There it is.

Okay, so essentially what it shows is that it's bound to a particular service, and essentially this is the route that is being used here. And you will see here that it uses kind of the same concept that you used in Cloud Foundry, meaning it has the same environment variables, which is the VCAP services, the cloud credentials, and so on and so forth.

Those are the cloud credentials that are used to connect to the service. So it's the same concept, but from a developer perspective, I don't really care. All that I did was say, just connect to the service, and it's ready to go. So you can see here the URI, it's amqp and so on and so forth.

So what we'll do at this point is just take a look at this, and hope that it's the same thing that we saw before, but this time running in a container. [BLANK_AUDIO] Come on. [BLANK_AUDIO] I had some issues with this. [BLANK_AUDIO] And basically this is a demo that actually runs in Pivotal Cloud Foundry in exactly the same way. And you will see here the data is being streamed, and it behaves pretty much the same way; the only difference you will see here is, if I do the env, somewhere here it will tell you that it's Docker, and it gives you all the cloud credentials and all that. So what I was trying to show here was: how is what's running in Cloud Foundry different from what's running in Docker? It's kind of the same concept.

You still wanna be able to connect to the services that you are used to; you wanna be able to connect to MySQL, the Redis service, whatever it is, and the concept is exactly the same. All that you do is provide an environment variable which tells it what service, and that automatically injects all the appropriate credentials into the environment and runs it exactly the same way.

So let's go back and take a look at my recovery group to see if that worked: cf ic ps -a. [BLANK_AUDIO] And you'll see that this one was building four seconds ago; everybody see this? So I didn't have to do anything; the platform itself figured out, well, the desired size is three, whereas the current size is two, so now I have to do something, I have to bring it back to the original size. And without any programming it automatically brought it up.

You will see here, right now, if I do the same call, you'll probably only get two of those, because one of them is still building, right? Hopefully, at some point it will become three. So it's going between D and 30 and whatever, 2D and 30. And when it comes up, you will see that the group, I can't type and talk.

[BLANK_AUDIO] Okay. [BLANK_AUDIO] So let's see what happens here. [BLANK_AUDIO] So basically it shows create complete, it shows the current size is three and the desired size is three, okay, which means the build should also be complete. So let me just verify that by doing a cf ic ps -a, okay, and now it should show that this one is also running.

So now if I go back and do the same curl command, you should see two of them being the same, but the third one would probably be different. It's still not up, I think, but at some point you'll see that. Makes sense? So there are more things that I actually do in the hands-on lab; let's see.

[BLANK_AUDIO] So some of the other things that I do is show the health monitoring, recoverability of containers, and how you drain logs. Cloud Foundry logs are very easy to stream to any syslog format, like Splunk, Papertrail or whatever. Container logs, on the other hand, are also very simple: all that I do is cf ic logs, I specify the container ID, and I can get the logs associated with it.

If you go to the GUI, it actually has much more information as well, so you can take a look at that. And the Docker commands are supplemented with some IBM Containers-specific ones. For example, if you do a cf ic group list, you can look at all the groups that are running.
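
The log and group commands he mentions, roughly:
    cf ic logs <container-id>     # fetch one container's logs, like docker logs
    cf ic group list              # list the container groups: spring-boot, pcfdemo, ...
    cf ic group inspect pcfdemo   # drill into one group's configuration and current state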

So these are all the groups that were running: spring-boot, pcfdemo, etherpad-rags, whatever. And I can drill down further and look at inspect; this we did before, group inspect. So basically, if you're used to the Cloud Foundry commands, using the containers commands is pretty straightforward.

All that you do is cf ic and whatever the command is. So what I did was show how you can create a droplet and a container. I didn't really walk through how you can create a Dockerfile, but that's pretty straightforward. Actually, let me go back to that, and I can show you how that works.

It's pretty straightforward. I think I create an application right here. So, containerizing the application: here's my Dockerfile, and it's pretty straightforward. All that I do is run Apache Tomcat, and if I have to build this, I just do cf ic build; I specify a container namespace just to make it unique.

And then I say done. So it's kinda similar to the docker build that you do, and then it automatically pushes it to IBM, and then once you do cf ic images, you're gonna see something like this, which is what you can use going forward. You can also copy from the Docker public registry to the Docker private registry with the cpi command, so you can just copy an image if you want, and so on and so forth; group create is there, and so on and so forth.
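
A minimal sketch of the containerize-and-build flow he's describing; the Tomcat base image, WAR name and namespace are assumptions, not from the talk:
    # Dockerfile
    FROM tomcat:8
    COPY target/pcfdemo.war /usr/local/tomcat/webapps/

    # build in the IBM Containers service, tagged into the private registry namespace
    cf ic build -t registry.ng.bluemix.net/<namespace>/pcfdemo .

    # copy an image from the Docker public registry into the private one
    cf ic cpi tomcat:8 registry.ng.bluemix.net/<namespace>/tomcat:8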

So hopefully that gave you a flavor of how you can connect to services and how you can scale with Docker and with IBM Containers. At this point you cannot easily scale on the fly; you have to pre-size the group. You have to say three is the size I need, or four is the size I need, or something like that.

With Cloud Foundry it's a little bit easier, but I expect these things to evolve. And like I said, I don't think either of them is mutually exclusive. I think there will be some services that are easier to install in Docker, and there are some which are easier to install in Cloud Foundry, because you already have them there.

Now, especially for the lift-and-shift applications that they're talking about, it's easier to put them in Cloud Foundry, and you still get every benefit associated with microservices with Cloud Foundry as well, or at least most of them, especially from a deployment perspective. This is again a slide I borrowed from my friend [UNKNOWN]; he classifies platforms as a service into structured PaaS and unstructured PaaS.

I mean, everybody needs to be able to scale, everybody needs to be able to monitor an application, everybody needs to recover an application. If you're doing it in an unstructured PaaS, chances are you have to do it yourself, and that's kind of how he classifies this. So he's looking at a bunch of different ones; Docker, for example, is an unstructured PaaS. [BLANK_AUDIO]

[INAUDIBLE] [INAUDIBLE] is an opinionated implementation, [INAUDIBLE] some of the semi-structured ones with Docker containers, IBM Containers and so on. With that, let me come to the summary. Basically, cloud native architecture is the new paradigm, but really scale and speed are not just for the big born-in-the-cloud companies; they're really for everybody.

Microservices and monoliths, think about [INAUDIBLE]; you've seen that a number of times before. Think about containers, and again, containers come in multiple shapes, types and sizes. Containers don't mean just one thing; if you don't need to worry about the container underneath, don't worry about it.

Let the platform worry about it. And choose a platform that really enables all of this and brings all of them together. You can get some trial Bluemix accounts if you go to console.engine.bluemix.net, and you can also learn how to write applications; this hands-on lab (HOL) is on GitHub, or you can go to the original link that I pointed out.

With that, I think I have time for zero questions. I have time for a few questions. Any questions? I'm always concerned when there are no questions. No questions? Wow, I did a really great job or I completely screwed up. [LAUGH] Yeah, exactly. Thank you. No questions even with that. Wow, this is a tough crowd.

[LAUGH] Okay, right up there. [BLANK_AUDIO] In the meantime, if you have any questions, please line up. And let me put my email up so that you can reach me, [BLANK_AUDIO] and the link too. [BLANK_AUDIO] Okay, yeah. >> My question is just, you've sort of got a Linux Docker runtime environment, I guess, which you just demonstrated.

How is it different from a Diego runtime environment, which is in, like, the next Cloud Foundry? >> Yeah, so they're all based on, obviously, cgroups and so on and so forth. Okay, so the question was, Cloud Foundry has container technology, which is Diego, and essentially what is the difference between a Docker container and a Diego container? Did I summarize that okay? [BLANK_AUDIO] >> Why would someone use this rather than Diego, which would sort of treat an application just like a container, I guess? >> So with the Diego container you can actually run your Docker-based app just like you're running a Docker application.

There are certain applications that don't run well under Diego, and those have got to do with privileges and so on and so forth. But other than that, it's exactly the same. There may be some differences in terms of performance and so on, but I'm not completely aware of those. Next question? No more questions? [BLANK_AUDIO] Nobody? Okay, thank you.

>> [APPLAUSE]

Speaker:

Raghavan "Rags" Srinivas: Cloud Architect, IBM