22:00:06 <adrian_otto> #startmeeting containers
22:00:06 <openstack> Meeting started Tue Aug 26 22:00:06 2014 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:09 <openstack> The meeting name has been set to 'containers'
22:00:11 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2014-08-26_2200_UTC Our Agenda
22:00:17 <adrian_otto> #topic Roll Call
22:00:20 <adrian_otto> Adrian Otto
22:00:22 <thomasem> Thomas Maddox
22:00:31 <mtesauro> Matt Tesauro
22:01:05 <adrian_otto> we lost apmelton
22:01:26 <apmelton> o/
22:01:26 <thomasem> not fer long
22:01:28 <apmelton> andrew melton
22:01:33 <adrian_otto> oh, there you are
22:01:36 <dguryanov> hello
22:01:40 <apmelton> y'all looking for me?
22:01:53 <adrian_otto> I found it amusing that I called roll call and your client left the room
22:02:07 <iqbalmohomed> Iqbal Mohomed, IBM Research
22:02:09 <apmelton> in transition to a new bouncer and haven't quite got it set up
22:02:20 <apmelton> so back to the old one for now
22:02:30 <adrian_otto> makes sense
22:02:45 <adrian_otto> so before I advance to announcements, I will share trivial stuff
22:02:53 <adrian_otto> I hurt my shoulder last week
22:03:01 <thomasem> Oh that's no fun
22:03:09 <mtesauro> doing...
22:03:14 <adrian_otto> in California we have bad drought (lowest water levels in 100 years)
22:03:21 <adrian_otto> so I have been trying to save water
22:03:28 <rcleere> digging a well?
22:03:36 <adrian_otto> collecting the shower water in a pail while waiting for it to warm up
22:03:46 <adrian_otto> rcleere: :-)
22:04:09 <adrian_otto> then I take that shower water and use it in my washing machine
22:04:15 <adrian_otto> it's a 5 gallon pail
22:04:27 <adrian_otto> so when full it weighs 40 pounds
22:04:29 <apmelton> wow california doesn't sound nearly as fun as everyone says it is :P
22:04:36 <thomasem> you don't say
22:04:38 <adrian_otto> anyway, while I was hurling it into the clothes washer… ouch.
22:04:42 <thomasem> youch
22:04:45 <adrian_otto> that was last thursday
22:04:54 <adrian_otto> and I have been in serious pain, until today
22:05:02 <adrian_otto> today I am almost all better
22:05:17 <adrian_otto> so I'm in a terrific mood, just thought I would share that
22:05:18 <thomasem> that's good news
22:05:25 <thomasem> lol, that's awesome
22:05:41 <adrian_otto> ok, so on with the agenda
22:05:46 * jogo walks in late
22:05:46 <adrian_otto> #topic Announcements
22:05:54 <adrian_otto> first of all
22:06:01 <adrian_otto> OpenStack Silicon Valley event.
22:06:06 <adrian_otto> 2014-09-16
22:06:13 <adrian_otto> it's a Tuesday, and I'm attending
22:06:17 <adrian_otto> will I see any of you there?
22:06:39 <thomasem> I honestly didn't find out about that until today.
22:07:01 <adrian_otto> while I'm doing that, I will be missing our team meeting, so we should decide to 1) cancel our meeting for that day -or- 2) select a pro-tem chair to run it
22:07:12 <adrian_otto> #link http://openstacksv.com/ Openstack Silicon Valley 2014-09-16
22:07:24 <adrian_otto> what do you all think is best?
22:07:57 <jogo> adrian_otto: I have a general question when there is an opening in the meeting schedule
22:08:07 <adrian_otto> jogo, proceed
22:08:10 <thomasem> adrian_otto: Well, we may know more closer to that time.
22:08:23 <thomasem> e.g. who would be available to lead the meeting
22:08:36 <jogo> so I asked this at the nova midcycle but I am less sure than before
22:08:42 <adrian_otto> we can punt this for a week or so
22:08:43 <jogo> so some background
22:09:04 <jogo> I think OpenStack needs a good container story, as they have a lot of value for !VM use cases
22:09:34 <jogo> but why does OpenStack need an OpenStack-native solution instead of adopting another project and making them work better together?
22:09:48 <jogo> there is no shortage of things that try to manage containers at scale
22:10:01 <adrian_otto> great question
22:10:32 <adrian_otto> first of all, my goal is to make containers a first class resource in OpenStack
22:10:43 <jogo> and I didn't see the spec mention that
22:11:02 <jogo> so IMHO first class doesn't mean we need a native built from scratch solution
22:11:20 <jogo> the compute program can just say, use x for the container service
22:11:28 <adrian_otto> the current proposal suggests specific tools that we can leverage
22:11:38 <jogo> adrian_otto: hmm I must have missed that line
22:11:39 <apmelton> jogo: I think the important distinction between the openstack containers service compared to others (kubernetes, fleet, etc) is that our service not only builds and manages the containers, but also the infrastructure they are built on
22:11:44 <adrian_otto> the intent is not to recreate what exists, but represent them within the fabric of openstack
22:12:07 <adrian_otto> without a clumsy third party heat resource
22:12:18 <adrian_otto> something that taps into the same scheduling capability that OpenStack uses
22:12:25 <jogo> apmelton: so layer something underneath to do that
22:12:42 <adrian_otto> ok, so let's table this for like 2 minutes
22:12:45 <jogo> hmm why does it need to use the same scheduling?
22:13:02 <adrian_otto> and continue it once we close out announcements
22:13:08 <jogo> adrian_otto: kk, that's why I asked for when you had a moment, sorry for derailing
22:13:13 <adrian_otto> any other announcements from team members?
22:13:32 <adrian_otto> jogo, Oh, I thought you were referring to 9/16, sorry.
22:13:43 <adrian_otto> I want to have this debate, for sure.
22:14:25 <adrian_otto> ok, so no other announcements, so advancing to next agenda item
22:14:37 <adrian_otto> #topic Discuss Specs for OpenStack Containers Service
22:14:52 <adrian_otto> first the links, and a quick update on each
22:15:02 <adrian_otto> #link https://review.openstack.org/114044 Spec Proposal
22:15:11 <adrian_otto> there is a revision as of today for your review
22:15:25 <adrian_otto> #link https://review.openstack.org/115328 Repo Review
22:15:33 <adrian_otto> on this topic, something perplexing happened
22:15:46 <adrian_otto> we were asked to work within the OpenStack compute program using Stackforge
22:15:58 <adrian_otto> which apparently the current rules do not allow.
22:16:20 <adrian_otto> If we are going to use the OpenStack trademark, then we need to be in the openstack/ namespace, so I resubmitted this accordingly.
22:16:37 <jogo> adrian_otto: you were asked to work on stackforge before going to the compute program
22:16:56 <adrian_otto> the real intent was to allow for rapid iteration on a project, which is possible regardless of Stackforge or compute program work
22:17:10 <adrian_otto> as they can have separate review teams, and just not tag releases
22:17:41 <dguryanov> What about https://etherpad.openstack.org/p/containers-service-api?
22:17:44 <adrian_otto> and so long as we trust the compute program PTL not to cut a release of containers until it is ready, then the location should not matter
22:18:12 <adrian_otto> thanks, I will get to that in a moment, dguryanov
22:18:37 <adrian_otto> so although we have contributors working on implementing an API spec, there is not currently anywhere to land that
22:18:53 <adrian_otto> until I get through the red tape of 115328
22:19:40 <adrian_otto> ok, any questions on the current state of the code repo?
22:20:53 <adrian_otto> ok, so next part is:
22:20:55 <adrian_otto> #link https://etherpad.openstack.org/p/containers-service-api Previous Containers Service API Draft
22:21:17 <adrian_otto> this dates back to the 2013 timeframe when a containers API was first discussed
22:21:38 <adrian_otto> dguryanov: ^^ what are your thoughts on this?
22:21:50 <apmelton> adrian_otto: you asked me to come up with reasons I didn't want to use the docker api as our implementation: https://gist.github.com/ramielrowe/4d162d780977542997a8
22:22:06 <dguryanov> I think this API is a better starting point than docker's API, but it has some gaps
22:22:07 <apmelton> I didn't get much time to work on it, but those are the basic reasons for my position
22:22:20 <adrian_otto> apmelton: excellent! Thank you!
22:22:49 <adrian_otto> apmelton: notice that I adjusted our proposal to suggest an alternate approach in accordance with our discussion last week
22:23:13 <adrian_otto> Docker API users can use libswarm and an openstack containers backend to talk to the openstack containers service API
22:23:18 <apmelton> adrian_otto: yup, I think that we can offer equivalent functionality to the docker api
22:24:05 <adrian_otto> and we can still provide access to other sorts of containers, so if someone prefers openvz they could use that
22:24:16 <adrian_otto> as a backend module to the containers service
22:24:43 <adrian_otto> and possibly even use docker CLI to control that, as perverted as that may sound.
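For illustration, a minimal sketch of the interoperability described above: the stock Docker client and SDK can be pointed at any endpoint that speaks the Docker wire API, so a containers service fronted by a libswarm-style adapter could be consumed with unmodified Docker tooling. The endpoint URL below is a made-up placeholder, and the Docker Python SDK post-dates this meeting; both are assumptions used purely to illustrate the idea.

```python
# Sketch: point the standard Docker Python SDK (a later-era library,
# used here only for illustration) at a hypothetical endpoint that
# speaks the Docker wire API -- e.g. an OpenStack containers service
# behind a libswarm-style adapter. The URL is a placeholder.
import docker

client = docker.DockerClient(base_url="tcp://containers.example.com:2375")

# Ordinary Docker calls then work unchanged against the alternate backend.
for container in client.containers.list():
    print(container.id, container.status)
```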
22:25:45 <adrian_otto> ok, so any more thoughts on this before resuming the discussion stemming from jogo's questions?
22:26:31 <adrian_otto> ok, jogo, you have the floor
22:26:36 <dguryanov> I still suggest creating a new etherpad page for the API
22:26:42 <jogo> adrian_otto: thanks
22:26:50 <dguryanov> And write all thoughts there
22:26:54 <adrian_otto> dguryanov: Good, I'll take that as an AI
22:26:57 <jogo> adrian_otto: so my understanding of this effort its twofold
22:27:08 <jogo> 1) have a defacto container answer for OpenStack
22:27:38 <adrian_otto> #action adrian_otto to create and share a new etherpad for recording current consensus about a containers service api
22:27:38 <jogo> 2) be able to provision compute instances from inside the container service so user doesn't need to worry about it
22:28:39 <adrian_otto> jogo, yes. I want owners of OpenStack clouds to have a built-in containers solution that just works, regardless of what instance type they have chosen to use for their nova service
22:28:51 <jogo> adrian_otto: sure but why not use an existing solution
22:28:57 <adrian_otto> and without scurrying around and bolting on third-party software to make OpenStack containers-ready.
22:29:01 <jogo> and just have the compute program 'bless' it
22:29:21 <adrian_otto> jogo: that approach has been tried and failed.
22:29:23 <jogo> adrian_otto: so this comes from my view that openstack should be not be  big tent but a small tent with a big ecosystem
22:29:28 <apmelton> jogo, because that conflicts with #1
22:29:29 <jogo> adrian_otto: do you have examples?
22:29:34 <apmelton> or at least my view of #1
22:29:39 <adrian_otto> scalr, nova-docker
22:30:00 <jogo> not if openstack, or the compute program in particular, says use outside things for this
22:30:07 <jogo> so nova-docker why is that a failure?
22:30:12 <jogo> and I have never heard of scalr
22:30:13 <apmelton> to be the defacto container solution for openstack, that container solution should present itself with very similar features to nova
22:30:45 <jogo> so I cannot comment
22:30:45 <jogo> apmelton: why?
22:30:52 <adrian_otto> because you can't do half the things that containers are meant to allow
22:30:52 <adrian_otto> nova features and containers features are a Venn diagram with limited overlap
22:31:06 <adrian_otto> that limited overlap is what nova-docker delivers now
22:31:18 <adrian_otto> and that's not enough to meet customer expectations
22:31:35 <apmelton> jogo, because otherwise it would be a bad user experience
22:32:18 <apmelton> I should be able to switch from using nova to using containers, and bring my cinder volumes and neutron networks with me
22:32:57 <adrian_otto> apmelton: +1
22:33:22 <apmelton> and I should be able to do that without having to learn an entirely new architecture
22:33:30 <jogo> apmelton: interesting idea
22:33:31 <adrian_otto> jogo: I must admit that back in November 2013 I thought *exactly* what you are expressing right now
22:33:48 <jogo> apmelton: but how do you migrate a full VM to a container?
22:33:49 <adrian_otto> and my position has evolved considerably as I started using containers every day.
22:33:51 <jogo> image wise
22:34:18 <jogo> adrian_otto: so there is clearly something I am missing in this discussion
22:34:19 <thomasem> Well, that's another step, and not impossible. OpenVZ has a prototype for that, I think.
22:34:20 <adrian_otto> what you begin to realize is that if you think of a container as just a cheaper VM, you are missing out on all the truly compelling things about containers
22:34:34 <jogo> adrian_otto: as you have more experience with containers than me and you changed your mind
22:34:44 <jogo> but I just don't see what I am missing right now
22:34:49 <apmelton> jogo, that's definitely tricky
22:35:09 <adrian_otto> ok, take for example the ability to set a shell environment variable key/value pair to be set at the time a container starts
22:35:10 <jogo> adrian_otto: sure, I see the value of OpenStack working well with containers
22:35:13 <mtesauro> +1 adrian_otto
22:35:14 <jogo> adrian_otto: let me ask a different question
22:36:30 <thomasem> capturing stdout and return codes too
22:36:34 <thomasem> shared namespaces, though a bit more niche in my opinion.
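As a concrete illustration of the capabilities just listed (environment variables set at container start, stdout and return codes captured), none of which the nova API models as first-class, here is a minimal sketch using the modern Docker Python SDK. The SDK post-dates this meeting and stands in, as an assumption, for whatever client the proposed service would expose.

```python
# Sketch of the container-native features mentioned above: injecting
# environment variables at start time, then capturing stdout and the
# exit code. Uses the modern Docker Python SDK purely for illustration.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3",
    ["sh", "-c", "echo hello from $ROLE; exit 3"],
    environment={"ROLE": "worker-1"},  # env var set at container start
    detach=True,
)

result = container.wait()  # recent SDK versions return a dict
print("exit code:", result["StatusCode"])              # -> 3
print("stdout:", container.logs(stdout=True).decode())
```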
22:36:41 <apmelton> jogo, what I'm suggesting is, images aside, our users should be able to provision containers and get almost the same openstack experience as if they used nova
22:36:58 <thomasem> It ought to feel like OpenStack
22:36:59 <jogo> why don't any public clouds today have native container solutions? and defer to seperate tools
22:36:59 <jogo> adrian_otto: I am not questioning the value of containers
22:37:17 <adrian_otto1> jogo, both GCE and RAX clouds have it
22:37:33 <adrian_otto1> through docker + libswarm; our work with OnMetal as well.
22:37:42 <apmelton> jogo because those tools are not multi tenant at the moment
22:37:45 <jogo> thomasem: I don't know what 'feel like OpenStack' means
22:38:10 <thomasem> jogo: using similar verbiage, data structures
22:38:23 <thomasem> jogo: async
22:38:31 <jogo> adrian_otto1: GCE supports a way to say just give me a container
22:38:35 <jogo> without doing anything else?
22:38:44 <adrian_otto1> with multi-tenancy and async, those are the two major differences.
22:39:01 <apmelton> jogo, they have a docker image that essentially is configured through metadata the user provides
22:39:04 <jogo> adrian_otto1: async?
22:39:12 <jogo> apmelton: sure that is just an image
22:39:19 <jogo> that is different then a full service though
22:39:27 <adrian_otto1> and still allowing for the actual container code to be pluggable, to allow for implementations for LXC/libct, openvz, docker/libcontainer, whatever.
22:39:51 <jogo> adrian_otto1: sorry I am still missing something, in part because of all the different voices in here saying different things
22:40:39 <jogo> The point that I am still stuck on is why should OpenStack have an OpenStack native solution to do this? I don't want another case of NIH
22:40:40 <adrian_otto1> jogo: have you seen the use cases section of the spec proposal yet?
22:40:45 <adrian_otto1> those are intended to give a perspective for who values this and why
22:41:01 <jogo> adrian_otto1: looking now
22:41:32 <adrian_otto1> jogo, if a suitable solution already existed, we would be using that now. What we have now is nova-docker and a clumsy heat resource with no scheduler. That's not working well enough.
22:41:45 <adrian_otto1> we also have libvirt/lxc
22:41:50 <adrian_otto1> which solves only a part of the use cases
22:41:51 <jogo> adrian_otto1: I am really confused about the nova-docker thing
22:41:56 <jogo> adrian_otto1: I thought that isn't related at all
22:42:30 <apmelton> jogo, do you have an example of another service provider, providing containers as a service?
22:43:20 <apmelton> with a non-native service providing that 'containers as a service'
22:43:20 <adrian_otto1> apmelton: there was only one, Tutum, and now they are part of Docker, Inc.
22:43:59 <adrian_otto> #topic Review Action Items
22:44:01 <adrian_otto> (none)
22:44:12 <adrian_otto> #topic Open Discussion
22:44:23 <adrian_otto> jogo: you are welcome to continue through open discussion
22:44:27 <jogo> adrian_otto1: as that is a nova driver
22:44:27 <jogo> so going through the use cases
22:44:27 <jogo> 1,2,3,4,5 don't have anything in them that is OpenStack specific
22:44:27 <jogo> is that accurate?
22:44:27 <jogo> adrian_otto1: sorry if I am coming across as contrary
22:44:31 <jogo> adrian_otto: so use cases ^
22:45:06 <jogo> adrian_otto: is it safe to say 1,2,3,4,5 have nothing in them that makes them OpenStack specific
22:46:24 <adrian_otto> jogo, the frame of reference is that the cloud operator is using OpenStack
22:46:33 <adrian_otto> and they have these use cases to address
22:46:47 <jogo> adrian_otto: right, but is my take on 1-5 accurate
22:47:00 <jogo> adrian_otto: I want to make sure I am not missing something before going on to the next two
22:48:05 <adrian_otto> 1-5 could be solved using a variety of approaches, some not including openstack at all. However, I'm after something that addresses each with a consistent user experience that does not require the cloud operator to do R&D to figure out how to address these use cases.
22:48:29 <adrian_otto> it should just work.
22:48:52 <jogo> adrian_otto: so yes.
22:49:01 <adrian_otto> as a cloud operator, I should not need to do circus tricks to solve those cases.
22:49:13 <jogo> so the does not require R&D to figure it out
22:49:37 <jogo> well I will get back to that
22:49:41 <jogo> ok the last two use cases
22:49:42 <adrian_otto> I want at least one configuration to work with OpenStack out of the box
22:50:12 <jogo> I like the use case in #6
22:50:28 <jogo> nice way of hiding extra complexity from the user
22:50:50 <jogo> but to solve 1-6 can't you add a small tool under an existing system?
22:51:00 <jogo> and I am not sure what you mean in #7
22:51:02 <adrian_otto> sort of, but not really
22:51:07 <adrian_otto> the key is multi-tenancy
22:51:21 <adrian_otto> if I am a single tenant cloud, then yes, there are options for that
22:51:44 <adrian_otto> but as a multi-tenant cloud, that's where it all falls apart and becomes rather yucky from an ops perspective.
22:52:06 <jogo> so as far as I can tell the model existing public clouds use for this is:
22:52:16 <jogo> charge and manage the instances and let user deal with all things containers
22:52:34 <jogo> in that model you don't need to deal with multi tenancy as each user would spin up their own copy of the service
22:52:45 <jogo> using a preseeded image or something
22:53:36 <adrian_otto> jogo: that user experience is sub-optimal
22:53:43 <jogo> adrian_otto: why?
22:53:57 <jogo> requiring a user to spin up a single instance?
22:54:04 <apmelton> jogo, because that means every user would need to manage that single instance
22:54:05 <jogo> isn't that how everyone does it today?
22:54:16 <thomasem> And doesn't that add up to unnecessary overhead once you have a significant customer base? A bunch of wasted resources that could be saved by some orchestration above it?
22:54:17 <adrian_otto> because the complexity of dealing with wiring up a container infrastructure and cloud resources is carried by the customer, not by the hosted service
22:54:18 <jogo> apmelton: but isn't that how everyone does it today?
22:54:32 <adrian_otto> I'm proposing a place where that complexity can be abstracted from the user
22:54:33 <jogo> thomasem: who said that instance won't run containers as well
22:54:51 <jogo> adrian_otto: right, so in the current form I don't really see that fleshed out in the spec
22:54:57 <jogo> adrian_otto: less I just missed it
22:55:27 <apmelton> jogo, that's how everyone does it today because it's a quick win
22:55:34 <apmelton> jogo, "win"
22:55:46 <jogo> apmelton: with minimal overhead to a user
22:55:48 <apmelton> it's a bad experience because as a user all I want to worry about are my containers
22:55:58 <thomasem> jogo: Nobody. I'm not talking about whether or not instances can run containers. I'm talking about providing a cleaner user experience by not making every customer use additional resources just to handle their containers when we could do it better as a cloud (we know the infrastructure).
22:56:12 <jogo> apmelton: well users are still charged per instance not per container right?
22:56:13 <adrian_otto> jogo, I'm trying not to slant the proposal too much toward what matters to public cloud operators.
22:56:13 <apmelton> I don't want to have to worry about upgrading Container Management Service X
22:56:29 <thomasem> jogo, correct, only per instance is the idea.
22:56:37 <adrian_otto> I want a sensible balance for both public and private cloud use cases, even small ones.
22:57:10 <adrian_otto> and the review is bordering on 400 lines now
22:57:19 <jogo> adrian_otto: that isn't very big ;)
22:57:19 <adrian_otto> I don't want it to be impossible to review either
22:57:30 <jogo> so specs are not code
22:57:33 <adrian_otto> that's with no API details in it
22:57:44 <jogo> so the rule of thumb about length is a little different IMHO
22:57:53 <jogo> anyway, I am concerned that we do something
22:58:06 <jogo> but the !OpenStack things in this space do it way better
22:58:07 <apmelton> jogo, yes, they are still charged for the instance even if they aren't using it
22:58:09 <jogo> and everyone just adopts that, and we waste our time
22:58:19 <adrian_otto> jogo, so you are suggesting that if we add additional rationale from the perspective of a public cloud operator, that it could help explain the desire to address the use cases?
22:58:40 <adrian_otto> we need to wrap up open discussion in a min
22:58:43 <jogo> adrian_otto: no I am not saying that
22:59:03 <adrian_otto> jogo, I encourage you to continue with us in #openstack-containers
22:59:06 <jogo> adrian_otto: I am saying I don't get why we cannot use the half dozen projects trying to solve this
22:59:17 <adrian_otto> we'd like to learn from your perspective on this
22:59:25 <jogo> adrian_otto: just joined the room
22:59:27 <jogo> thanks
22:59:40 <adrian_otto> thanks everyone for your attendance today.
22:59:46 <apmelton> wanna wrap up here and move to openstack-containers?
23:00:01 <adrian_otto> our next meeting is 2014-09-02 at UTC 1600
23:00:05 <adrian_otto> #endmeeting