16:00:51 <adrian_otto> #startmeeting containers
16:00:52 <openstack> Meeting started Tue Jul  8 16:00:51 2014 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:56 <openstack> The meeting name has been set to 'containers'
16:00:56 <apmelton> o/
16:01:06 <iqbalmohomed> Hello
16:01:10 <dguryanov> Hi!
16:01:22 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers Our Agenda
16:01:37 <adrian_otto> #topic Roll Call
16:01:51 <dguryanov> Dmitry Guryanov, Parallels
16:01:52 <adrian_otto> Adrian Otto
16:02:00 <thomasem> o/
16:02:00 <apmelton> Andrew Melton
16:02:04 <thomasem> Thomas Maddox
16:02:05 <iqbalmohomed> Iqbal Mohomed, IBM Research
16:02:12 <PaulCzar_> Paul Czarkowski
16:02:19 <erw_> Eric Windisch, Docker
16:02:23 <sew> steve wilson
16:02:26 <xemul> Pavel Emelyanov, Parallels
16:04:18 <adrian_otto> good, looks like we have nice strong attendance today
16:04:18 <adrian_otto> feel free to chime in at any time to be recorded in the attendance
16:04:23 <adrian_otto> #topic Review Action Items
16:04:25 <adrian_otto> (none)
16:04:34 <adrian_otto> #topic Call For Proposals
16:04:56 <xemul> Proposal (sorry if this was already discussed): how to mount volumes in container?
16:04:58 <adrian_otto> ok, so this team has really made a ton of progress discussing our options
16:05:21 <adrian_otto> xemul: I will come back to your question a bit later in the agenda. I have made a note
16:05:28 <adrian_otto> any other additions to the agenda?
16:05:31 <xemul> Thank you
16:05:57 <adrian_otto> ok, so regarding Call for Proposals
16:06:14 <adrian_otto> we have discussed a good deal, and driven consensus on a number of topics
16:06:32 <adrian_otto> one thing that's hard to do is think about the future in terms of abstracts
16:06:48 <adrian_otto> one way to make considerations easier is to draw a sketch
16:07:32 <adrian_otto> so I'm asking our team if we have volunteers willing to draw sketches in the form of spec proposals that will outline our future state of OpenStack with containers capability added
16:07:41 <Slower> I wonder if a discussion on glance integration would be useful?
16:07:44 <adrian_otto> the idea here is that we would have more than one submission
16:08:13 <adrian_otto> Slower: for an interactive topic today, or in the scope of a containers proposal for OpenStack?
16:08:40 <Slower> maybe interactive topic
16:09:04 <adrian_otto> Slower: okay, I have added that to my list.
16:09:31 <adrian_otto> so I wanted two key outcomes from our time here today:
16:09:52 <adrian_otto> 1) Statements of interest from those willing to work on proposals
16:10:01 <adrian_otto> 2) A sense of where those proposals should be submitted
16:10:13 <adrian_otto> I'll work on #2 first while you each think about #1
16:10:43 <adrian_otto> I suggest that the proposals follow a prevailing format used in a number of OpenStack projects...
16:10:47 <adrian_otto> Format to follow prevailing nova-specs template: http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/template.rst
16:10:57 <apmelton> +1
16:11:03 <funzo> adrian_otto: are proposals different than specs?
16:11:09 <funzo> ah
16:11:11 <erw_> +1 on using specs
16:11:16 <funzo> +1
16:11:16 <thomasem> +1 from me as well
16:11:21 <adrian_otto> if our proposals are actually *not* Nova centric, then we might want to make a containers-specs repo for this purpose
16:11:31 <xemul> +1 for spec too
16:11:53 <adrian_otto> ok, so some support expressed for that format
16:12:02 <thomasem> just a bit
16:12:02 <thomasem> lol
16:12:21 <adrian_otto> does anyone have an alternate point of view on how to express a concrete plan for adding container support in OpenStack?
16:12:24 <erw_> I am happy to work on specs.
16:12:38 <xemul> I'd like to add my 5 cents for differentiating containers from Nova
16:12:39 <adrian_otto> erw_: Thanks. I will join you to help with that work.
16:12:42 <thomasem> Would the containers-specs repo be a good location for specs that cover multiple OpenStack services?
16:13:08 <adrian_otto> thomasem: yes, that is a possibility
16:13:33 <adrian_otto> one advantage to using the nova-specs repo is that it is likely to get a lot of eyeballs from key stakeholders
16:14:06 <erw_> adrian_otto: I propose we draft in an etherpad and transfer to a spec document, submitted via code-review for wider consideration
16:14:14 <adrian_otto> if we create a new repo, there is a risk that it may get less consideration
16:14:21 <thomasem> that's very true
16:14:25 <adrian_otto> erw_: Good idea
16:14:34 <thomasem> Is there an existing location for specs and features involving multiple services?
16:14:46 <thomasem> specs related to*
16:14:52 <erw_> adrian_otto: we could just discuss on a read-only version of the etherpad amongst the containers-folks and then submit to nova, as well
16:15:06 <adrian_otto> thomasem: not that I am aware of, each project handles them independently from what I have seen
16:15:08 <apmelton> right now with nova, you add your specs to the juno directory, is that a commitment to have the changes in that proposal in juno?
16:15:27 <apmelton> some of our changes may not be in scope for juno
16:15:35 <erw_> adrian_otto: it’s a good point that specs aren’t open for K yet.
16:15:39 <adrian_otto> it might also be wise to invite members of the OpenStack TC to review a containers spec, regardless of where it gets proposed
16:15:42 <erw_> and won’t be for a long while
16:15:46 <erw_> and specs are now closed for Juno
16:16:06 <dguryanov> We can discuss specs in openstack-dev mailing list
16:16:21 <erw_> also, as a sub-team of Nova, I believe the proper plan for specs is in Nova and not outside of it
16:16:31 <adrian_otto> #agreed the Containers Team shall use one or more etherpad(s) for initial drafts of a "Containers for OpenStack" proposal
16:16:32 <apmelton> erw_: I thought specs closed aug 21?
16:16:56 <thomasem> adrian_otto: Okay... I was wondering about that because, although a change involving multiple services would be broken down into specs for each one, it'd be good to have a higher-level concept to tie them all together, speaking to your earlier comment about difficulty in thinking abstractly.
16:17:01 <apmelton> feature proposal freeze: https://wiki.openstack.org/wiki/Juno_Release_Schedule
16:17:24 <thomasem> Oh yeah, apmelton, I saw that your userns spec was referencing Juno, but yeah...
16:17:32 <erw_> apmelton: this isn’t something we plan to land in Juno, so I doubt it will be well-received.
16:17:39 <adrian_otto> thomasem: nothing prevents contributors of project A from commenting on a spec in project B
16:17:42 <erw_> it’s really a spec for K
16:18:03 <erw_> and so close to FF, I doubt we’ll get eyes on it
16:18:08 <apmelton> erw_: I totally agree what we're proposing isn't going to make Juno
16:18:18 <thomasem> adrian_otto: true true
16:18:34 <apmelton> ah so you're saying for our specs, the window is for all practical purposes closed
16:18:38 <erw_> I propose we sync with Mikal to find an agreement on the best path for the spec process
16:19:04 <adrian_otto> erw_: +1
16:19:18 <adrian_otto> erw_: are you willing to take an action item to reach out to him?
16:19:19 <apmelton> erw_: +1 that's a good idea
16:19:30 <adrian_otto> or would you prefer if I do that?
16:20:44 <erw_> I don’t mind either way
16:21:29 <adrian_otto> #action Eric Windisch to check with Mikal for guidance on the best approach for submitting specification drafts for community discussion
16:21:46 <adrian_otto> in the mean time, we can work on etherpad documents
16:22:00 <adrian_otto> now, if the etherpad doc looks terrible to you, don't fret
16:22:13 <adrian_otto> we can afford to have a few alternate specs to consider
16:22:26 <adrian_otto> the cost of this approach is more work in expressing each set of ideas
16:22:45 <adrian_otto> the benefit is faster convergence on a preferred approach
16:23:01 <adrian_otto> any objection to proceeding with potentially multiple proposals?
16:23:33 <adrian_otto> I will ask that if we do end up with more than one, that each proposal recognize and reference others as alternatives
16:23:47 <adrian_otto> so reviewers see the complete picture of options
16:23:52 <adrian_otto> fair enough?
16:24:41 <dguryanov> Yes, it is.
16:24:52 <erw_> adrian_otto: I think those that are working on proposals should all be aware of each other and form an informal working group
16:25:18 <erw_> either by stating their intention today, or through proxy - ML, yourself, etc.
16:25:37 <adrian_otto> yes, erw_. My concern is ensuring that those interested in providing external input also know about the multiple choices in the works
16:26:22 <apmelton> adrian_otto: about alternatives, there is a section there for discussing alternatives. I also believe that links to reviews are discouraged in specs
16:26:45 <apmelton> might just be links to code reviews though
16:26:58 <apmelton> a spec review is different
16:27:14 <adrian_otto> apmelton: the template is a guideline. Specs are in RST format, so we can fit it in.
16:27:29 <apmelton> ok
16:27:56 <erw_> adrian_otto: big question is how many are working on proposals?
16:28:04 <adrian_otto> on the subject of links to reviews of code contributions, that's probably a grey area meaning that the code came before the review, which in some projects is discouraged.
16:28:25 <apmelton> that makes sense
16:28:27 <adrian_otto> s/before the review/before the spec/
16:28:59 <adrian_otto> erw_: I expect it to be a group of 4 or less, in all honesty
16:29:28 <adrian_otto> and we will source small bits of input from a dozen on this team, and maybe another dozen from outside the team
16:30:17 <adrian_otto> as any proposals take form, I suggest we discuss them here each week in terms of what's been added, and what comes next
16:30:42 <adrian_otto> ok, any further questions or concerns on this topic before I advance to the next?
16:30:51 <erw_> adrian_otto: a note on the wiki page or an etherpad would be a way to at least link those that are working on this - along with contact details
16:31:07 <erw_> if anyone wants to ping us, or if we want to ping each other
16:31:20 <adrian_otto> erw_: good idea. I like that.
16:31:21 <apmelton> erw_: I believe there's a section for that on each spec
16:31:55 <apmelton> the main author, and then other contributors
16:32:26 <apmelton> or are you suggesting just a list of team members who've volunteered to draft specs?
16:33:08 <erw_> apmelton: it sounds like we’ll have several draft specs
16:33:08 <adrian_otto> apmelton: A Wiki page for the initiative, like we have for our Team wiki: https://wiki.openstack.org/wiki/Teams/Containers
16:33:34 <apmelton> ok, sounds good to me
16:33:50 <adrian_otto> we could use a section of that, and link to it, or use a sub-page
16:34:02 <adrian_otto> any other thoughts on this topic?
16:34:46 <adrian_otto> #topic Volume mounting in containers
16:34:48 <adrian_otto> xemul "Proposal (sorry if this was already discussed): how to mount volumes in container?"
16:34:54 <adrian_otto> xemul: you have the floor
16:35:13 <apmelton> adrian_otto: do you have a link to the etherpad from two weeks ago?
16:35:22 <adrian_otto> Yes, one moment
16:35:42 <erw_> xemul: I’m interested to know if you have some ideas here. I’ve been working on this and have been making progress, but it’s all hypothetical / design, I haven’t written any code yet.
16:35:55 <dguryanov> I have found a spec about libvirt lxc containers boot from volumes - https://github.com/openstack/nova-specs/blob/master/specs/juno/libvirt-start-lxc-from-block-devices.rst
16:36:10 <dguryanov> Here is a quote from it:
16:36:18 <xemul> OK
16:36:18 <adrian_otto> We have one that we used during the host agent discussion on 6/24: https://etherpad.openstack.org/p/containers-plugin-arch
16:36:44 <xemul> So the thing is, the reason for not allowing this on the host is -- if we provide a corrupted disk image, mounting one can crash the box.
16:36:54 <dguryanov> As LXC will always share the host's kernel between all instances, any vulnerability in the kernel may be used to harm the host. In general, the kernel's filesystem drivers should be trusted to be free of vulnerabilities that the user filesystem image may exploit.
16:36:58 <adrian_otto> and one from the cinder support discussion: https://etherpad.openstack.org/p/container-block-storage
16:37:14 <adrian_otto> #link https://etherpad.openstack.org/p/container-block-storage Container Block Storage Options
16:37:33 <adrian_otto> apmelton: I think that's the one you wanted
16:37:39 <apmelton> adrian_otto: yup, thanks!
16:38:18 <Slower> The boot from volumes is a slightly different problem
16:38:41 <Slower> and has a different set of security concerns
16:38:44 <dguryanov> Why is it different?
16:38:45 <adrian_otto> and from previous minutes: http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-06-17-22.00.html we agreed:
16:38:48 <adrian_otto> AGREED: our first step for cinder support with Containers shall be addressed by option 8 in https://etherpad.openstack.org/p/container-block-storage (http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-06-17-22.00.log.html#l-169, 22:38:09)
16:38:48 <adrian_otto> AGREED: Option #6 from https://etherpad.openstack.org/p/container-block-storage is not our preferred outcome. Secure by default is preferred. (http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-06-17-22.00.log.html#l-213, 22:51:49)
16:38:55 <erw_> xemul: “crash the box” is a generous statement.
16:39:23 <xemul> Well yes :)
16:39:40 <xemul> I used one as "generic"
16:39:45 <Slower> hmm
16:39:46 <Slower> As LXC will always share the host's kernel between all instances, any vulnerability in the kernel may be used to harm the host. In general, the kernel's filesystem drivers should be trusted to be free of vulnerabilities that the user filesystem image may exploit.
16:39:53 <Slower> interesting statement
16:40:40 <xemul> Our point is that if we don't allow container to mount any FS and don't let it provide the virtual image it mounts, the security impact is not that big
16:41:05 <xemul> since e.g. ext4 has been there for many years and can be considered as "mostly harmless" in that sense
16:41:50 <Slower> dguryanov: you have to somehow mount the FS in the mounting-volumes case.. in the namespace of the container
16:41:53 <xemul> (sorry for being messy, the IRC format is quite unusual to me)
16:42:02 <Slower> xemul: but they could use any FS..
16:42:08 <xemul> Containers?
16:42:30 <Slower> for mounting volumes (user provided FS image)
16:42:32 <xemul> No, typically containers work on volumes they are provided with by the host
16:42:38 <thomasem> What's the most likely (~80%) case for which filesystem would be used?
16:42:54 <Slower> I'm sure that would be ext[234]
16:42:59 <xemul> In our experience the list is ext4 and tmpfs
16:43:14 <xemul> And some std linux ones like proc, sys and devtmpfs :)
16:43:18 <erw_> xemul: the problem is that with Cinder, we’re obligated to provide the block device to the container
16:43:19 <Slower> but that doesn't mean someone couldn't do something malicious
16:43:22 <thomasem> If we limit functionality in the ~80% case due to the other 20, we're probably hurting ourselves.
16:43:32 <erw_> xemul: as such, we ARE allowing the container to control the raw filesystem
16:43:41 <xemul> erw_, this is one of the reasons we should consider differentiating from VMs
16:43:42 <Slower> that's an interesting idea.. we could do a FS type whitelist
16:43:56 <thomasem> hmmm
16:43:57 <erw_> xemul: but this is where something like Manila or a similar alternative to Cinder is viable
16:44:32 <dguryanov> We can forbid raw access to a block device and only allow mounts
16:44:50 <Slower> raw access is the easier part
16:44:53 <xemul> erw_, I'll look at Manila, thanks
16:44:59 <funzo> and it's option #8 that we agreed upon
16:45:00 <Slower> since it doesn't involve kernel level interaction
16:45:04 <xemul> Raw access is typically not required
16:45:11 <apmelton> ultimately, what are we trying to get with containers+cinder?
16:45:21 <apmelton> if it's just a remote file system, Manila should cover that, right?
16:45:47 <funzo> firstly DefCore compliance, supporting volume attachment
16:45:56 <xemul> +1
16:46:00 <erw_> imho, containers+cinder, using the nova-volumes APIs should expose block devices into containers and containers should ideally be able to mount those volumes directly.
16:46:07 <xemul> Containers also want to have some persistent data store
16:46:32 <erw_> I think there is room for extensions or alternative APIs that do more sane and reasonably safe things
16:46:37 <erw_> and we should propose those APIs or extensions
16:46:40 <xemul> What if we expose the raw device to the driver, it mounts one and then launches the container app?
16:47:00 <xemul> And container processes no longer have raw access to FS?
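[Editor's sketch, not part of the meeting log: a minimal illustration of the approach xemul describes above, assuming a host-side driver; all device paths, directory names, and the container name are hypothetical. The driver mounts the Cinder-provided block device on the host and bind-mounts only the resulting directory into the container, so container processes never touch the raw device or drive the kernel filesystem code directly:

    # host-side driver (paths hypothetical)
    mount -t ext4 /dev/disk/by-id/virtio-vol0 /var/lib/containers/c1/volumes/data
    mount --bind /var/lib/containers/c1/volumes/data /var/lib/containers/c1/rootfs/data
    lxc-start -n c1    # inside the container, /data is just an ordinary directory
]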
16:47:18 <adrian_otto> Let's spend just a moment longer on this topic so we can touch on the other remaining agenda item. We can revisit this in Open Discussion today as well.
16:47:24 <erw_> xemul: there are valid usecases to allowing raw access to block devices, though
16:47:35 <xemul> erw_, what are they?
16:47:36 <erw_> xemul: cinder is NOT a filesystem service. Cinder is a raw block device service
16:47:43 <xemul> erw_, I agree
16:47:54 <erw_> arguably, we don’t need to allow mounting filesystems from Cinder, period.
16:48:00 <xemul> Should we take the new FileSystem service into consideration?
16:48:23 <adrian_otto> xemul: that is Manila, right? Or are you referring to something else?
16:48:31 <erw_> it isn’t in the API contract, and neither VMs nor containers should be obligated to understand the arbitrary data that resides on those block devices
16:48:42 <erw_> and yes, filesystems are arbitrary data ;-)
16:48:42 <apmelton> erw_: what are the pure block device use cases for container?
16:48:44 <xemul> adrian_otto, probably Manila, yes.
16:49:14 <erw_> apmelton: raw memory heaps. databases. virtual tape drives. um… I’m sure I can think of more
16:49:19 <adrian_otto> apmelton: see line 128 of https://etherpad.openstack.org/p/container-block-storage
16:49:33 <xemul> erw_, raw access to device w/o mounting one could be an option too, by the way.
16:49:49 <xemul> User namespaces don't allow mounting arbitrary FS, so this is doable
16:49:58 <adrian_otto> ok, final thoughts before we advance topics?
16:50:02 <xemul> And container may use two services -- Cinder for raw disks and Manila (?) for filesystems
16:50:20 <xemul> My final thought (probably old, sorry if it is):
16:50:33 <Slower> I like the idea of a filesystem whitelist.  That would let us at least limit the kernel code exposed in volume mounting
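[Editor's sketch, not part of the meeting log: one hedged way a host agent might enforce the filesystem whitelist Slower suggests, assuming util-linux blkid is available; the device path, mount point, and the whitelist contents are illustrative only:

    # probe the filesystem type on the attached volume before mounting
    fstype=$(blkid -o value -s TYPE /dev/vdb)
    case "$fstype" in
        ext2|ext3|ext4)                  # whitelist of long-exercised drivers
            mount -t "$fstype" /dev/vdb /mnt/volume ;;
        *)
            echo "refusing to mount non-whitelisted filesystem: $fstype" >&2
            exit 1 ;;
    esac
]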
16:50:47 <xemul> Since we're talking about Manila -- probably it makes sense to think of Containers service which is not Nova driver
16:50:50 <Slower> xemul: that is also a good point.. cinder & manila
16:50:57 <xemul> One of the reasons -- scaling the containers
16:51:00 <adrian_otto> Slower: please record that in the etherpad
16:51:28 <xemul> The thing is -- applying larger or smaller memory to a container is much easier (and actually works) than for a VM
16:51:31 <funzo> The API that's implemented for the nova-driver could easily be a subset of container service API impl
16:51:42 <xemul> That said, scaling in terms of applying a new flavor might not be that good
16:51:56 <adrian_otto> ok, this is a good discussion, so I'm reluctant to cut it short. I suggest we revisit this again next week.
16:52:11 <adrian_otto> #topic Announcements
16:52:14 <xemul> I don't mind
16:52:30 <adrian_otto> Reminder: adrian_otto will be on vacation 2014-07-11 to 2014-07-24. Eric Windisch will chair our 2014-07-15 and 2014-07-22 meetings.
16:52:47 <funzo> adrian_otto: enjoy the vacation!
16:52:54 <adrian_otto> erw_: you'll own the agenda. I'm happy to help out in any way you want before my departure
16:52:57 <xemul> adrian_otto, is the new meeting time already selected?
16:53:14 <adrian_otto> xemul: see:
16:53:24 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers Meeting Schedule and Agenda
16:53:25 <Slower> adrian_otto: done
16:53:42 <erw_> adrian_otto: thanks.
16:53:55 <adrian_otto> #topic Glance Integration
16:53:59 <adrian_otto> Slower: proceed
16:54:29 <adrian_otto> I will call time in a couple of minutes for Open Discussion, and then adjournment. This topic should be revisited next week as well.
16:54:32 <Slower> I'm realizing that there are some differences in how image/glance integration would work in e.g. LXC vs docker
16:54:43 <Slower> yeah we can punt if you want
16:55:02 <adrian_otto> let's discuss for just a moment
16:55:14 <adrian_otto> at least let us start to think on this topic
16:55:27 <Slower> so erw_ is working on putting the docker containers inside glance
16:55:28 <erw_> well, how about I start with the work I’ve been doing the past week
16:55:30 <erw_> haha
16:55:35 <Slower> erw_: go ahead
16:55:47 <erw_> I’ve ripped out the docker-registry as a proxy
16:55:58 <erw_> and have implemented save/load of images into/out-of glance
16:56:08 <adrian_otto> erw_: yay!!
16:56:19 <xemul> erw_, and how do images look like?
16:56:22 <erw_> adrian_otto: there are some serious security issues though, which I’d like to discuss - but we can offline that
16:56:30 <xemul> I mean -- is it container FS packed into virtual disk image?
16:56:32 <adrian_otto> indeed
16:56:47 <Slower> and in theory if the image is not in glance it will attempt to pull from docker registry, correct?
16:56:54 <erw_> xemul: docker ‘save’ will create a tarball containing multiple layers and tags
16:57:01 <xemul> Ah OK. Thanks.
16:57:02 <erw_> they’re then imported via ‘docker load'
16:57:10 <Slower> yeah it's native docker format right?
16:57:11 <erw_> Slower: no.
16:57:15 <erw_> er..
16:57:19 <erw_> no to the first question
16:57:26 <erw_> yes to the second - it’s a “docker native format”
16:57:33 <Slower> erw_: ah, how do we get new images into glance?
16:57:38 <erw_> where “a tar ball of tarballs and a manifest file” == native to docker
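[Editor's note, not part of the meeting log: for reference, a docker-save archive of that era looked roughly like this — layer IDs are shortened and the layout is a sketch rather than a confirmed listing:

    $ tar tf cirros.tar
    repositories                  # JSON manifest mapping repo:tag -> layer ID
    511136ea3c5a.../VERSION
    511136ea3c5a.../json          # per-layer metadata
    511136ea3c5a.../layer.tar     # per-layer filesystem diff
    ...
]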
16:57:40 <funzo> oh, I thought I saw a pull if the image isn't found
16:57:53 <funzo> I probably misread
16:57:58 <apmelton> erw_: so each glance image is a tarball of a tarball?
16:58:03 <erw_> Slower: it would be something like, “docker save cirros | glance image-create”
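[Editor's sketch, not part of the meeting log: the full round trip erw_ hints at might look like the following; the image name, glance flags, and the <image-id> placeholder are illustrative rather than confirmed from the meeting:

    # export from a machine running docker, import into glance
    docker save cirros | glance image-create --name cirros \
        --container-format docker --disk-format raw

    # later, on a compute host, the driver would do the reverse
    glance image-download --file cirros.tar <image-id>
    docker load < cirros.tar
]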
16:58:26 <adrian_otto> #topic Open Discussion
16:58:27 <Slower> erw_: ah so it has to be in the local registry to be available?
16:58:28 <adrian_otto> feel free to continue this discussion
16:58:48 <apmelton> when nova boots does docker have to pull the entire tarball down, or are layers somehow preserved?
16:58:52 <erw_> Slower: You would have a machine running docker to export the tarball from
16:58:57 <erw_> and from that tarball, you can import it into glance
16:59:22 <Slower> hrrm
16:59:27 <erw_> apmelton: it would pull the entire tarball down, which would write layers into the system..
16:59:31 <Slower> so not very good docker registry integration then.. not like before
16:59:35 <thomasem> I've got to run, unfortunately. Catch y'all next week!
16:59:36 <erw_> apmelton: so this is where there are security issues...
16:59:50 <erw_> apmelton: the layers and the tags in these tarballs are arbitrary...
16:59:56 <apmelton> yea, you could have a massive uncompressed image
17:00:07 <adrian_otto> we timed out, sorry guys
17:00:12 <erw_> apmelton: worse, you could overwrite the local docker’s idea of “ubuntu”
17:00:14 <adrian_otto> thanks for attending!
17:00:17 <erw_> from your “cirrus” image.
17:00:24 <apmelton> erw_: scary
17:00:26 <erw_> *cirros
17:00:30 <adrian_otto> #endmeeting