16:02:50 <adrian_otto> #startmeeting containers
16:02:51 <openstack> Meeting started Tue Dec  1 16:02:50 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:55 <openstack> The meeting name has been set to 'containers'
16:03:16 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-12-01_1600_UTC Our Agenda
16:03:47 <adrian_otto> #topic Roll Call
16:03:51 <adrian_otto> Adrian Otto
16:03:56 <wanghua> o/
16:03:57 <Drago> o/
16:03:58 <Kennan> o/
16:03:58 <adrian_otto> sorry for the delay.
16:03:58 <dimtruck> o/
16:03:59 <dane> o/
16:03:59 <hongbin> o/
16:04:00 <houming> o/
16:04:01 <rpothier> o/
16:04:02 <daneyon> o/
16:04:06 <rods> o/
16:04:07 <juggler> o/ Perry Rivera
16:04:08 <Tango> o/
16:04:17 <eghobo> o/
16:04:18 <muralia> o/
16:04:21 <suro-patz> o/
16:05:05 <adrian_otto> hello wanghua, Drago, Kennan, dimtruck, dane, hongbin, houming, rpothier, daneyon, rods, juggler, Tango, eghobo, muralia, and suro-patz
16:05:26 <muralia> Hi everyone
16:05:46 <juggler> hello all
16:05:48 <adrian_otto> #topic Announcements
16:05:52 <adrian_otto> 1) Blueprints have been targeted for Mitaka.
16:06:05 <adrian_otto> If you disagree with any of the selections, please see adrian_otto.
16:06:07 <daneyon> adrian_otto I have to leave in a few minutes to attend a customer meeting.
16:06:18 <adrian_otto> daneyon: ok
16:06:21 <adrian_otto> If you are unable to commit to implementing the blueprints, or finding contributors who can, let's un-target them from Mitaka, or get them re-assigned.
16:06:33 <adrian_otto> Several blueprints need further detail or team discussion to be considered. Priority values for these have been temporarily set to "Not" pending clarity.
16:06:54 <adrian_otto> we will have a chance to discuss a few of those today
16:07:04 <adrian_otto> for the ones we do not get to, let's plan to use our ML
16:07:12 <adrian_otto> any other announcements from team members?
16:07:32 <adrian_otto> #topic Review Action Items
16:07:35 <adrian_otto> 1) adrian_otto to review https://etherpad.openstack.org/p/mitaka-magnum-planning and use it to target Mitaka blueprints
16:07:39 <adrian_otto> Status: COMPLETE
16:07:45 <adrian_otto> that was the only action item.
16:07:51 <adrian_otto> Container Networking Subteam Update (daneyon)
16:08:10 <adrian_otto> daneyon: maybe you can make a quick update before you take off?
16:08:15 <daneyon> sure
16:08:23 <daneyon> #link http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-11-19-18.01.html
16:08:29 <daneyon> ^ of last week's meeting
16:08:44 <daneyon> I created a support matrix for the CNM
16:08:49 <daneyon> one sec and I'll get it
16:09:18 <daneyon> #link https://wiki.openstack.org/wiki/Magnum/NetworkDriverMatrix
16:09:22 <daneyon> ^ network drivers
16:09:32 <daneyon> #link https://wiki.openstack.org/wiki/Magnum/LabelMatrix
16:09:34 <daneyon> ^ labels
16:10:01 <daneyon> I separated the 2 b/c i think labels can be applicable beyond the CNM.
16:10:22 <daneyon> Tango is working on option 2 of this patch
16:10:24 <daneyon> #link https://review.openstack.org/#/c/241866/
16:10:34 <daneyon> to provide a non-overlay option for the flannel driver.
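(For context on the non-overlay option, background not from the log: flannel selects its backend via the JSON network config it reads from etcd; "vxlan" and "udp" are overlay backends, while "host-gw" installs plain routes between nodes and avoids encapsulation. A minimal sketch of the two variants, assuming flannel's conventional etcd config key:)

```python
import json

# Overlay-style flannel config: traffic is VXLAN-encapsulated between nodes.
overlay_cfg = {
    "Network": "10.100.0.0/16",
    "Backend": {"Type": "vxlan"},
}

# Non-overlay variant: host-gw programs direct routes between nodes, which
# requires L2 adjacency but skips the encapsulation overhead.
non_overlay_cfg = {
    "Network": "10.100.0.0/16",
    "Backend": {"Type": "host-gw"},
}

# flannel reads this JSON from etcd (conventionally /coreos.com/network/config);
# how a deployment writes it there is template-specific.
print(json.dumps(non_overlay_cfg))
```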
16:10:57 <adrian_otto> thanks daneyon
16:11:08 <daneyon> wanghua is working on containerizing k8s services, including etcd and flannel
16:11:09 <daneyon> #link https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
16:11:16 <daneyon> that's about it
16:11:20 <daneyon> any questions?
16:11:26 <adrian_otto> cool
16:11:45 <wanghua> egor doesn't support running flannel in a container
16:11:55 <adrian_otto> ok, we appreciate the update daneyon
16:12:00 <daneyon> :-)
16:12:23 <adrian_otto> wanghua: if we only use the chroot (mount) namespace, that would still be better than nothing.
16:12:23 <daneyon> wanghua let's continue that discussion on the ML
16:12:39 <daneyon> i think there are pros and cons to both approaches
16:12:57 <wanghua> OK
16:12:59 <adrian_otto> I am confident that we can make it work, even if it requires some compromises
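(The trade-off being weighed above: flanneld configures host networking, so containerizing it means handing the container host-level namespaces and privileges. A rough sketch of what that could look like; the image tag, paths, and endpoint are illustrative assumptions, not the blueprint's actual design:)

```python
import subprocess

# Hypothetical sketch: run flanneld in a container that shares the host
# network namespace so it can still create interfaces and program routes.
cmd = [
    "docker", "run", "-d",
    "--net=host",                        # flanneld must see host networking
    "--privileged",                      # needed to modify routes/interfaces
    "-v", "/run/flannel:/run/flannel",   # subnet.env consumed on the host
    "quay.io/coreos/flannel:latest",     # illustrative image tag
    "--etcd-endpoints=http://127.0.0.1:2379",
]
subprocess.check_call(cmd)
```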
16:13:11 <adrian_otto> #topic Magnum UI Subteam Update (bradjones)
16:13:19 <adrian_otto> brad sent his regrets, and will attend next week
16:13:23 <adrian_otto> nothing to share this time
16:13:33 <adrian_otto> #topic Blueprint/Bug Review
16:14:16 <adrian_otto> next week I will begin the "Essential Blueprint Updates" agenda item, but I will skip it this week, since the owners have not been given sufficient time to prepare an update.
16:14:43 <adrian_otto> if you own a blueprint marked Essential, please arrange to have a brief update to share with the team each week here in our team meetings about that work
16:14:52 <adrian_otto> Blueprint Discussion
16:15:01 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support Support baremetal container clusters (adrian_otto, kennan)
16:15:15 <adrian_otto> ok, so this one I really struggled with, Kennan
16:15:28 <adrian_otto> I do see baremetal support for Magnum as a top priority, but...
16:15:43 <Kennan> adrian_otto: it was planned before the summit. Right now I don't intend to make it for the M release
16:15:54 <Kennan> as the ironic-neutron integration has gaps
16:15:56 <adrian_otto> pursuant to our discussions with Ironic devs in Tokyo, I'm reluctant to put baremetal workarounds directly into Magnum when they belong in Ironic.
16:16:12 <adrian_otto> ok, so are you okay with the way it is currently marked?
16:16:19 <adrian_otto> should I mark it Obsolete?
16:16:34 <Kennan> yes, I think it can be in the next release.
16:16:39 <adrian_otto> our current plan of record is to await the expected solution of this in Ironic
16:17:01 <adrian_otto> ok, then I will leave it the way it is, and we can re-prioritize it next time
16:17:13 <Kennan> sure
16:17:27 <adrian_otto> that was the only blueprint in the request list for Mitaka that was not included. All the others were.
16:17:47 <adrian_otto> I expect that we will need to prune the Mitaka target list more as we go
16:18:13 <adrian_otto> just because I targeted a blueprint for the release does not mean you are committed to complete it… unless it's an Essential or High priority.
16:18:20 <adrian_otto> if that's the case, let's talk about them
16:18:31 <adrian_otto> here is one:
16:18:32 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/versioning-rpc-server Versioning rpc server and client (eliqiao)
16:18:44 <adrian_otto> eliqiao: you requested discussion on this
16:19:23 <adrian_otto> this is one that I set to "Not" priority pending team discussion
16:19:59 <adrian_otto> keep in mind that if I mark a blueprint "Not", it does not mean I am rejecting it. For that action I use the "Obsolete" design action.
16:20:32 <adrian_otto> "Not" priority means I have reviewed it, and need more input before giving it a different priority, or taking another administrative action. Basically I am treating this as "Pending"
16:21:01 <adrian_otto> eliqiao: yt?
16:21:23 <adrian_otto> we can come back to this one later.
16:21:28 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest Magnum APIs Test coverage in Tempest Lib  (dimtruck)
16:21:41 <adrian_otto> Implementation: #link https://review.openstack.org/247083 Tempest plugin work (dimtruck)
16:21:53 <adrian_otto> dimtruck: you requested discussion on this one
16:22:15 <dimtruck> sure - it was more of an approach question - I think we covered it last week?
16:22:24 <adrian_otto> ok, moving to the next
16:22:28 <dimtruck> but i'm planning on rebasing it and pushing new changes today
16:22:29 <dimtruck> thanks
16:22:46 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/support-for-different-docker-storage-driver Docker storage drivers (wanghua)
16:23:21 <adrian_otto> wanghua: you requested discussion on this one?
16:23:27 <wanghua> yes
16:23:59 <wanghua> I registered this bp an hour ago and I think it is necessary
16:24:16 <adrian_otto> today alternate storage drivers can be selected by building an alternate glance image for your bay node with the modified config
16:24:45 <adrian_otto> is there a reason this needs to be part of Magnum?
16:25:09 <adrian_otto> might it be acceptable to document the process of how to modify the image?
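(To make the suggestion concrete — a sketch under assumptions, not Magnum's documented procedure: the operator builds a bay image with the desired docker configuration baked in, uploads it to Glance, and points a baymodel at it. The image name and file below are illustrative; the os_distro property is how Magnum matches an image to a bay driver:)

```python
import subprocess

# Upload a custom-built bay image to Glance (glance v1-style CLI).
subprocess.check_call([
    "glance", "image-create",
    "--name", "fedora-atomic-devicemapper",   # illustrative name
    "--disk-format", "qcow2",
    "--container-format", "bare",
    "--property", "os_distro=fedora-atomic",  # lets Magnum pick the driver
    "--file", "fedora-atomic-custom.qcow2",   # illustrative file
])
```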
16:25:41 <wanghua> I think it's not only about modifying the image
16:25:53 <eghobo> adrian_otto: it's config parameter for daemon
16:26:32 <adrian_otto> eghobo: yes, that's why I'm struggling to understand why Magnum has a role here
16:27:11 <eghobo> wanghua: honestly there are not many choices yet ):, e.g. DeviceMapper is the only option for RHEL-based distros
16:27:35 <eghobo> wanghua: could you elaborate, what is your end goal?
16:27:45 <Tango> Maybe there is a case where building the custom image is not sufficient?
16:27:47 <adrian_otto> my current position is that our various COE's and constituent components (such as Docker) have a wide range of configuration options, and it would be impractical to implement setting all of them from Magnum.
16:27:48 <Kennan> seems aufs is used a lot on the docker side
16:28:15 <adrian_otto> so we should carefully consider which ones we add support for, and why.
16:28:25 <Kennan> it seems we can support the most popularly used driver, and not add so many drivers now
16:28:41 <Kennan> I think magnum needs stability and scale, not so many drivers included
16:28:44 <adrian_otto> Kennan: that sounds sensible to me.
16:29:27 <adrian_otto> wanghua: can you explain a bit more about why you feel this is an important addition?
16:29:50 <eghobo> Kennan: the most suitable driver is usually part of the OS package ;)
16:29:55 <adrian_otto> I want to be sure we fully consider that point of view before making any decision
16:30:15 <wanghua> storage and network are important to docker
16:30:51 <wanghua> we have different network drivers, so why don't we have storage drivers?
16:30:52 <Kennan> yes, as we have one image now, like fedora, eghobo: or if we add ubuntu, we'd better have a standard implementation
16:31:27 <Kennan> and further needed drivers usually don't need to be implemented now. Like neutron and ironic, they have separate drivers maintained for later needs
16:31:36 <eghobo> Kennan: sorry, we cannot; aufs doesn't work on RHEL ):
16:32:04 <Kennan> yes, I just used aufs as an example, not restricting it to RHEL
16:32:31 <Kennan> my point is we have only one image now, why do we need so many drivers now?
16:32:55 <eghobo> Kennan: overlayfs may be the common ground, but it has a long way to go
16:33:13 <hongbin> I lean toward having one storage driver now, unless someone who is using magnum asks for another
16:33:27 <Kennan> +1 hongbin
16:33:34 <Tango> +1
16:33:41 <houming> +1
16:33:55 <adrian_otto> ok, so I think this idea has some merit, and we should revisit it at a later time
16:33:59 <hongbin> wanghua: additional drivers increase the maintenance cost
16:34:03 <eghobo> we can add a daemon option; it doesn't sound like a big task
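(For readers following along: the "daemon option" here is docker's --storage-driver flag. On the Fedora-style images Magnum uses it is typically injected through a sysconfig file rather than passed by hand. A minimal sketch assuming the Fedora packaging convention — the file path and variable name are not Magnum's actual implementation:)

```python
# Select the docker graph storage driver via the Fedora/RHEL sysconfig file.
driver = "devicemapper"  # e.g. the RHEL-family default eghobo mentions

line = 'DOCKER_STORAGE_OPTIONS="--storage-driver {}"\n'.format(driver)
with open("/etc/sysconfig/docker-storage", "w") as f:
    f.write(line)
```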
16:34:15 <adrian_otto> if users start asking for this, that will elevate our relative priority for this
16:34:41 <wanghua> ok, we can revisit it at a later time
16:34:58 <adrian_otto> wanghua: I suggest adding an outline to explain the planned implementation, plus a level-of-effort estimate, and placing that in the description.
16:35:21 <wanghua> ok
16:35:30 <adrian_otto> and we can come back to this when we have both 1) A user seeking this, and 2) A developer willing to add it
16:35:34 <adrian_otto> fair enough?
16:35:35 <juggler> +1 hongbin
16:35:53 <Kennan> sounds ok adrian_otto
16:36:14 <wanghua> adrian_otto, do you know of any company that is using magnum now?
16:36:51 <adrian_otto> there are several who are in pre-release
16:37:09 <adrian_otto> but I have some information that's restricted by NDA
16:37:23 <adrian_otto> so I prefer that magnum users identify themselves rather than me
16:37:34 <eliqiao> adrian_otto: and what is the primary COE for them?
16:37:56 <adrian_otto> I can tell you that Rackspace Carina has chosen Docker Swarm as its first COE
16:38:13 <adrian_otto> Carina by Rackspace is the correct product name, sorry.
16:38:25 <eliqiao> oh, cool.
16:38:37 <adrian_otto> Kubernetes seems to be the more popular one under review.
16:38:55 <adrian_otto> with Mesos as a close third to Docker Swarm
16:39:06 <houming> From google:  eBay plans to make use of the Magnum plug-in for OpenStack  :)
16:39:24 <adrian_otto> the ones who like Mesos tend to be the ones with large scale data analysis workloads
16:39:34 <Kennan> houming: sounds ok. any link ?
16:40:14 <eghobo> houming: I don't think this information is correct
16:40:21 <adrian_otto> Subtopic: Review Discussion
16:40:27 <adrian_otto> #link https://review.openstack.org/250999 Add docker-bootstrap (wanghua)
16:40:51 <adrian_otto> hongbin flagged this one as workflow-1
16:41:07 <eliqiao> hi adrian_otto, sorry to interrupt, but are we done with Versioning rpc server?
16:41:32 <hongbin> Yes, I want to make sure this is the right direction before working on the implementation
16:41:44 <adrian_otto> ok, did I skip that?
16:41:54 <adrian_otto> ok, let's jump back to that next.
16:42:06 <adrian_otto> eliqiao: I will revisit that
16:42:23 <eliqiao> adrian_otto: okay, thx.
16:42:38 <adrian_otto> hongbin: anything we should discuss now regarding review 250999?
16:42:53 <Kennan> I agree with eghobo: better not to containerize it and make things complicated
16:43:03 <adrian_otto> wanghua: did you request discussion on this one?
16:43:07 <hongbin> Yes, it is related to containerizing flannel, and daneyon is not here
16:43:21 <hongbin> wanghua: eghobo: could we discuss it on the ML?
16:43:24 <Kennan> although I haven't dug in much, I think some things don't need to be containerized
16:43:24 <adrian_otto> ok, we can include it for next week
16:43:29 <adrian_otto> I will take an action for that
16:43:42 <hongbin> sure
16:44:07 <adrian_otto> #action adrian_otto to raise "#link https://review.openstack.org/250999 Add docker-bootstrap (wanghua)" for discussion in next week's agenda
16:44:23 <adrian_otto> or you can discuss it in #openstack-containers or on the ML in the meantime to work it out
16:44:43 <wanghua> I will start an ML thread later
16:44:43 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/versioning-rpc-server Versioning rpc server and client (eliqiao)
16:45:05 <adrian_otto> thanks wanghua
16:45:12 <eliqiao> I am not sure if it is too early to push versioning to rpc server.
16:45:17 <eghobo> wanghua: sure I already put all my thoughts in email, you can agree with it or not ;)
16:46:04 <adrian_otto> eliqiao: I see you submitted a review on this:
16:46:14 <adrian_otto> #link https://review.openstack.org/247308 Add conductor manager to handle all RPC handler
16:46:26 <eghobo> I still believe it's better to put effort into Ubuntu and CoreOS ;)
16:46:55 <adrian_otto> ok, the blueprint is currently in "Not" status
16:47:12 <adrian_otto> I'm willing to leave it that way if we agree there are more important things to attend to first
16:47:15 <eliqiao> adrian_otto: can I know something about why?
16:47:35 <eliqiao> adrian_otto: okay, it seems we'd focus on feature enablement for containers.
16:47:49 <adrian_otto> eliqiao: because the review did not have any +1 votes, and you requested discussion on it. I did not want to prioritize it without your input first.
16:48:20 <eliqiao> adrian_otto: hmm. actually I'd like to make some rpc calls async, so the behavior may change for some of the rpc api calls.
16:48:46 <adrian_otto> I think async conductor work for scalability should happen too, but I'm not sure which of these should actually come first.
16:49:05 <adrian_otto> as having the versioning in there would allow us to migrate to the next implementation, right?
16:49:54 <hongbin> I think it is better to review the async proposal first
16:50:04 <adrian_otto> so I could set https://blueprints.launchpad.net/magnum/+spec/async-rpc-api to depend on https://blueprints.launchpad.net/magnum/+spec/versioning-rpc-server
16:50:18 <eliqiao> adrian_otto: if we don't have versioning, and we change the rpc behavior, then how do we identify which method is called?
16:50:23 <adrian_otto> and match the priority
16:50:38 <adrian_otto> eliqiao: yes, that's a legitimate concern
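(Background on the versioning mechanism under discussion — an oslo.messaging-style sketch; the topic and method names are illustrative, not Magnum's actual API. The server advertises the version of its RPC interface through the Target, and callers pin the version they were written against, so a behavior change like sync-to-async becomes detectable instead of silent:)

```python
import oslo_messaging as messaging
from oslo_config import cfg

transport = messaging.get_transport(cfg.CONF)
# The server declares its interface version on the target.
target = messaging.Target(topic='magnum-conductor', version='1.1')
client = messaging.RPCClient(transport, target)

def create_bay(context, bay):
    # prepare() pins this call to the 1.1 interface; an older server that
    # only speaks 1.0 rejects it rather than misinterpreting the call.
    cctxt = client.prepare(version='1.1')
    return cctxt.call(context, 'bay_create', bay=bay)
```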
16:50:46 <Tango> Do we plan to keep both async and non-async RPC?
16:51:05 <adrian_otto> Tango: no, we want to replace sync with async
16:51:24 <Kennan> if async is better, why do we need both? I have the same question as Tango
16:51:29 <adrian_otto> because you can simulate sync behavior with an async client that polls and blocks
16:51:36 <adrian_otto> but you can't do the reverse
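(A sketch of that point, with hypothetical client methods and status names rather than Magnum's API: a blocking, synchronous-looking call can be layered over an async API by polling until the resource reaches a terminal state, while the reverse — deriving async behavior from a blocking call — is not possible:)

```python
import time

def create_bay_and_wait(client, context, bay, timeout=600, interval=5):
    """Simulate a sync create on top of an async API by polling."""
    client.create_bay_async(context, bay)        # returns immediately
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = client.get_bay(context, bay.uuid).status
        if status in ('CREATE_COMPLETE', 'CREATE_FAILED'):
            return status                        # terminal state reached
        time.sleep(interval)                     # block: sync semantics
    raise TimeoutError('bay create did not finish in time')
```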
16:52:03 <Tango> So for now, is it OK to just move to async, despite the change in behavior?
16:52:23 <Tango> a little disruptive, but we don't want non-async anyway
16:52:33 <adrian_otto> my gut says yes, but I'm open to alternate viewpoints
16:53:02 <adrian_otto> the only thing that uses our RPC API is Magnum itself
16:53:13 <adrian_otto> so I don't think we risk breaking third party software
16:53:20 <adrian_otto> is that true?
16:53:28 <Kennan> if it doesn't break functionality too often, is that the concern, eliqiao? You want two versions to make sure we can easily migrate to async?
16:53:31 <hongbin> adrian_otto: yes, I think so
16:53:39 <adrian_otto> ok, I'm going to leave this alone
16:53:55 <adrian_otto> we have a couple of quick things to touch on before open discussion
16:53:58 <eliqiao> what if you have 2 conductors at the same time? I am considering the upgrade process.
16:54:21 <adrian_otto> eliqiao: good point
16:54:31 <adrian_otto> depends on LOE
16:54:35 <adrian_otto> #link https://review.openstack.org/225400 Add registry_address to bay db and api (wanghua)
16:54:41 <adrian_otto> let's wrap up this one
16:55:24 <adrian_otto> wanghua: any remarks on this one?
16:55:27 <wanghua> Do we need to tell the user the url of the docker registry,
16:55:44 <wanghua> or hardcode it to localhost:5000?
16:55:53 <Kennan> wanghua: is that for the private registry case?
16:56:05 <wanghua> Kennan: yes
16:56:14 <eliqiao> wanghua: if you have a registry inside of the k8s cluster, then where does the image come from?
16:56:47 <wanghua> eliqiao: now you should build docker images in the bay and upload them to the docker registry.
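(The workflow wanghua describes, sketched with the docker CLI; the image name is illustrative, and the registry address is precisely the hardcoded-vs-exposed value under debate:)

```python
import subprocess

registry = "localhost:5000"   # the address review 225400 would expose
image = "myapp:latest"        # illustrative image name

# Build inside the bay, tag against the bay-local registry, then push.
subprocess.check_call(["docker", "build", "-t", image, "."])
subprocess.check_call(["docker", "tag", image, registry + "/" + image])
subprocess.check_call(["docker", "push", registry + "/" + image])
```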
16:57:24 <adrian_otto> sorry to squeeze this one on time
16:57:26 <eliqiao> wanghua: hmm... if users want to add new images to the registry, how will they do that if you don't expose the url of the registry?
16:57:33 <adrian_otto> #topic Open Discussion
16:57:45 <Kennan> seems we can discuss this on the ML or in the containers IRC channel
16:57:53 <adrian_otto> you're welcome to continue the 225400 discussion if desired
16:58:09 <adrian_otto> just have a couple of minutes remaining for miscellaneous items
16:58:23 <eliqiao> wanghua: any concern about not exposing the registry url?
16:58:41 <wanghua> I think it is necessary
16:59:33 <adrian_otto> our next meeting will be in #openstack-meeting-alt on Tuesday 2015-12-08 at 1600 UTC. See you all then!
16:59:41 <adrian_otto> thanks for attending today!
16:59:49 <Kennan> bye
16:59:49 <adrian_otto> #endmeeting