16:03:05 <adrian_otto> #startmeeting containers
16:03:06 <openstack> Meeting started Tue Dec 15 16:03:05 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:10 <openstack> The meeting name has been set to 'containers'
16:03:10 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-12-15_1600_UTC Our Agenda
16:03:18 <adrian_otto> #topic Roll Call
16:03:21 <adrian_otto> Adrian Otto
16:03:24 <madhuri> o/
16:03:30 <hongbin> o/
16:03:31 <houming> o/
16:03:31 <Kennan> o/
16:03:36 <dane> o/
16:03:37 <rpothier> o/
16:03:44 <juggler> o/
16:03:52 <bradjones> o/
16:03:54 <juggler> Perry Rivera
16:03:56 <vilobhmm11> o/
16:04:01 <eghobo> o/
16:04:14 <tcammann> good evening
16:04:21 <adrian_otto> helo madhuri, hongbin, houming, Kennan, dane, rpothier, juggler, bradjones, vilobhmm11, eghobo, tcammann
16:04:22 <daneyon> o/
16:04:33 <adrian_otto> I am speaking SMTP protocol apparently
16:04:38 <adrian_otto> hello!!
16:04:40 <juggler> good localtime adrian!
16:04:53 <adrian_otto> lol
16:05:08 <adrian_otto> ok, let's begin
16:05:14 <adrian_otto> #topic Announcements
16:05:21 <adrian_otto> 1) Changing our release type
16:06:07 <adrian_otto> we currently use the release:independent release model
16:06:24 <adrian_otto> I plan to transition us to the release:cycle-with-intermediary model instead
16:06:42 <adrian_otto> the key difference is that rather than just cutting releases whenever...
16:07:06 <adrian_otto> we still do that, but the OpenStack release team also picks one of those releases to include with the named releases
16:07:28 <adrian_otto> so the one we release closest to the milestone events gets included with OpenStack
16:07:59 <adrian_otto> I plan to submit a change proposal to the projects.yaml file in the openstack/governance repository to indicate this change
16:08:13 <adrian_otto> any thoughts on this, or input I should consider prior to moving forward with this?
16:08:48 <tcammann> lgtm
16:09:03 <adrian_otto> ok, If you think of anything of concern, let me know before Jan 21
16:09:22 <Kennan> adrian_otto: what's the impact to developers?
16:09:26 <adrian_otto> that's the drop dead date for reverting this for Mitaka if I move forward and we change our minds
16:09:48 <adrian_otto> Kennan: I am not aware of any impact. This is an administrative change.
16:10:01 <Kennan> ok
16:10:36 <adrian_otto> 2) December meetings
16:10:51 <adrian_otto> this is the last team meeting I had scheduled for December
16:11:10 <adrian_otto> I expect attendance to be rather sparse for the next two weeks
16:11:11 <daneyon> yay to the holiday break
16:11:11 <vilobhmm11> adrian_otto : what advantage does this bring over the current model we follow? Or is it a mandatory change for every project?
16:11:46 <adrian_otto> vilobhmm11: this adjustment is optional, but it's a way to include Magnum in the OpenStack milestone releases more officially
16:11:58 <adrian_otto> and to maintain the stable branches in lock step with other projects
16:11:58 <vilobhmm11> adrian_otto : ok
16:12:42 <adrian_otto> so if we did hold more team meetings in December they would fall on Dec 22 and Dec 29
16:12:56 <adrian_otto> I suggest we resume on Jan 5
16:13:04 <adrian_otto> what do you all think?
16:13:07 <daneyon> +1 on 1/5
16:13:16 <hongbin> +1
16:13:20 <Kennan> wish you guys good holiday
16:13:23 <vilobhmm11> +1
16:13:23 <bradjones> +1
16:13:23 <Kennan> +1
16:13:27 <juggler> +1
16:13:37 <madhuri_> +1
16:13:41 <adrian_otto> #agreed our next team meeting will be 2016-01-05 at 1600 UTC
16:13:56 <adrian_otto> any more announcements from team members?
16:14:56 <adrian_otto> #topic Review Action Items
16:15:00 <adrian_otto> (none)
16:15:17 <adrian_otto> #topic Container Networking Subteam Update (daneyon)
16:15:27 <daneyon> here is the log from last week's meeting
16:15:30 <daneyon> #link http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-12-10-18.01.html
16:15:55 <daneyon> quite a bit of discussion related to the image building spec
16:15:57 <daneyon> #link https://blueprints.launchpad.net/magnum/+spec/fedora-atomic-image-build
16:16:15 <daneyon> if yolanda is available, i would like to talk with her after the meeting
16:16:56 <daneyon> we also had good discussion related to the containerize kube services spec
16:17:27 <daneyon> As an action, I talked with the kube community to get a better understanding about reasons to containerize flannel and etcd
16:17:54 <daneyon> My position is that we do not containerize etcd/flannel b/c of the 2 docker daemon requirement
16:18:20 <daneyon> I believe the complexity/operational overhead outweighs the benefits of containerizing these 2 services
16:18:23 <adrian_otto> you don't actually need 2 deamons
16:18:31 <daneyon> ?
16:18:37 <adrian_otto> you can expose the docker sock file
16:19:08 <daneyon> has anyone tested that yet?
16:19:10 <adrian_otto> so you allow the utility containers access to the docker daemon on the host that way
16:19:18 <adrian_otto> yes, we do that. It works.
16:19:28 <hongbin> Interesting
16:19:55 <eghobo> adrian_otto: no you need 2 daemons, somebody need to run flannel container
16:19:57 <adrian_otto> you just need to set your expectations that when you do that, your container is not a security isolation facility at all
16:19:59 <Kennan> adrian_otto: do you mean share host mnt namespace ? or something else ?
16:20:24 <adrian_otto> but you are using the container image as a mechanism for more convenient maintenance and customization of that utility environment
16:20:46 <adrian_otto> use -v to share the socket file.
16:21:01 <adrian_otto> it's actually deceptively simple.
16:21:12 <Kennan> seems kolla has some experience with this, libvirt-related, sharing the host (if I remember correctly)
16:21:25 <Kennan> maybe we can find some helpful info from that
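The socket-sharing approach adrian_otto describes can be sketched as a systemd unit. Everything here is illustrative, not the actual Magnum templates: the unit name, paths, and the choice of `quay.io/coreos/flannel` as the image are assumptions.

```ini
# Hypothetical /etc/systemd/system/flannel-container.service: the service runs
# as a container on the host's single Docker daemon, and the bind-mounted
# socket lets tooling inside the container drive that same daemon, so no
# second daemon is needed.
[Unit]
Description=Containerized flannel (sketch)
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --rm --name flannel --net=host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  quay.io/coreos/flannel
ExecStop=/usr/bin/docker stop flannel

[Install]
WantedBy=multi-user.target
```

As adrian_otto cautions above, a container holding the daemon's socket is not a security boundary at all; the benefit is purely easier packaging and maintenance of the utility environment.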
16:21:43 <daneyon> adrian_otto I am open to containerizing etcd/flannel if it does not overcomplicate things
16:21:53 <daneyon> Seems like your idea is worth exploring
16:22:00 <adrian_otto> sdake offered to assist with expertise from his team to help with this, if desired
16:22:30 <adrian_otto> I don't want a setup that's overly complex either
16:23:00 <daneyon> adrian_otto could you add feedback to my message on the ML or on the review?
16:23:06 <adrian_otto> but it's worth some exploration to see if we can strike a balance there, as most who are adopting Magnum are struggling with bay node image customization
16:24:20 <suro-patz> adrian_otto: could you elaborate on 'the struggle with bay node image customization'?
16:24:32 <adrian_otto> they don't know how to build the images
16:24:35 <daneyon> Until we can get these services containerized and the DIB images implemented, we need to update the Atomic image
16:24:45 <daneyon> A. To address a vxlan bug in the current code
16:24:45 <adrian_otto> it's not clear who they belong to
16:25:00 <daneyon> B. If we go to F21, the image is ~ 25% smaller.
16:25:07 <adrian_otto> those cloud operators are not familiar with the tools we use
16:25:43 <adrian_otto> so if we could use docker images to allow changing those components, it may reduce the friction there.
16:26:44 <Kennan> adrian_otto: docker images seem to still need some customization (heat needs to do that), right?
16:27:01 <adrian_otto> there has also been repeated requests for bay nodes based on an ubuntu or debian derivative
16:27:19 <adrian_otto> Kennan: yes
16:27:23 <daneyon> does anyone have a better understanding why a cloud operator wants to create an image instead of using our atomic-5 image?
16:28:07 <adrian_otto> daenyon: one reason is they want to use Magnum on bare metal nodes
16:28:13 <Kennan> seems Tango is working on building a new image; he is working on docs if the process is OK
16:28:33 <adrian_otto> which requires fooling with the images depending on how they implement bare metal servers (with ironic or otherwise)
16:29:04 <hongbin> Yes, also they might want to ship a bay with customization
16:29:19 <daneyon> adrian_otto I thought we were waiting for the Ironic team to add neutron ports so we can start supporting bare metal nodes.
16:29:29 <adrian_otto> others have an apparently religious opposition to any component that's a RedHat distribution of Linux (rational or not, these preferences are strong)
16:29:58 <daneyon> I am concerned about adding Ubuntu/Debian to the mix
16:30:12 <adrian_otto> daneyon: we are, but in the mean time, downstream consumers want to make that work
16:30:31 <Kennan> adrian_otto: building images may be OK if our guide or tools are applicable and easy, I think; in some clouds, administrators build images themselves
16:31:02 <adrian_otto> sometimes cloud operators want to bake in things like pre-configured telemetry and fault monitoring
16:31:21 <daneyon> ok, that makes sense.
16:32:24 <adrian_otto> alright let's wrap the network subteam update
16:32:45 <adrian_otto> any more to add, daneyon?
16:33:13 <adrian_otto> #topic Magnum UI Subteam Update (bradjones)
16:33:23 <bradjones> hey
16:33:40 <bradjones> quite a few reviews out that could do with a look over
16:33:49 <daneyon> nope
16:33:50 <bradjones> I still have a couple of older ones to look at this evening
16:33:58 <bradjones> there is one discussion
16:34:16 <bradjones> #link https://review.openstack.org/#/c/256358/
16:34:42 <bradjones> about moving the dashboard to a panel group under the project dashboard in horizon
16:35:04 <bradjones> it makes no functional difference; it will just be a different place for users to go to
16:35:24 <bradjones> so if anyone has any strong objection to that then leave a comment on that review
16:35:34 <adrian_otto> thanks Brad
16:35:47 <adrian_otto> any more remarks on Magnum UI?
16:35:55 <bradjones> not from me
16:36:09 <adrian_otto> Essential Blueprint Updates
16:36:14 <adrian_otto> ok, we have 4 to cover
16:36:29 <adrian_otto> I am drawing these from here:
16:36:38 <adrian_otto> #link tps://blueprints.launchpad.net/magnum/mitaka Mitaka Magnum Blueprints
16:36:45 <adrian_otto> corrections
16:36:58 <adrian_otto> #undo
16:36:59 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0xaa31950>
16:37:09 <adrian_otto> irc://chat.freenode.net:6667/#link https://blueprints.http://launchpad.net/magnum/mitaka Mitaka Magnum Blueprints
16:37:16 <adrian_otto> sigh
16:37:26 <adrian_otto> #link https://blueprints.launchpad.net/magnum/mitaka Mitaka Magnum Blueprints
16:37:30 <adrian_otto> ok, here we go.
16:37:42 <juggler> thanks adrian!
16:37:45 <adrian_otto> #Link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest (dimtruck)
16:37:56 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest (dimtruck)
16:38:20 <adrian_otto> dimtruck: you here?
16:38:23 <dimtruck> hi :).  i am
16:38:30 <adrian_otto> any update on this?
16:38:36 <dimtruck> so the update is that the plugin patch is passing
16:39:03 <adrian_otto> great!
16:39:08 <dimtruck> and thanks to hongbin i have a much better implementation.  I'm changing the description of the patch and it'll be ready to review/merge
16:39:13 <dimtruck> (taking it out of WIP)
16:39:26 <adrian_otto> thanks for your continued work on this.
16:39:34 <dimtruck> there are also 25 tests in patch review for magnum bay CRUDs
16:39:50 <dimtruck> and i also have CA CRUDs ready to go once those are merged :)
16:40:08 <dimtruck> once jenkins is happy :)
16:40:11 <adrian_otto> Thanks dimtruck. Next we have a celebration of an implemented feature:
16:40:20 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/swarm-functional-testing (eliqiao)
16:40:32 <adrian_otto> complete!
16:40:36 * adrian_otto applause
16:40:58 <adrian_otto> Next we have a started one:
16:41:06 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/resource-quota (vilobhmm11)
16:41:12 <adrian_otto> vilobhmm11: any update on this?
16:41:29 <vilobhmm11> adrian_otto : proposed design on ML http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html
16:41:52 <vilobhmm11> would request everyone to have a look and let me know if that sounds fair
16:42:04 <vilobhmm11> initially the plan is to impose a quota on bay creation
16:42:18 <vilobhmm11> by restricting users of a project to a certain number of bays
16:42:25 <vilobhmm11> and then proceed with other resources
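The scheme vilobhmm11 outlines (a per-project hard limit on bay creation) can be sketched in a few lines. Everything here is illustrative and not Magnum code: the class names, the default limit, and the exception type are all assumptions for the example.

```python
# Illustrative sketch (NOT Magnum code) of a per-project bay quota check:
# each project gets a hard limit on bays, and creation fails once it is hit.

DEFAULT_BAY_QUOTA = 20  # hypothetical default hard limit per project


class QuotaExceeded(Exception):
    pass


class BayQuota:
    def __init__(self, limits=None):
        self.limits = limits or {}  # project_id -> configured hard limit
        self.bays = {}              # project_id -> current bay count

    def limit_for(self, project_id):
        return self.limits.get(project_id, DEFAULT_BAY_QUOTA)

    def create_bay(self, project_id):
        used = self.bays.get(project_id, 0)
        if used >= self.limit_for(project_id):
            raise QuotaExceeded(
                "project %s already has %d bays" % (project_id, used))
        self.bays[project_id] = used + 1


quota = BayQuota(limits={"proj-a": 2})
quota.create_bay("proj-a")
quota.create_bay("proj-a")
try:
    quota.create_bay("proj-a")  # third bay exceeds the limit of 2
except QuotaExceeded:
    print("quota enforced")
```

Extending the same check to other resources (nodes, pods, and so on) would just mean tracking more counters per project, which matches the "then proceed with other resources" plan above.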
16:42:43 <adrian_otto> thanks vilobhmm11
16:43:06 <adrian_otto> I am marking Design as "Discussion" status
16:43:07 <vilobhmm11> adrian_otto : that's it from my side…will continue the discussion on ML
16:43:19 <vilobhmm11> adrian_otto : sure thanks!
16:43:28 <adrian_otto> done
16:43:42 <adrian_otto> and last we have:
16:43:52 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/split-gate-functional-dsvm-magnum (eliqiao)
16:43:59 <adrian_otto> eliqiao: any update?
16:44:12 <adrian_otto> I know we have been struggling with OOM errors
16:44:39 <hongbin> I can give an update for the memory issue
16:44:50 <adrian_otto> hongbin, one sec
16:45:06 <adrian_otto> #topic Open Discussion
16:45:16 <adrian_otto> Hongbin raise gate error http://logs.openstack.org/87/253987/1/check/gate-functional-dsvm-magnum-k8s/9eb5206/logs/screen-n-cpu.txt libvirtError: internal error: process exited while connecting to monitor: Cannot set up guest memory 'pc.ram': Cannot allocate memory
16:45:23 <adrian_otto> ok, hongbin, proceed!
16:45:35 <hongbin> I have talked to the infra team
16:45:55 <hongbin> They will investigate why some nodes have less memory than others
16:46:06 <tcammann> I added an item to the agenda, probably need a refresh :)
16:46:32 <adrian_otto> tcammann: , you are welcome to raise it after the next
16:46:37 <hongbin> At the same time, we could investigate how to reduce memory consumption for our gate job
16:46:40 <adrian_otto> hongbin: thanks for driving that one
16:46:44 <adrian_otto> #link https://bugs.launchpad.net/magnum/+bug/1521237 Gate occasionally failed with nova libvirt error
16:46:44 <openstack> Launchpad bug 1521237 in Magnum "Gate occasionally failed with nova libvirt error" [Critical,New]
16:47:22 <hongbin> That is it from my side
16:47:28 <adrian_otto> hongbin reported this one, but the work is unassigned
16:47:42 <hongbin> I can assign it myself
16:47:46 <adrian_otto> hongbin: did you want to cover this one?
16:48:02 <adrian_otto> this is still the same topic
16:48:08 <adrian_otto> ok, nm. Thanks!
16:48:31 <adrian_otto> wanghua: you here?
16:48:46 <adrian_otto> I have a note from you on the agenda:
16:48:48 <adrian_otto> wanghua raise "How to improve docker registry in magnum?"
16:49:11 <adrian_otto> if not, tcammann it's all yours then
16:49:19 <hongbin> I can raise this issue on his behalf
16:49:28 <adrian_otto> ok hongbin
16:49:43 <hongbin> Basically, right now the docker registry listens on localhost
16:49:45 <tcammann> Midcycle?!
16:50:04 <hongbin> It is easy to pull docker images from private registry
16:50:10 <adrian_otto> yes, Jan-Feb is the timeframe for that
16:50:11 <hongbin> But it is hard to push
16:50:45 <hongbin> Because users need to ssh into the node and do the push, which is inconvenient
16:50:46 <adrian_otto> I'm seeking any sponsors who would like to host our midcycle
16:51:02 <adrian_otto> hongbin: acknowledged
16:51:30 <tcammann> I could probably get HP to host it in Sunnyvale, or Bristol (UK)
16:51:36 <adrian_otto> hongbin: I view this as a bug in Swarm
16:51:38 <tcammann> s/HP/HPE/
16:51:53 <adrian_otto> but the workaround is you do a push to the swarm endpoint, followed by a pull
16:52:00 <adrian_otto> and then all nodes get it from the registry
16:52:22 <adrian_otto> but that's icky. it should just work when you do a push
16:52:42 <hongbin> Yes it is
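The push-then-pull workaround adrian_otto describes can be sketched as a dry run that just prints the docker commands a user would issue. The registry address and image name are hypothetical; because every node's registry shares the same Swift backend, layers pushed through one node's endpoint become pullable from any other node's localhost registry.

```python
# Dry-run sketch of the push-then-pull workaround (all names hypothetical).
# Each bay node runs its own registry on localhost:5000, backed by the same
# Swift container, so layers pushed via one node are visible to all nodes.
registry = "localhost:5000"   # per-node registry address (illustrative)
image = "myapp:1.0"           # hypothetical image name

commands = [
    "docker tag %s %s/%s" % (image, registry, image),
    "docker push %s/%s" % (registry, image),
    "docker pull %s/%s  # on each consuming bay node" % (registry, image),
]
for cmd in commands:
    print(cmd)
```

The "icky" part is exactly that the push must happen from inside a bay node (hence the ssh complaint above); the shared Swift backend is what makes the subsequent pulls work everywhere.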
16:53:11 <adrian_otto> tcammann: thanks for that.
16:53:17 <adrian_otto> team, how many contributors do we have in Europe that would attend in Bristol?
16:53:32 <adrian_otto> or who would be able to approve travel to attend?
16:54:03 <adrian_otto> hongbin: we have the same issue in k8s and swarm, right?
16:54:36 <adrian_otto> tcammann: if you would check on the Sunnyvale option, we can expect strong attendance there.
16:54:40 <hongbin> adrian_otto: Yes, as long as the registry listens on localhost, we have the issue
16:54:41 <Kennan> hongbin: for the private registry, I don't quite get it: you use docker pull with the private URL, and docker push does not work for that?
16:55:43 <rpothier> I have a comment on heat template refactor
16:56:11 <hongbin> Kennan: because the private URL is localhost
16:56:40 <adrian_otto> rpothier: we are in open discussion, so you may blurt out anything you'd like
16:56:42 <Kennan> why we bind to localhost, is that bp design like that?
16:56:57 <Kennan> seems we need expose url can be accessible
16:57:08 <rpothier> there is a spec in heat to use conditionals, as an alternative to Jinja
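For context, the conditionals approach rpothier mentions would let a single template cover the variants Magnum currently generates with Jinja. A hypothetical HOT fragment in that style (the `conditions` syntax was a Heat spec at the time of this meeting and later landed in the 2016-10-14 template version; the parameter and resource names here are made up):

```yaml
heat_template_version: 2016-10-14

parameters:
  network_driver:
    type: string
    default: flannel

conditions:
  # true only when the bay is created with the flannel network driver
  use_flannel: {equals: [{get_param: network_driver}, flannel]}

resources:
  flannel_config:
    type: OS::Heat::SoftwareConfig
    condition: use_flannel        # resource only created when condition holds
    properties:
      config: "configure flannel here (placeholder)"
```

The attraction over Jinja is that the variant selection happens inside Heat itself, so there is a single template to review rather than a template-generating template.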
16:57:11 <adrian_otto> Kennan: we took that as a best practice
16:57:20 <hongbin> Kennan: It listens on localhost to avoid TLS complexity
16:57:27 <adrian_otto> because the registry is back ended on swift
16:57:33 <adrian_otto> so might as well run it on all bay nodes
16:57:52 <adrian_otto> and then you don't need to worry about public exposure of the private registry host
16:57:53 <Kennan> I remember setting up a private registry before; it seemed OK (as for TLS, I didn't dig into it)
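A sketch of the setup hongbin and adrian_otto describe: a Docker registry v2 config using the Swift storage driver, bound to localhost only. The endpoint, credentials, and container name below are placeholders, not values from Magnum's templates.

```yaml
# Hypothetical registry config.yml: every bay node runs this, all pointing at
# the same Swift container, so the registry content is shared bay-wide.
version: 0.1
storage:
  swift:
    authurl: https://keystone.example.com:5000/v2.0  # placeholder Keystone URL
    username: <service-user>                         # placeholder credentials
    password: <secret>
    container: docker-registry                       # Swift container for layers
http:
  addr: 127.0.0.1:5000  # localhost-only, so no TLS needed on the wire
```

Binding to 127.0.0.1 is what avoids the TLS complexity hongbin mentions, and backing onto Swift is why running a registry instance on every bay node is cheap; the trade-off is exactly the push inconvenience discussed above.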
16:58:44 <adrian_otto> it is time to wrap up our meeting now, with just a minute remaining
16:58:51 <adrian_otto> any final remarks?
16:58:58 <hongbin> adrian_otto: I recalled you want to mid-cycle right after Ironic mid-cycle?
16:59:23 <cooldharma06> Hi all.. I am new to this and I like to contribute to Magnum.. Any suggestions..
16:59:26 <adrian_otto> hongbin: I will cover midcycle planning on our ML
16:59:36 <adrian_otto> #action adrain_otto to plan midcycle by ML
16:59:41 <adrian_otto> #undo
16:59:41 <openstack> Removing item from minutes: <ircmeeting.items.Action object at 0xaa31110>
16:59:45 <Kennan> cooldharma06 join openstack-containers IRC
17:00:01 <adrian_otto> #action adrian_otto to plan midcycle by ML
17:00:06 <adrian_otto> our next team meeting will be 2016-01-05 at 1600 UTC. See you all then. Happy holidays!
17:00:10 <adrian_otto> #endmeeting