16:02:16 #startmeeting containers
16:02:16 Meeting started Tue Jul 21 16:02:16 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:20 The meeting name has been set to 'containers'
16:02:27 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-07-21_1600_UTC Our Agenda
16:02:27 o/
16:02:33 #topic Roll Call
16:02:36 Adrian Otto
16:02:37 \o/
16:02:41 chirag arora
16:02:42 o/
16:02:42 Digambar Patil
16:02:43 o/
16:02:43 Ton Ngo
16:02:44 Marty Falatic
16:02:47 o/
16:02:49 hello
16:02:49 o/
16:02:51 Jacob Frericks
16:03:27 hello
16:03:48 hello mfalatic, tcammann, chirag, dane_leblanc, diga_, suro-patz, Tango, rods, daneyon, hongbin, jjfreric, brendenblanco
16:05:33 ok, let's begin
16:05:39 #topic Announcements
16:06:01 #link https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type Our Mesos bay type is complete!
16:06:07 1) WHOOT, we have Mesos
16:06:18 Feature is complete, with ongoing work for additional enhancements
16:06:40 I'll pause here for comments from the contributors who worked on this.
16:06:47 congrats!!!
16:07:12 good work hongbin
16:07:13 nice one
16:07:14 this is a good example of our community working together to shape the future of OpenStack
16:07:28 :)
16:07:28 yes
16:07:52 We said we would do this at the summit, and now it's done :)
16:08:13 :-)
16:08:19 yes, this is really worth recognition.
16:08:52 ok, this was the only prepared announcement I had on our agenda
16:09:10 but I just thought of another
16:09:18 2) sdake is on vacation this week
16:09:30 adrian_otto: I am targeting to add support for a mesos bay with coreos this week https://blueprints.launchpad.net/magnum/+spec/mesos-bay-with-coreos
16:09:48 diga_: thanks for submitting that
16:10:08 welcome adrian_otto
16:10:19 diga_: we'll take a look at that one.
Have you submitted any reviews against that yet, or is that still in the ideation phase?
16:10:40 any other announcements from team members?
16:10:43 no, I am testing some patches at my end
16:10:43 o/
16:10:51 thanks diga_
16:11:07 (some minor technical difficulties...hello)
16:13:17 ok, let's proceed
16:13:32 #topic Container Networking Subteam Update
16:13:40 The networking subteam had a meeting last Thursday. The meeting summary is here:
16:13:44 #link http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-07-16-18.03.html
16:13:52 Based on feedback from the subteam, the language of the networking blueprint has been modified:
16:13:58 #link https://blueprints.launchpad.net/magnum/+spec/extensible-network-model
16:14:11 ^ pls review and let me know if you have any questions.
16:14:17 I am working through the networking spec and it will be submitted for review tomorrow. I appreciate everyone's input over the past few weeks and look forward to upcoming feedback from the community.
16:14:25 We will meet again this Thursday. Meeting details are here:
16:14:26 also note that the URL of the BP has also changed
16:14:31 #link https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting
16:14:42 Additional supporting blueprints have also been filed. One for expanding the scope of labels and the other for a common pluggable framework:
16:14:46 #link https://blueprints.launchpad.net/magnum/+spec/expand-labels-scope
16:14:51 #link https://blueprints.launchpad.net/magnum/+spec/common-plugin-framework
16:15:01 I look forward to discussing these bp's
16:15:21 daneyon: it looks like the participation level there is healthy
16:15:22 I believe that's it for announcements. Any questions?
16:15:33 do you feel like you have the support needed to succeed?
16:15:33 adrian_otto agreed
16:15:53 and a lot of good dialogue during the meetings and in the etherpad.
16:16:04 adrian_otto I do
16:16:10 ok, great.
16:16:15 any questions about this?
16:16:24 I think it will be important to discuss the above blueprints
16:16:36 neither of them is a show stopper for magnum networking
16:16:55 but they could be beneficial to networking and the magnum project as a whole
16:16:58 daneyon: did you want to give a quick update on the BP as well?
16:17:05 sure
16:17:16 we can do that now
16:17:22 I am halfway through writing the bp
16:17:44 i have incorporated most of the feedback in the etherpad:
16:17:50 #link https://etherpad.openstack.org/p/magnum-native-docker-network
16:18:19 if your feedback was not incorporated, then i should have provided feedback in the ep directly
16:18:38 * tcammann adds to reading list
16:18:49 most of the details of the spec can be viewed in the ep
16:19:30 ok, thanks daneyon
16:19:36 there is the idea of loading plugins through magnum.conf as a near-term step until labels are examined further
16:20:03 pls let me know if you have any concerns managing plugins through magnum.conf
16:20:16 this would be an approach similar to how Neutron manages plugins
16:20:27 it seems that the third link in the Common Plugin Framework BP is broken, is that expected?
16:20:32 if no questions, then that's it
16:20:42 let me check
16:21:01 #topic Review Action Items
16:21:04 brendenblanco good catch
16:21:08 let me update the link
16:21:09 fixed
16:21:14 1) adrian_otto to follow up with Barbican team to arrange assistance for integration with Magnum
16:21:23 the parentheses confuse etherpad.
16:21:33 suspect that wouldn't be a problem in normal rest docs
16:21:47 I did follow up, but I'm not sure if madhuri got enough support. I did see a new introduction yesterday on this.
16:22:03 so although the item is complete, I think this needs continued attention
16:22:14 brendenblanco updated... go ahead and refresh
16:22:33 daneyon: works, thanks!
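The config-driven plugin loading floated above (selecting a network plugin via magnum.conf, as Neutron selects its core plugin via neutron.conf) can be sketched in miniature. This is purely illustrative: the driver names, the `network_driver` key, and the registry helper are all assumptions for the sketch, not Magnum's actual API.

```python
# Minimal sketch of config-driven plugin selection, analogous to how
# Neutron picks a core plugin from its config file. All names here
# (flannel, docker-bridge, network_driver) are illustrative assumptions.
PLUGINS = {}

def register(name):
    """Decorator that records a plugin class under a config-friendly name."""
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

@register("flannel")
class FlannelDriver:
    def network_mode(self):
        return "overlay"

@register("docker-bridge")
class BridgeDriver:
    def network_mode(self):
        return "bridge"

def load_network_driver(conf):
    # conf mimics a parsed [network] section of magnum.conf
    return PLUGINS[conf["network_driver"]]()

driver = load_network_driver({"network_driver": "flannel"})
print(driver.network_mode())  # -> overlay
```

In OpenStack projects this registry role is typically played by stevedore entry points rather than a hand-rolled dict, which is presumably what a Neutron-like approach in Magnum would use.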
16:22:40 #action adrian_otto to verify Magnum team has enough support to integrate with Magnum
16:22:46 brendenblanco yw
16:22:58 2) All core reviewers to review https://review.openstack.org/194905
16:23:16 adrian_otto: s/Magnum/Barbican/ ^^?
16:23:17 it looks like this review needs a revision
16:23:24 #undo
16:23:26 Removing item from minutes:
16:23:39 irc://chat.freenode.net:6667/#action adrian_otto to verify Magnum team has enough support to integrate with Barbican
16:23:50 #action adrian_otto to verify Magnum team has enough support to integrate with Magnum
16:23:54 ok
16:24:26 any remarks on the "Add TLS support in Magnum" spec?
16:24:59 ok, next action item
16:25:00 3) tcammann to move heat-coe-templates repo to project attic, and to relay our resolution on the ML thread: http://lists.openstack.org/pipermail/openstack-dev/2015-July/068381.html
16:25:05 Status?
16:25:09 It's on my to-do list
16:25:12 busy week
16:25:24 #action tcammann to move heat-coe-templates repo to project attic, and to relay our resolution on the ML thread: http://lists.openstack.org/pipermail/openstack-dev/2015-July/068381.html
16:25:39 adrian_otto you may want to double-check your action item above
16:25:39 #topic Blueprint/Bug Review
16:25:50 daneyon: ?
16:25:57 verify Magnum team has enough support to integrate with Magnum
16:26:03 Should that be Barb not M?
16:26:05 I did an undo on that
16:26:14 oops, didn't catch that
16:26:16 sorry
16:26:20 and then I made the same error
16:26:24 whatever. I know what it means
16:26:46 New Blueprints for Discussion
16:27:02 #link https://blueprints.launchpad.net/magnum/+spec/hyperstack Power Magnum to run on metal with Hyper
16:27:16 we have had an ongoing discussion about this on our ML
16:27:45 I've formed an objection to using Magnum as a way to re-implement aspects of Nova
16:28:09 so I've offered guidance to explore adding Hyper as a virt-driver for nova
16:28:29 if you have opinions on this, please join the ML discussion.
Let me find a link to that.
16:29:10 adrian_otto: If there is a hyper virt-driver for nova, will it be something similar to nova-docker?
16:29:14 #link http://lists.openstack.org/pipermail/openstack-dev/2015-July/068574.html Hyper/Magnum ML Discussion
16:29:34 hongbin: yes, nova-docker is also a virt driver
16:29:55 we never anticipated using Magnum as a tool for creating baremetal hosts to bypass Nova
16:30:12 so I'm concerned about that aspect of the proposal
16:30:39 I think we bypass Nova to create a k8s ironic bay?
16:31:06 hongbin: no, we use a Heat template that calls for an OS::Server resource. That uses Nova.
16:31:15 through ironic, correct?
16:31:22 I think so
16:31:45 so I'd be happy to endorse an approach that would closely track that implementation
16:32:01 but that works because there is an ironic virt driver for nova
16:32:29 +1
16:32:44 k
16:32:51 and if/when hyper is added to nova as a virt driver, magnum can take the same approach to support the hyper virt driver
16:33:00 ok, please follow up on the ML for this
16:33:14 Sounds like a consistent approach
16:33:56 I'll keep it on the agenda for next week, to see if we can tweak the BP so we can all feel comfortable with it
16:34:06 Essential Blueprint Updates
16:34:12 #link https://blueprints.launchpad.net/magnum/+spec/objects-from-bay Obtain the objects from the bay endpoint (sdake)
16:34:30 sdake is not present. Did he submit status for this for anyone to share?
16:34:35 my guess is that it's pending.
16:35:55 #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
16:36:26 my understanding on this one is that we are making progress on our Barbican integration
16:36:35 Patch up for cert controller https://review.openstack.org/#/c/203901/
16:36:45 thanks tcammann
16:36:45 I would like to see some more work on the spec though
16:36:53 agreed
16:37:21 tcammann: are you ever around when madhuri is?
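The ironic-bay point above (bays go through Nova even on baremetal, because the Heat template declares a Nova server resource) can be sketched as a minimal HOT fragment. The "OS::Server" mentioned in the discussion is presumably the standard OS::Nova::Server resource type; the image and flavor values below are illustrative assumptions.

```yaml
# Minimal HOT sketch: a Magnum-style bay node is created through Nova,
# so a baremetal (ironic) bay works because Nova has an ironic virt
# driver. Image and flavor names are assumptions for illustration.
heat_template_version: 2014-10-16
resources:
  kube_master:
    type: OS::Nova::Server
    properties:
      image: fedora-21-atomic   # assumed image name
      flavor: baremetal         # an ironic flavor routes through the ironic virt driver
      key_name: default
```

The same template shape would support a Hyper-backed bay if and when a hyper virt driver lands in Nova, which is the consistency argument made in the meeting.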
I'm usually leaving for the day as she is arriving
16:37:40 In the mornings, I believe so
16:38:35 ok, can you please check with her to see if she needs more help?
16:38:48 we'll pull in help if she needs it
16:39:06 Ok, I'll try and catch up with her
16:39:15 for a while there we were stuck on Barbican, but I expect we have started moving again
16:39:20 thanks tcammann
16:39:27 #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
16:39:52 I moved to Kubernetes V1
16:40:07 It did require further tweaks in the templates
16:40:56 Tango: are those changes up for review, or might we expect those to come soon?
16:41:42 I have 2 issues right now: resolving the bug in the atomic build, and debugging the config for Kubernetes to talk to OpenStack (as it has changed)
16:42:27 sdake and I were talking to the Atomic folks to get some help. They confirmed the bug and suggested we move to Fedora 22.
16:42:44 so we have a Fedora problem
16:43:00 But so far I have not been able to build with F22. I will need to bug them again
16:43:06 we have tooling (that I think you contributed?) for building new images
16:43:23 ok, so I look forward to an update next time
16:43:31 if you want help before then, let's fire something up on the ML
16:43:50 Right, the build process is ok for f21, but for f22 it's still not fully developed yet on the Atomic side
16:43:57 We might want to start getting creative about where to find help if you feel like you need it
16:44:22 I can certainly use help with the Atomic build problem
16:44:42 another option would be to pick an older version of k8s
16:44:47 Tango so the present recommendation is to maintain f21 for the build setup then?
16:45:11 I can only build with F21 and use a workaround for the bug
16:45:29 ok
16:45:32 So with this, I can get Kubernetes V1 running
16:46:14 I did get confirmation from someone at eBay that they got the V1 load balancer working with OpenStack, so that's good to know
16:46:40 We just need to get the configuration correct
16:46:42 adrian_otto I do not see anything in the tools dir related to building new images. Can you provide a link to the build tool you are referencing?
16:46:59 daneyon: It's still a patch from me
16:47:14 I need to submit another patch based on sdake's comment
16:47:18 #link https://review.openstack.org/#/c/196145/
16:47:30 thanks Hongbin
16:47:40 I haven't heard from anyone else, so I will do that update now
16:47:46 my memory was flawed. It's a guide, not a tool
16:48:05 right, just instructions on how to build
16:48:17 ok, I have a couple more statuses to request
16:48:24 thanks for the update Tango
16:48:44 #link https://blueprints.launchpad.net/magnum/+spec/magnum-swarm-scheduling Provide container anti-affinity through swarm constraint filtering (diga)
16:48:58 diga: I downgraded this from Essential to Medium last week
16:49:08 is that something you want us to revisit?
16:49:08 yes adrian_otto
16:49:17 yes
16:49:51 I will submit a patch on this
16:50:08 because I adjusted the priority I will not ask for weekly reports back on this
16:50:28 but we will look forward to your patches in gerrit
16:50:29 yes, I understand
16:50:35 ok, next one here...
16:50:37 #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
16:50:50 apmelton is not present today
16:50:54 Still dependent on tls kube work
16:51:04 ok, I'll advance to the next
16:51:13 this one is actually a new one
16:51:33 we should have looked at it in the previous sub-topic, but let's check it now...
16:51:34 #link https://blueprints.launchpad.net/magnum/+spec/magnum-service-list Add service list to magnum (suro-patz)
16:51:41 suro-patz: proceed
16:51:55 I am looking for some clarification on the ask of this BP
16:52:27 so this is a previously triaged BP, set to priority Medium
16:52:41 it seems jay-lau-513 wanted us to implement 'magnum service-list' similar to 'nova service-list'
16:52:46 so we are looking at this for team discussion today, not administrative processing?
16:53:01 yes
16:53:05 ok, thanks suro-patz
16:53:16 Can we move this to the ML?
16:53:28 sure tcammann:
16:53:41 I will initiate a thread there, then
16:53:42 tcammann: I did agree to taking a look today, but we are running a little short on time
16:54:16 so let's take this one to the ML, and we can pull in stakeholders to offer feedback
16:54:37 please add yourself as a subscriber on this BP if you would like to discuss it
16:54:50 suro-patz: is that ok with you?
16:55:03 adrian_otto: absolutely
16:55:18 thanks so much. would you like to be assigned to this?
16:55:18 let's proceed
16:55:23 yes, please
16:55:42 ok, you got it.
16:55:48 #topic Open Discussion
16:56:07 A lot of talk around google joining openstack. Have there been any discussions on Google joining the Magnum community?
16:56:23 daneyon: yes. I have been in touch with them
16:56:43 any details that can be shared with the community?
16:56:45 we expect to regroup and plan participation after the OSCON dust has settled
16:56:58 OK
16:57:04 I would like to discuss our heat templates, template definitions and definition entry points.
16:57:20 let me find the announcement for reference, one moment
16:57:27 seems like there is a considerable amount of code duplication among templates of the same COE type
16:57:35 do people agree?
16:57:40 #link http://techcrunch.com/2015/07/16/google-signs-on-as-openstack-sponsor-will-contribute-container-tech/ Google Sponsors OpenStack
16:57:53 daneyon: I agree, was thinking about this the other day
16:57:56 they would have been at a higher level of sponsorship, but the number of seats is limited
16:58:07 if so, it seems like there is an opportunity to make the templates and the associated def's/entry points more composable
16:59:02 tcammann have you put thought into refactoring the templates/defs/ep's?
16:59:10 daneyon: yes, we talked in the past about applying more DRY principles to our templates
16:59:32 My wild idea was to generate the templates programmatically, but I haven't thought too much about it
16:59:40 tcammann let me know if you would like to have a brainstorming session
16:59:53 we stumbled on the fact that HOT format does not allow for conditional logic, so we started thinking about maybe using the environment feature as a workaround
16:59:54 seems like a bp should be filed to refactor our template approach
17:00:06 I'm tempted to table that pursuit until we have a better way to solve it
17:00:19 adrian_otto OK
17:00:26 I am happy to have a BP for that
17:00:35 but I am concerned about distracting us
17:00:40 time's up for today
17:00:45 thanks everyone for attending
17:00:55 tcammann let me know if you want to create the heat refactor bp or I will
17:01:03 our next meeting is Tuesday 2015-07-28 at UTC 2200
17:01:05 np!
17:01:14 #endmeeting
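The environment-feature workaround raised at 16:59:53 (since HOT has no conditional logic) would rely on Heat's resource_registry: instead of if/else inside one template, each bay variant maps the same abstract resource name to a different concrete template. A rough sketch, where the resource alias and file names are illustrative assumptions rather than anything Magnum ships:

```yaml
# env-vm.yaml -- environment selecting the VM flavor of a bay node.
# "Magnum::BayNode" and the template paths are hypothetical names.
resource_registry:
  "Magnum::BayNode": templates/bay-node-vm.yaml
```

```yaml
# env-ironic.yaml -- same abstract resource, baremetal variant.
resource_registry:
  "Magnum::BayNode": templates/bay-node-ironic.yaml
```

The shared parent template would then instantiate `Magnum::BayNode` everywhere, and the choice of environment file at stack-create time plays the role of the missing conditional, which is roughly the DRY refactor the discussion was circling.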