16:01:08 #startmeeting containers
16:01:09 Meeting started Tue Oct 13 16:01:08 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:14 The meeting name has been set to 'containers'
16:01:16 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-10-13_1600_UTC Our Agenda
16:01:21 #topic Roll Call
16:01:22 o/
16:01:24 Adrian Otto
16:01:24 o/
16:01:27 o/
16:01:27 o/
16:01:27 o/
16:01:30 o/
16:01:32 o/
16:01:34 o/
16:01:37 o/
16:01:43 o/
16:01:46 O/
16:01:47 o/
16:01:48 o/
16:01:55 o/
16:01:56 o/
16:02:19 o/
16:02:30 o/
16:02:32 Ton Ngo
16:04:23 hello
16:05:07 o/
16:05:14 whoops, I had a local network glitch, sorry about that.
16:05:36 #topic Announcements
16:05:39 1) Our PTL Election is complete. Based on the results, I will continue as your PTL for the Mitaka release.
16:05:54 Congratulations adrian_otto1
16:05:57 cool. congrats adrian
16:05:59 Congratulations
16:06:02 Congrats!
16:06:02 adrian_otto1 congrats!
16:06:02 congrats
16:06:05 con!
16:06:06 congrats!
16:06:06 congrats
16:06:11 con!
16:06:12 congratulations!
16:06:18 I am proud to be part of such a terrific team, thank you all.
16:06:25 2) Release is cutting tonight, for liberty/stable, and all open work will need to be resubmitted against this branch.
16:06:39 we have a bunch of straggling bits that we need to land
16:06:59 * joining late *
16:07:26 hi all
16:07:26 so my guidance here is that if you don't have a significant objection to merging the work relating to our essential blueprints, that we merge it with tech debt filed against it
16:07:40 adrian_otto1 do you have the etherpad that we used to track what needs to land for L?
16:07:52 yes, one moment
16:08:05 https://etherpad.openstack.org/p/magnum-liberty-release-todo is this the one
16:08:15 adrian_otto1 It would be nice to review the ep and cross off/update to make sure we're not missing anything.
16:08:28 vilobhmm11: yes, that's it, thanks
16:08:49 adrian_otto : np
16:08:50 so if we can manage to merge all of that by tonight, then great
16:08:59 #link https://etherpad.openstack.org/p/magnum-liberty-release-todo
16:09:09 adrian_otto1 i found it ^
16:09:09 if not, then we will need to abandon those reviews and resubmit them against the new branch
16:09:20 nevermind
16:09:21 or release without them
16:09:52 continuing with announcements:
16:10:05 o/
16:10:11 how about the features which are not complete
16:10:14 3) We will not have a meeting the Tuesday after next because of the OpenStack Summit in Tokyo
16:10:27 I have updated the meeting schedule accordingly.
16:10:49 wanghua: we need to evaluate them individually
16:11:12 the meeting time will be changed after summit ?
16:11:26 based on what I have seen in review, it's probably smarter to merge what we have up for review rather than releasing without them
16:11:39 adrian_otto: Congrats for becoming PTL once again for Mitaka :)
16:11:48 and then revisit concerns as a follow-up pursuit
16:11:54 tx diga
16:12:01 Agree
16:12:09 +1
16:12:53 the good news is that our list is only half as long as last week, but we need to draw the line now.
16:13:15 I am still willing to run demos for the Magnum session using code from master
16:13:28 but the code in the release needs to work
16:13:48 and I am also willing to continually cut revisions as we add meaningful features
16:14:01 I am willing to cut a release every day if that makes sense
16:14:20 the OpenStack release process for us is really not that hard
16:14:35 ok, any more announcements from team members?
16:14:42 adrian_otto: daily release would be useful, if we have some validation commitment
16:15:10 I will be raising the functional testing topic just before open discussion
16:15:19 sorry I forgot to place that on the agenda wiki page
16:15:40 #topic Container Networking Subteam Update (daneyon_)
16:15:56 #link http://eavesdrop.openstack.org/meetings/container_networking/2015 Previous Meetings
16:16:03 thanks
16:16:11 we had our usual meeting last week
16:16:24 there are a few action items from the meeting that I would like to address
16:16:38 1) ACTION: danehans to address how to add new drivers with adrian_otto
16:17:10 Does anyone have an opinion how we should support additional network drivers?
16:17:53 Until the heat templates get refactored, it will be difficult to have the drivers out of tree.
16:18:24 daneyon_: I think better we can use kuryr API's internally as we have VIF support now
16:18:27 For the time being, drivers will be different heat template fragments, or we add conditional logic to existing fragments.
16:18:57 humm, sounds a bit messy
16:18:59 but what about drivers that do not fall under Kuryr
16:19:27 Ideally, each driver should be mapped to a heat resource
16:19:28 This could be a lengthy discussion that we will need to address at the design summit
16:19:38 diga: we cannot use kuryr, because it's Docker specific, not COE agnostic
16:19:46 daneyon_: ok, let's table this
16:19:49 o/ [some IRC client technical difficulties..]
16:19:52 okay
16:19:57 2) ACTION: danehans to follow-up with adrian_otto regarding summit schedule and details.
16:20:03 I have an action item for closing out the topics for Tokyo
16:20:15 I am trying to coordinate with gsagie from the kuryr team.
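[Editor's note] The interim approach discussed above amounts to mapping each network driver to its own heat template fragment. A minimal Python sketch of that lookup, assuming hypothetical fragment paths and driver names (flannel was Magnum's default at the time; the calico entry is purely illustrative, not something in the Magnum tree):

```python
# Hypothetical sketch: select a heat template fragment per network
# driver, as discussed in the meeting. Paths and driver names are
# illustrative, not Magnum's actual layout.
TEMPLATE_FRAGMENTS = {
    "flannel": "fragments/network-flannel.yaml",
    "calico": "fragments/network-calico.yaml",
}


def fragment_for_driver(driver):
    """Return the template fragment path for a named network driver."""
    try:
        return TEMPLATE_FRAGMENTS[driver]
    except KeyError:
        raise ValueError("unsupported network driver: %s" % driver)


print(fragment_for_driver("flannel"))  # fragments/network-flannel.yaml
```

A registry like this also hints at why out-of-tree drivers are hard until the templates are refactored: the mapping, and the fragments it points to, live inside the tree.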
16:20:20 OK
16:20:25 I have a tool I can use to update the titles and abstracts in the program
16:20:43 Otherwise, the swarm patch is complete: https://review.openstack.org/#/c/224367/
16:20:52 so I'll be making the selections based on the topics wiki we referenced last week
16:20:56 WHOOT
16:21:17 I know it's a big one. I could chip away some of the code, but aligning the swarm templates with the k8s templates makes the patch look bigger than it is
16:21:25 excellent daneyon!
16:21:26 that was around 2000 lines of change
16:21:44 This is b/c the top-level yaml (swarm.yaml) has the master resource pulled out into master.yaml.
16:21:47 next time, let's try to break that work up a bit more
16:21:57 so it will be easier to review and merge
16:22:05 Again, I am trying to make the swarm templates look as much like the k8s templates as possible.
16:22:18 yes, that was the bulk of the change set
16:22:19 adrian_otto will do
16:22:28 dane_leblanc is testing the patch
16:22:40 I believe apmelton will too.
16:22:47 but I urge reviewers not to -1 that particular patch on that basis
16:23:06 but that we offer our contributors guidance as the work comes in
16:23:07 Be a big help if the cores can do a review when time permits
16:23:20 will do, daneyon_
16:23:23 will have a look
16:23:27 I know we have big fish to fry to get L out the door, so I'm not sweating it.
16:23:32 Thanks all.
16:23:40 That's it from me unless there are questions.
16:24:10 we can take questions in open discussion
16:24:13 would like to see test results after every change of template since we don't have functional testing yet.
16:24:19 thanks daneyon_
16:24:29 #topic Magnum UI Subteam Update (bradjones__)
16:24:34 hey
16:24:47 so the main update this week is a big refactor of the bay model table UI
16:24:53 #link https://review.openstack.org/#/c/212039/
16:25:00 really need that patch to land asap
16:25:31 I have managed to rope in Rob Cresswell, who works on horizon, to help out with some new blueprints
16:25:51 He is going to be working on the UI for Containers
16:26:09 so taking the work I have done for bay models and bays and moving it over for that resource
16:26:34 I don't think there is anything up for review yet but he talked me through what is there and it looks good so far
16:26:44 so hopefully we can get that in before tokyo too
16:27:03 Chris Hoge from the OpenStack Foundation asked about this. There is an opportunity to showcase this as part of the Liberty release marketing.
16:27:14 but what's there looks pretty thin
16:27:41 awesome
16:27:45 but we don't have any more time really
16:28:16 adrian_otto: once the review I mentioned previously goes in, in addition to the create view that will be up shortly, there is actually a usable UI
16:28:26 bradjones__: I wanted to get a sense from you how much functionality is still up for review that is close to landing
16:28:39 I voted on the one you mentioned
16:28:53 adrian_otto: ah yes I see thanks
16:29:13 ok, so what can we do to help you fast-track the create view?
16:29:46 I will push it up for review in the next hour or so, then if we can just get it merged as quick as possible
16:29:58 once that is done, if a few people would actually run it
16:30:08 and test that the workflow seems good, that would be really useful feedback
16:30:36 ok, thanks bradjones__
16:30:44 any more on this topic before we advance ?
16:30:54 I think that's all for now thanks
16:31:00 thanks bradjones__
16:31:04 #topic Review Action Items
16:31:13 1) adrian_otto to check into finalizing our summit discussion topic schedule, and release it for addition to the main schedule
16:31:18 Status: in progress
16:31:23 #action adrian_otto to check into finalizing our summit discussion topic schedule, and release it for addition to the main schedule
16:31:38 that concludes action items from last week
16:31:41 #topic Blueprint/Bug Review
16:32:02 Essential Blueprint Updates
16:32:07 #link https://blueprints.launchpad.net/magnum/+spec/objects-from-bay Obtain the objects from the bay endpoint (vilobhmm11)
16:32:19 three reviews are still up for this.
16:32:22 https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/objects-from-bay,n,z we have +1 for pod/rc patches…
16:32:28 from Jenkins
16:32:45 jay lau, hongbin and adrian_otto thanks for the review
16:33:10 there are remaining comments from hongbin on https://review.openstack.org/223367
16:33:21 that were not solved in the most recent patchset
16:33:23 hongbin has nit comments on these patches to change a variable name to another
16:33:33 should be a simple fix for those
16:33:47 he just asked for a few variables to be renamed
16:33:49 yes adrian_otto, after the meeting I will upload the variable name change
16:33:54 ok, thanks
16:33:54 yes you are right
16:34:06 thx
16:34:08 so need reviews with these patches
16:34:23 ok, after we have those merged, we can mark this BP as Implemented, correct?
16:34:33 yes adrian_otto
16:34:49 excellent! Let's do it today.
16:34:53 ok
16:34:56 thanks!
16:35:02 that's it from my side
16:35:23 thanks vilobhmm11
16:35:26 #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
16:36:04 Actual functionality patches have been merged
16:36:13 Just the guide is remaining
16:36:26 That needs a revision
16:36:40 ok, great
16:37:00 will we be able to get the guide done before we branch tonight?
16:37:07 Madhuri: I'd like to see the guide land in the L release, if you can make it
16:37:33 actually I don't have a magnum env to test that guide
16:37:45 Madhuri: I can test it
16:37:46 aah, we can help with that
16:37:49 Can we merge with tech-debt
16:38:04 Will be a great help
16:38:04 I can give you a fresh environment off of master if you want to run it through manual tests
16:38:47 Adrian I will not be able to use your env, no internet to do that now
16:38:56 oh, ok
16:38:58 I am currently online on phone
16:39:03 yikes!
16:39:14 Hongbin can you do that?
16:39:15 yikes indeed!
16:39:20 yes
16:39:28 okay, can I have a volunteer to run through the doc to verify it and record any gaps as bugs against it?
16:39:37 Thanks hongbin
16:39:43 thanks hongbin
16:39:45 np
16:39:47 <3
16:40:03 I think that will complete our bp
16:40:03 adrian_otto: i did yesterday and it works
16:40:12 oh, that's terrific!!
16:40:25 A few improvements are needed but that can be done later I guess
16:40:29 thanks eghobo
16:40:38 +1 eghobo
16:40:45 yes, let's merge and iterate on it
16:40:49 Can we merge it, if eghobo has tested it?
16:40:58 yes.
16:40:59 +1
16:41:04 +1
16:41:12 and we can continue to scrutinize the release branch
16:41:20 Yes sure
16:41:44 thank you Madhuri
16:42:05 That's all
16:42:11 Thanks
16:42:35 great, thanks Madhuri
16:42:42 #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
16:42:46 Implemented, right?
16:42:58 Yes 😊
16:42:59 All the current patches merged, thanks everyone for the reviews
16:43:11 Tango : +1
16:43:21 I'd like to de-scope the func test
16:43:25 I opened 2 tech debt bugs for user credential and functional test
16:43:37 scope that into another child blueprint
16:43:50 One minor tweak to the doc, will try to get that in
16:43:51 ok, or bugs are ok
16:44:09 Either way is OK
16:44:14 drop the #8 work item, and link the bugs to the BP
16:44:31 then we can address them as follow-ups
16:44:46 ok, sounds good. Are you creating the child BP? or should I do that?
16:45:02 yes, please take that. I am here if you need any help.
16:45:10 ok, I will do that
16:45:12 #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
16:45:21 I think this is Implemented too
16:45:27 adrian_otto: yea, that was done last week
16:45:40 sweet, that's it...
16:45:43 next topic
16:45:55 #topic Functional Testing Strategy
16:46:07 maybe less of a strategy and more of a tactical plan
16:46:16 we need a way for more testing to happen in parallel
16:46:33 because we simply can't wait hours for all of our functional tests to run
16:46:51 so we either need the tests to be faster, or for more to happen in parallel… my idea…
16:46:57 One thing is that we can move the magnumclient tests to the python-magnumclient project
16:47:01 we can split functional testing per coe
16:47:17 use the 3rd party CI feature to set up a farm of machines that all do a grouping of functional tests… per COE
16:47:22 eliqiao: yes!
16:47:38 adrian_otto: what does the 3rd party-ness get you though?
16:47:49 this is something we can probably use the OSIC cluster(s) for
16:48:09 adrian_otto: something like tempest is better for functional testing
16:48:11 rlrossit: they get kicked off all at the same time
16:48:29 we can still use tempest for execution of those tests
16:48:30 they are anyway, aren't they?
or at least they're added to the queue of jobs that need to be run
16:48:49 rlrossit: right now the concurrency in our gates is set to 1
16:48:52 our current func tests appear to be serialized
16:49:01 each time we add one, our runtime gets longer
16:49:15 dimtruck: How do we fix that?
16:49:26 has anyone looked at http://docs.openstack.org/developer/tempest/plugin.html ?
16:49:43 we can remove it but then it's an added problem of having multiple bay creates at the same time in our gates
16:50:00 and from what i've gathered that would make things even slower
16:50:00 I'm thinking we need a different job for each coe/os
16:50:17 granted it will make more queue jobs happen, but the queue is handled by zuul in parallel
16:50:21 dimtruck: We should deal with that gracefully, and if we don't then it's a bug
16:50:34 +1 for split by COE
16:50:45 we will discuss this more in Tokyo
16:50:50 in a workroom session
16:50:55 has anyone successfully been able to run a number of bay CRUDs (swarm or k8s) for a longish period of time?
16:51:00 but I want something to get us through the next two weeks
16:51:00 +1 , will in.
16:51:05 adrian_otto: makes sense
16:51:13 because we are hitting the upper limit of the 2 hour runtime limit
16:51:19 what are our options?
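[Editor's note] The argument above is that serialized gate runs grow linearly with each suite added, while one zuul job per COE caps wall-clock time at the slowest suite. A toy Python sketch of that arithmetic, with made-up per-suite runtimes (the numbers and the mesos entry are hypothetical, not measured gate times):

```python
# Toy model of serialized vs per-COE parallel gate jobs, as argued
# in the meeting. Suite names and minute counts are illustrative.
from concurrent.futures import ThreadPoolExecutor

SUITES = {"k8s": 50, "swarm": 45, "mesos": 40}


def serialized_runtime(suites):
    # One gate job with concurrency=1 runs every suite back to back,
    # so total wall-clock time is the sum of all suites.
    return sum(suites.values())


def parallel_runtime(suites):
    # One gate job per COE; the jobs run side by side, so wall-clock
    # time is bounded by the slowest single suite.
    with ThreadPoolExecutor(max_workers=len(suites)) as pool:
        per_job = list(pool.map(lambda coe: suites[coe], suites))
    return max(per_job)


print(serialized_runtime(SUITES))  # 135 minutes, past the 2-hour limit
print(parallel_runtime(SUITES))    # 50 minutes, the longest single suite
```

Adding a new COE suite then costs only its own runtime in the parallel model, instead of lengthening every gate run.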
16:51:33 it's taking 1 hour on most runs
16:51:36 tcammann: nod
16:51:36 dmitryme: I did
16:51:45 adrian_otto: I can take a look at it and put something in the ML about it
16:51:52 but I am using a new version of the atomic image
16:52:03 tcammann: we have new tests in review that double that runtime
16:52:08 oh I see
16:52:12 someone please help to check this https://review.openstack.org/#/c/232421/
16:52:14 so I am thinking of -2 on that work
16:52:22 which pains me so
16:52:23 -2
16:52:36 I hate to block tests from merging, but I can't break the gate
16:52:44 completely agree
16:53:10 We have lived without so far
16:53:25 well if it keeps failing jenkins we don't have to worry about it merging
16:53:28 ok, so unless a better solution is proposed, that's what I will do
16:53:45 rlrossit: well, yes… but sometimes you might land on a really fast node
16:53:55 oh this actually is a race
16:53:57 my bad
16:54:00 yes
16:54:03 I thought it was an always failing thing
16:54:09 it failed once
16:54:13 I don't know about always
16:54:15 rlrossit: re: tempest plugin - that's the next step we should take
16:54:37 ok, I'm going to advance to Open Discussion
16:54:43 offhand, is there someone we know in a similar sized or larger project that has implemented our intended solution successfully?
16:54:46 we can keep brainstorming on this topic as well
16:54:47 +1 for tempest
16:54:57 juggler: nova and cinder
16:55:01 and neutron, I think
16:55:09 #topic Open Discussion
16:55:15 for driver testing
16:55:26 COE testing is analogous to driver testing
16:55:27 ah
16:55:38 adrian_otto: one topic to discuss before cut
16:55:49 eghobo: yes?
16:56:13 we need to move to the new atomic image which Tango built recently
16:56:41 This item is on the todo list.
16:56:41 yes
16:56:47 this image works for kube and swarm
16:56:53 yes, we need to get that onto tarballs.rackspace.com
16:56:55 +1
16:57:03 so we can reference it for download as an image
16:57:13 that will allow us to use it in gate tests
16:57:27 So that's different from the fedorapeople site?
16:57:42 yes, but it's just an optimization
16:57:56 adrian_otto: do we have a wiki on how to use the new image on the gate?
16:58:08 the tarballs site is more local to the machines that run CI
16:58:32 ok, who will copy the images there?
16:58:34 eliqiao: I am not sure, but our friends in #openstack-infra are always very helpful on that topic
16:58:41 I don't find any script on CI to pull that image.
16:59:00 adrian_otto: i think it will take time, why can't we use the current model?
16:59:07 coming to the end of our time now
16:59:24 the new image is in fedora public and anyone can use it
16:59:36 please upgrade CI to use the new image
16:59:36 we will have one more team meeting before the summit, on Tuesday 2015-10-20 at 1600 UTC
17:00:00 we can continue in #openstack-containers
17:00:10 thanks for attending everyone. I'm super pumped!!
17:00:16 #endmeeting