14:01:15 #startmeeting kuryr
14:01:15 Meeting started Mon Jan 30 14:01:15 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:18 The meeting name has been set to 'kuryr'
14:01:29 Hello everybody and welcome to another kuryr meeting!
14:01:34 Who's here for the show?
14:01:37 o/
14:01:38 o/
14:01:39 o/
14:01:44 hi
14:02:25 ivc_: ltomasbo: ?
14:02:30 o/
14:02:33 ivc_: is already here, I'm on the moon
14:02:36 sorry
14:02:42 apuimedo :)
14:02:53 alright then!
14:02:57 let's move forward
14:03:04 #topic kuryr-libnetwork
14:03:10 o/
14:04:03 o/
14:04:19 #info yedongcan's tag optimization patch seems to have progressed well https://review.openstack.org/#/c/420610/
14:04:32 irenab: please review it when you get the chance
14:04:53 apuimedo: ok
14:05:28 Can I ask about Magnum<>Kuryr integration?
14:05:39 Gideon: sure thing
14:05:57 Is it planned for a specific OS release?
14:06:49 #info yedongcan's https://review.openstack.org/419735 is also ready for merge
14:06:56 tag support is improving :-)
14:07:21 Gideon: if there are people working on it, we could probably do Pike for kuryr-libnetwork
14:07:37 for kuryr-kubernetes it may be a bit tighter, but we'd like to get there too
14:08:06 it's just that there's nobody sponsoring this work directly at the moment, although hongbin has been pushing on it
14:08:16 and we've been doing the groundwork necessary to tackle it
14:08:55 I think the remaining work is outside our repos, so we have to do a bit of asking
14:09:48 Thanks. I'm from Nokia and we see added value in integrating both. I'll check with Red Hat.
14:09:55 #info janonymous's TLS patch that adds TLS support for our ipam/rd driver seems ready https://review.openstack.org/#/c/410609/
14:10:22 Gideon: I remember you from the meeting we had
14:10:45 ;-)
14:10:48 oh. The nicknames are not so clear...
14:11:00 I could not find the reason for the non-voting gate failure though...
14:11:04 I'm the bearded PTL of Kuryr (who works at RH)
14:11:11 Haha
14:11:17 janonymous: that's exactly what I was gonna ask you
14:11:22 Antony
14:11:27 Gideon: yes, Antoni
14:11:38 apuimedo: i tried digging down, i saw only a busybox failure
14:11:51 nothing that might fail the gate though
14:12:25 janonymous: could you run the fullstack tests locally on a new 16.04 machine and paste me the output?
14:12:42 which brings up:
14:12:56 I would like to propose making the fullstack tests voting
14:13:06 janonymous: in k8s or libnetwork?
14:13:08 I think they have been non-voting for enough time
14:13:09 apuimedo: sure, will do it tomorrow on top priority!
14:13:17 irenab: kuryr-libnetwork
14:13:19 thanks Jaivish
14:13:21 ok
14:14:33 alraddarla: what's the status on https://review.openstack.org/#/c/422394/
14:14:45 I saw there was some discussion, but I think I probably missed some context
14:15:06 Essentially that patch and mine could easily be merged. I was waiting to see what everyone wanted to do.
14:15:21 I am more than happy to merge that patch with mine and put the co-author tag on.
14:15:31 No one ever responded to my question though, so I didn't want to overstep
14:15:47 alraddarla: doesn't it make more sense to simply rebase rajiv's patch on top of yours?
14:16:53 That would work as well!
If you guys would like to merge mine
14:17:03 https://review.openstack.org/#/c/424198/
14:17:13 At least review it, I mean :)
14:17:50 alraddarla: please, remember to add people as reviewers, otherwise sometimes we miss patches ;-)
14:18:13 (just did for this patch now)
14:18:38 #action apuimedo irenab vikasc limao to review the reno support patch https://review.openstack.org/#/c/424198/
14:19:08 #action rajiv rebase https://review.openstack.org/#/c/422394/2 on top of https://review.openstack.org/#/c/424198/
14:19:15 Anything else on kuryr-libnetwork?
14:20:03 alright then... Moving on!
14:20:04 apuimedo, yes. My apologies. I forgot
14:20:13 #topic fuxi
14:20:29 hongbin is not here today since it's the Chinese New Year holidays :-)
14:21:07 #info The proposal to containerize fuxi has been accepted for Pike https://blueprints.launchpad.net/kolla/+spec/containerized-fuxi
14:21:40 #info Kuryr-kubernetes has been accepted as a subproject https://review.openstack.org/#/c/423791/
14:21:45 fsck!!!
14:22:07 #info fuxi-kubernetes has been accepted as a subproject https://review.openstack.org/#/c/423791/
14:22:31 but why fuxi-kubernetes?
14:22:33 For those keeping score at home, k8s already has Cinder integration, but only for when k8s runs on OpenStack VMs
14:22:38 this is for bare metal
14:22:58 sorry, but I wanted to know about fuxi-kubernetes, as kubernetes already has cinder support
14:23:16 janonymous: as I stated above, for bare-metal cases
14:23:22 yeah
14:23:25 and maybe for some functionality like snapshotting
14:23:30 but we'll see about that one
14:23:30 apuimedo: thanks
14:23:36 I want to keep the scope small
14:23:58 We already have bare-metal kubernetes in scope, so it was logical to accept the proposal
14:24:16 yeah great!
14:24:21 but replicating the in-tree functionality is going to be a harder sell :-)
14:24:31 #topic kuryr-kubernetes
14:25:19 apuimedo: regarding the previous topic
14:25:20 Thanks janonymous for https://review.openstack.org/#/c/424972/1
14:25:21 :-)
14:25:33 irenab: go ahead, sorry I closed it so fast
14:25:42 fuxi is for libnetwork and fuxi-kubernetes is for k8s?
14:25:42 trivial bump :D
14:26:30 irenab: yes. We should probably consider a rename of 'fuxi'
14:26:43 to keep consistency with the networking naming
14:26:44 apuimedo: yes, will be less confusing
14:27:10 irenab: also, I'm not sure the fuxi folks shouldn't just add handlers and drivers to kuryr-kubernetes
14:27:29 otherwise they are going to duplicate a good bit of effort
14:27:40 maybe they can import kuryr-k8s in their repo
14:27:50 but that's gonna be painful with the fast evolution
14:27:52 apuimedo: I also would like to understand more about what is going to land there
14:28:28 apuimedo: was there any email on this?
14:28:30 irenab: it will be tackled in the VTG session
14:28:44 there were some emails, but for some reason they did not send them to the list
14:29:14 ok, it just looks like it's getting crowded in the kuryr arena :-)
14:29:14 I'll start a thread in the ML
14:29:43 irenab: I still want to possibly help fuxi move out of home, since it's starting to grow a beard
14:30:06 #action apuimedo to send ML thread about fuxi-kubernetes
14:30:53 #info Merging ivc_'s patch to move from id to selfLink to ease troubleshooting https://review.openstack.org/#/c/423903/2
14:31:45 #action irenab to look at this dragonflow gate failure https://review.openstack.org/#/c/425597/1
14:32:04 apuimedo: https://bugs.launchpad.net/dragonflow/+bug/1660346
14:32:04 Launchpad bug 1660346 in DragonFlow "setting initial networking with df devstack fails" [Undecided,New]
14:32:16 irenab: that was fast!
14:32:24 :-)
14:32:32 #info df nv gate affected by https://bugs.launchpad.net/dragonflow/+bug/1660346
14:33:05 #action ivc_ to address comments on https://review.openstack.org/#/c/422946/
14:33:07 :-)
14:33:50 This patch helps us get a better understanding of resourceVersion conflicts instead of just failing when there's a mismatch
14:33:52 thanks for that!
14:35:52 #info ivc_ pushed a temp improvement for skipping events that would force us into repeating handler work https://review.openstack.org/#/c/422910/ It will make the handler 'actor' for an entity sleep to increase the likelihood of having the annotation by the time the next "irrelevant" event comes around
14:36:36 The idea is to make some code in the k8s client that handles this case more gracefully, but for now it is a good improvement
14:36:53 #action irenab ltomasbo vikasc to review https://review.openstack.org/#/c/422910/
14:37:15 apuimedo: there is also the k8s-provided client in incubation state, we may want to move to using it
14:37:16 ok, I'll take a look asap
14:37:25 irenab: we do want to move to using it
14:37:46 irenab: we have to propose it for openstack/requirements if we see it as likely to be maintained
14:37:57 apuimedo: +1
14:38:06 I think the Magnum team is interested as well
14:38:17 irenab: got a contact?
14:38:28 (on who's been looking into it on the Magnum side)
14:38:38 apuimedo: saw some email on this, can check later
14:39:04 #info ltomasbo has been working on port reuse so that we reduce the cost of adding new pods to networks by using ports that were previously freed
14:39:10 #link https://review.openstack.org/#/c/426687/
14:39:15 irenab: thanks irena!
14:39:45 yes, but that is just initial work
14:39:55 I will update the blueprint with more detailed information
14:40:14 apuimedo: http://lists.openstack.org/pipermail/openstack-dev/2017-January/111021.html
14:40:34 #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/111021.html
14:40:38 and also push some changes to cover some of the TODOs
14:40:56 ltomasbo: maybe better to update the existing devref or add a devref for this optimization
14:40:56 basically ltomasbo's patch saves the Neutron port creation (it still updates the port with the name of the pod to ease troubleshooting), but the OVS binding needs to happen due to the veths getting destroyed when k8s bombs the infra container
14:41:18 ltomasbo: did you try what happens in OVS native mode if you use an internal port for the pod instead of a veth?
14:41:51 irenab: ltomasbo: what do you think about a different devref about resource management?
14:41:58 or is it better to use the same?
14:41:59 +1
14:42:07 better separate
14:42:10 apuimedo, I didn't try that
14:42:11 (I'd probably use a different one and I'd link it from the original one)
14:42:29 ltomasbo: don't let me distract you then :P
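(A rough sketch of the direction discussed above for handling resourceVersion conflicts more gracefully: refresh the resource and retry the annotation update instead of failing the handler on the first mismatch. The client, method names and exception type here are illustrative assumptions, not the actual kuryr-kubernetes API.)

```python
import time


class ConflictError(Exception):
    """Hypothetical stand-in for a 409 Conflict from the K8s API."""


def annotate_with_retry(client, path, annotations, retries=3, backoff=1.0):
    # Sketch only: re-read the resource to pick up the current
    # resourceVersion and retry the update when our version is stale,
    # rather than failing the whole handler on the first mismatch.
    for attempt in range(retries):
        resource = client.get(path)  # refresh resourceVersion
        try:
            return client.annotate(
                path, annotations,
                resource_version=resource['metadata']['resourceVersion'])
        except ConflictError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))  # crude linear backoff
```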
14:42:41 apuimedo, resource management for the ports pool?
14:43:14 #action ltomasbo to push a basic resource management devref. Maybe even an etherpad or public gdocs
14:43:28 think it makes sense, yes
14:43:30 ltomasbo: I would suggest maintaining ports per project id to be ready to extend for multi-tenancy
14:43:38 ltomasbo: resource management in general, you take the first part "Port resources"
14:43:40 :-)
14:44:01 irenab: we'll have to account for policy too :P
14:44:17 but we'll see about that when we get there
14:44:28 with the name update we can probably update the sg
14:44:45 the tenancy may need a separate grouping
14:44:52 ivc_: any news on services?
14:45:05 maybe we can just have different pools for different tenants
14:45:09 and security groups
14:45:10 Could you use some help with the UTs?
14:45:17 ltomasbo: sounds good
14:45:36 o/
14:45:40 ltomasbo: for tenants I'd say yes, for SGs let's see if it's not more trouble than it is worth
14:45:45 hongbin: you should be at a party!
14:45:54 sorry, need to leave
14:45:54 apuimedo: :)
14:45:54 we already updated on fuxi :P
14:46:00 irenab: thanks for joining!
14:46:01 apuimedo: sure. thx
14:46:27 hongbin: we agreed to send an email to the ML about fuxi-kubernetes
14:46:37 apuimedo: sure thing
14:46:40 we wondered if the work couldn't be done in kuryr-kubernetes
14:46:47 since it's about adding handlers and drivers
14:47:02 and it could then share the controller code
14:47:04 i am open to community input
14:47:08 but it needs further discussion
14:47:14 sure
14:48:17 hongbin: also, Gideon was asking about the kuryr-Magnum integration. I wonder if you could put some email on the ML about the missing pieces so people can sign up for them
14:48:33 (asking you since you are the most Magnum-knowledgeable guy we have :P )
14:48:41 apuimedo: about libnetwork or kubernetes?
14:49:12 apuimedo: sure, i will try that
14:49:13 hongbin: both :-)
14:49:19 apuimedo: ack
14:49:34 thanks a lot hongbin!
14:49:41 apuimedo: np
14:49:42 and thanks for joining even on holidays
14:49:58 #topic general discussion
14:50:06 Does anybody else have a topic to bring up?
14:50:14 yes
14:50:15 Question: We use Contrail at AT&T... have you all started any integration with Contrail? If yes, can we help? If no, are you interested?
14:51:36 alraddarla: we have not started and of course we are interested!
14:51:55 alraddarla: for kuryr-libnetwork, kuryr-kubernetes or both?
14:52:17 kuryr-kubernetes... potentially both eventually
14:52:35 very well
14:52:49 in that case, let's go over it now on #openstack-kuryr
14:52:51 :-)
14:52:57 (after the meeting)
14:53:01 Anything else?
14:53:42 I sent this a week ago or so
14:54:13 apuimedo: It would be great if the 28th Feb meeting could be scheduled at the end, if possible.
14:54:21 I sent this a week ago or so https://review.openstack.org/#/c/422641
14:54:51 janonymous: you mean the virtual team gathering?
14:55:12 yeah, but that is up to the community completely; I would suggest 1, 2, 3 March
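(A minimal sketch of the ports-pool idea ltomasbo and irenab discussed above: freed Neutron ports are kept in pools keyed by project id, and tentatively by security groups, so a new pod only pays for a port name update instead of a full port creation. All names and call shapes are illustrative assumptions, not the eventual kuryr implementation; whether SGs belong in the key was left open in the meeting.)

```python
import collections


class PortsPool(object):
    """Illustrative reusable-ports pool; not the eventual kuryr code."""

    def __init__(self, neutron_client):
        self._neutron = neutron_client
        # One deque of free ports per (project, security groups) key,
        # per irenab's suggestion to be ready for multi-tenancy.
        self._pools = collections.defaultdict(collections.deque)

    @staticmethod
    def _key(project_id, security_groups):
        return (project_id, tuple(sorted(security_groups)))

    def request_port(self, project_id, security_groups, pod_name):
        pool = self._pools[self._key(project_id, security_groups)]
        try:
            port = pool.popleft()  # fast path: reuse a freed port
        except IndexError:
            # Slow path: create a fresh Neutron port (body shape assumed;
            # network_id and other required fields omitted for brevity).
            port = self._neutron.create_port(
                {'port': {'project_id': project_id,
                          'security_groups': list(security_groups)}})['port']
        # Rename the port after the pod to ease troubleshooting,
        # as mentioned in the meeting.
        self._neutron.update_port(port['id'], {'port': {'name': pod_name}})
        return port

    def release_port(self, project_id, security_groups, port):
        # Keep the port for reuse instead of deleting it.
        self._pools[self._key(project_id, security_groups)].append(port)
```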
14:55:18 ltomasbo: it seems you are blocked on unit tests
14:55:19 :P
14:55:45 there is no unit test for any seg_driver
14:55:53 janonymous: please, send the date change proposal as a reply to the mailing list thread I sent on the virtual team gathering so other people can weigh in
14:56:18 ltomasbo: hence the 'add' of irena
14:56:20 :P
14:56:21 apuimedo: sure, thanks
14:56:40 She doesn't mean modify, but add a new one :P
14:56:46 you can do it as a follow-up
14:57:43 very well, closing time https://www.youtube.com/watch?v=7-0lV5qs1Qw
14:57:50 Thank you all for joining!
14:57:55 #endmeeting