14:01:22 #startmeeting kuryr
14:01:23 Meeting started Mon Jul 31 14:01:22 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:27 The meeting name has been set to 'kuryr'
14:01:33 Hi everybody, who's here for the kuryr meeting?
14:01:48 o/
14:01:49 hi, apuimedo
14:02:45 kzaitsev_pi: irenab: here?
14:02:51 hi zengchen1
14:03:32 o/
14:03:35 sure
14:03:57 o/
14:04:06 hi
14:04:36 Thank you all for joining
14:04:48 limao: do you have some kuryr-libnetwork topic?
14:05:25 apuimedo: https://review.openstack.org/#/c/487802/ https://review.openstack.org/#/c/487258/
14:05:45 #topic kuryr-libnetwork
14:05:54 Hyunsun Moon reported some bugs; they are related to docker swarm mode
14:06:08 #link https://review.openstack.org/#/c/487802/
14:06:14 #link https://review.openstack.org/#/c/487258/
14:06:27 Please help to review
14:06:42 o/
14:07:27 merging them :-)
14:07:29 thanks limao
14:07:31 anything else?
14:07:43 Another thing is https://bugs.launchpad.net/kuryr-libnetwork/+bug/1703698
14:07:43 Launchpad bug 1703698 in kuryr-libnetwork "Couldn't get docker plugin to work" [Undecided,Confirmed]
14:08:14 Debugging with hongbin; if you have any idea, please also help to add comments
14:09:33 Nothing else, thanks apuimedo
14:10:00 limao: so it seems to be a problem with pyroute2, doesn't it?
14:10:43 apuimedo: maybe, the same code works when it is not in pluginv2
14:11:31 import pyroute2
14:11:31 ip = pyroute2.IPDB()
14:11:31 with ip.interfaces['tap1e510214-51'] as iface:
14:11:33     iface.add_ip('fd34:d27c:33d3:0:f816:3eff:fe3f:8693/64')
14:11:38 mmm
14:12:07 This works if I run it directly in the vm, but when I run it in the pluginv2 container, it gets an error
14:12:36 tap1e510214-51 Link encap:Ethernet HWaddr FE:DA:A6:93:E3:28
14:12:37 inet6 addr: fd34:d27c:33d3::f816:3eff:fe3f:8693%32687/64
14:12:41 limao: just to debug, can you dump with pyroute2 the interfaces before trying to do the rename and address config?
14:12:57 ipdb interfaces, to be more precise
14:13:38 apuimedo: thanks, let me try it
14:14:05 (It's strange for me to see %32687 in the ipv6 address)
14:14:28 apuimedo: can we take it offline, later on the kuryr channel?
14:15:06 apuimedo irenab: sure, that's all, thanks
14:15:18 limao: it's not so strange
14:15:40 IIRC I saw it when deploying ipv6 at home
14:15:50 that you could just ping addr%iface
14:15:54 or something like that
14:16:06 #topic kuryr-kubernetes
14:16:18 apuimedo: thanks for the info
14:16:47 #info irenab submitted a patch to make multi-node devstack possible where some nodes are only workers
14:17:01 a lot of work going into devstack lately
14:17:22 #info I got octavia to run and I'm landing a few patches to make that easy to do with devstack
14:17:40 #info vikasc's patch for the network addon seems ready. Missing verification from reviewers
14:18:16 On that note, irenab, I'm in the middle of making a devstack patch that optionally uses the containerized kuryr (as a follow-up patch to vikas')
14:18:33 apuimedo: great
14:18:33 #info kzaitsev_ws's multi vif patch seems quite ready as well
14:18:46 the first one at least
14:18:52 irenab: kzaitsev_ws: do you want to bring up the vif driver discussion for the record?
14:18:55 I just need to do a little cleanup there
14:19:00 kzaitsev_ws: yes, talking about the first one
14:19:19 would try to do that today/tomorrow
14:19:31 apuimedo: we can.
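(For reference, a minimal sketch of the IPDB dump apuimedo suggests above at 14:12:41, using the same pyroute2 API as limao's snippet; it assumes it is run from the same place as the failing code, and is only a debugging aid, not kuryr-libnetwork code:)

    import pyroute2

    # Open an IPDB view of the network namespace the plugin runs in and list
    # what it actually sees before attempting the rename/address config.
    ipdb = pyroute2.IPDB()
    try:
        # keys include both interface names and indexes, so each interface
        # shows up twice; that is fine for a quick debug dump
        for key in ipdb.interfaces.keys():
            iface = ipdb.interfaces[key]
            print(key, iface['ifname'], iface['address'])
    finally:
        ipdb.release()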
14:20:17 good
14:20:18 the basic idea is that danil and I are adding a bunch of things to the generic vif handler. Most of our patches are parsing some specific annotation (spec ports) or resource requests (sriov)
14:20:58 modifying handlers feels dirty
14:21:11 irenab noted that this pollutes the generic vif handler and makes it less supportable. Also, if you want to add a new way to request some additional vifs, you have to edit the vif handler's code
14:21:11 I'm in the same boat with lbaas l2/l3 configurability for octavia
14:21:32 I spent most of friday trying to find something that didn't make my eyes hurt
14:21:51 irenab: kzaitsev_ws: Did you come up with some proposal?
14:22:00 Personally I considered a few things
14:22:06 so the idea so far is to add some generic code that would get (from config for starters) a list of enabled drivers and would pass the pod obj to them.
14:22:16 one of them was to have multiple handlers
14:22:22 similar to neutron ml2
14:22:53 irenab: kzaitsev_ws: So instead of multiple vif handlers, you'd move the split inside the vif handler
14:23:13 my thought was to just register multiple handlers
14:23:20 less code reuse though
14:23:32 say we have a config var: enabled_drivers='pool,sriov,spec_ports'; then vif_handler passes the pod to each driver and gets vifs from it
14:23:40 yes, since it's mainly to parse the annotation, most of the code should be similar
14:24:00 apuimedo: I thought we've been there with the multiple handlers though
14:24:15 I mean that was my very first attempt at sriov =)
14:24:44 ivc was gravely against the idea
14:24:56 I recall we discussed this, but do not remember the details
14:25:05 :P
14:25:11 I remember the opposition
14:25:13 I tend to second ivc though
14:25:40 I'll get to the prototype right after the cleanup =)
14:25:47 kzaitsev_ws: irenab: I would just make handlers configurable once and for all
14:25:56 as that's the pre-requisite currently anyways
14:25:58 and then have multiple handlers for the pod event
14:26:39 apuimedo: that's roughly equivalent. You either make handlers configurable or drivers configurable
14:26:45 I know
14:26:55 apuimedo: let's see what kzaitsev_ws comes up with
14:26:56 I'd even say that drivers may mean less code )
14:27:00 but since we need configurable handlers anyway...
14:27:06 we can do both and see which one we like more?
14:27:15 kzaitsev_ws: alright. Sounds like a plan
14:27:35 then generic will probably be renamed to modular or something like it
14:27:35 #action kzaitsev_ws danil to come up with a configurable drivers PoC
14:27:51 #action apuimedo to come up with a configurable handlers PoC
14:28:09 instead of ML2, MLPod
14:28:12 :D
14:28:37 yea
14:28:58 #info an OpenDaylight local.conf sample was merged this week as well
14:29:24 janonymous: how is the split CNI work progressing?
14:29:29 any other roadblocks?
14:30:03 apuimedo, I would like to have a spec for OpenDaylight, similar to the opencontrail one
14:30:14 BM with ODL is already working
14:30:23 but there is a problem with nested, similarly for OVN and DF
14:30:41 apuimedo: oh, yeah, just testing more
14:30:54 ltomasbo: I saw the spec and the blueprint
14:30:55 I filed a couple of bugs for networking-odl and networking-ovn, as the problem with the nested kuryr is related to the subport not becoming active
14:31:03 apuimedo: and would improve on reviews
14:31:13 it felt like most of the work needs to land on networking-odl and networking-ovn, isn't that right?
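(A rough, hypothetical sketch of the enabled_drivers idea kzaitsev_ws describes above at 14:22:06 and 14:23:32; the class, registry, and option names are purely illustrative and not actual kuryr-kubernetes code:)

    # hypothetical driver registry; real drivers would register themselves here
    DRIVER_REGISTRY = {}

    class SriovVIFDriver:
        """Illustrative driver: requests extra VIFs based on pod resource requests."""
        def request_vifs(self, pod):
            # inspect the pod's annotations / resource requests and return
            # whatever additional VIFs this driver is responsible for
            return []

    DRIVER_REGISTRY['sriov'] = SriovVIFDriver

    class VIFHandler:
        def __init__(self, enabled_drivers):
            # e.g. enabled_drivers = ['pool', 'sriov', 'spec_ports'] read from config
            self._drivers = [DRIVER_REGISTRY[name]() for name in enabled_drivers]

        def on_present(self, pod):
            vifs = []
            for driver in self._drivers:
                # each enabled driver parses only the part of the pod spec it
                # cares about and contributes its VIFs, so the generic handler
                # stays thin and new request styles don't touch its code
                vifs.extend(driver.request_vifs(pod))
            return vifs

    # usage: handler = VIFHandler(['sriov']); handler.on_present(pod)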
14:31:28 ltomasbo: it is fixed for DF
14:31:45 irenab: you and omer fix DF too fast
14:31:52 apuimedo, yes, and regarding the spec, I kind of agree with kzaitsev_ws that there is not much to do
14:31:53 apuimedo: :-)
14:32:21 ltomasbo: apuimedo: do we need the spec in kuryr then?
14:32:26 ltomasbo: you had some -1s to the spec, right?
14:32:36 but I would like to land something similar to opencontrail, perhaps it is better as a doc
14:32:42 apuimedo, yep, due to this reason
14:32:44 irenab: personally I don't mind. But if I have to choose, I prefer a doc
14:33:00 well, the second -1 is because I'm missing an 's'
14:33:04 a doc section for SDNs
14:33:05 I agree, soec means there is some feature to add
14:33:10 with an explanation of how to use it
14:33:12 tradeoffs
14:33:14 etc
14:33:21 s/soec/spec
14:33:29 even some graphs
14:33:34 that's what I'd like to see
14:33:36 ok
14:33:42 >"do we need the spec in kuryr then" that was exactly my thought. Although I would not attempt to block it if you guys think it's worth having it in, say for patch-grouping or history reasons
14:33:43 I'll move it then
14:33:44 so the users of these SDNs can get a better idea
14:33:55 thanks ltomasbo
14:34:01 apuimedo: in general, maybe we should pay more attention to the documentation
14:34:05 looks like a BP would suffice )
14:34:12 I just put it there as there is a similar document for OpenContrail there
14:34:14 irenab: yes. We rely too much on kzaitsev_ws and you
14:34:30 I plan on adding a section on loadbalancing soon
14:34:39 ltomasbo: probably need the same for Dragonflow
14:34:46 irenab: would be nice
14:34:54 irenab, it will be nice, yes!
14:35:02 apuimedo: I think we should add a documentation topic to the VTG agenda
14:35:20 Finally somebody proposes a topic
14:35:22 :-)
14:35:24 :D
14:35:41 irenab: I'll note it down and put it up for the topic votes
14:35:50 not as exciting as a new feature, but still something that a maturing project needs to have
14:36:16 irenab, by the way, do you have a link about how you fixed it in DF? (the subport active problem)
14:36:29 ltomasbo: sure
14:36:58 we already proposed a fix (that I don't like) for OVN, and I'm looking for a similar solution for ODL
14:37:10 https://review.openstack.org/#/c/487305/
14:37:15 irenab, thanks!
14:39:42 ltomasbo: let me know what you think once you check it
14:40:48 we thought about the same solution for OVN, but there was some stuff that was using some neutron fields (binding-profile) that was breaking it
14:41:11 and it seems it is being used by some plugins and cannot be reverted
14:41:46 anything else for kuryr-kubernetes?
14:41:47 so, we ended up having to use that field too: https://review.openstack.org/#/c/488354/
14:42:42 irenab, the only thing I don't like (but I'm not sure how to do it in a different way) is to have to check/subscribe to port updates at the trunk service
14:43:28 ltomasbo: agree with you, but we didn’t find an alternative
14:43:56 :D same here
14:44:03 maybe the trunk service should update subports
14:44:20 irenab: it does not?!
14:44:31 not on trunk status change
14:44:44 only configurable stuff
14:45:04 anyway, we can take it offline
14:45:10 yep
14:45:18 very well
14:45:22 let's move to fuxi then
14:45:25 #topic fuxi
14:45:30 great
14:45:30 zengchen1: you have the floor
14:46:06 ok
14:46:17 I've started to develop the provisioner service, which creates/deletes PVs for k8s.
14:46:38 2. The patches for the flex volume driver need more reviews. I hope they will be merged asap.
14:46:38 3. Will Fuxi-kubernetes be released along with Pike? I think it cannot be released until the provisioner service is implemented.
14:47:37 apuimedo: I have these 3 things to talk about.
14:48:23 zengchen1: please, remember to add me as a reviewer, otherwise I fail to notice them
14:48:37 zengchen1: same here
14:48:51 apuimedo: irenab: I will.
14:49:10 zengchen1: I would expect fuxi-kubernetes to see its release with Queens
14:49:10 apuimedo: irenab: thanks very much
14:49:54 apuimedo: got it. I think fuxi-k8s will work well by Queens
14:50:06 great!
14:50:15 I still have one question. I see kuryr accesses the watch interface of k8s directly to get each resource event. But I also see that projects like kubernetes-incubator/external-storage use the library at https://github.com/kubernetes/client-go/blob/master/tools/cache/controller.go to watch resources, which is more complex than the current mechanism of kuryr. My question is why kuryr does not implement a similar library to watch the resources.
14:50:21 zengchen1: and if you feel like we should have some intermediate release, that can be done as well
14:50:41 apuimedo: understood
14:51:22 zengchen1: we have a patch from janonymous to move to https://github.com/kubernetes-incubator/client-python
14:51:34 which should eventually have feature parity with the golang client
14:51:52 we just have not had the time to test it extensively enough to do the switch
14:52:33 apuimedo: I have reviewed that patch, but I think that is not enough.
14:53:05 zengchen1: do you have some suggestion?
14:53:31 I have to say a few times I've considered a move to golang, which is what we wanted to do from the start (but the TC did not allow it back then)
14:54:12 apuimedo: I mean https://github.com/kubernetes/client-go/blob/master/tools/cache/controller.go is still more complex and maybe more robust
14:54:41 zengchen1: so is it go vs python efficiency?
14:54:51 apuimedo: I will try to implement a similar library like that in fuxi-k8s.
14:55:04 there are a couple of small general topics I wanted to bring up before the meeting ends.
14:55:14 pls ping me if now's the time )
14:55:58 janonymous: it's not about efficiency. It's about the engineering that has been put into the go client vs the python client
14:56:05 zengchen1: it is indeed more robust
14:56:16 apuimedo: not only the efficiency but also the correctness
14:56:21 exactly
14:56:43 apuimedo: client-go uses the two interfaces, list and watch
14:56:47 zengchen1: ovn kubernetes moved to golang because of the robustness
14:57:12 apuimedo: agree
14:58:13 We have to close the meeting
14:58:21 apuimedo: sorry, what does 'ovn' mean?
14:58:21 does anybody have anything else to add?
14:58:32 apuimedo: me, just a couple of quick points
14:58:37 zengchen1: it's an SDN from the openvswitch folks
14:58:42 #topic general
14:59:07 zengchen1: I look forward to seeing the patches for robustness
14:59:07 1) If there are no objections I would ask to be release liaison for kuryr.
14:59:16 kzaitsev_ws: you have my support
14:59:20 +1
14:59:23 I know that's a lot of power to ask for and we have a bit of time left
14:59:34 so we can decide that next week (=
15:00:02 2nd one is related: I would probably ask for permissions to tend milestones and releases on launchpad
15:00:22 apuimedo: I think you started some
15:00:22 I do that for murano and could as well keep kuryr's lp up to date )
15:00:24 kzaitsev_ws: I agree with that as well
15:00:30 but yes, send an email to the mailing list
15:00:44 so no rush to do that. Will bring it up at the next meeting (= or on the ML
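(For reference on the list/watch discussion earlier in the fuxi topic, a minimal sketch of watching pod events with the kubernetes-incubator/client-python library that janonymous's patch moves to; this is an illustrative usage example, not kuryr's or fuxi-kubernetes' actual watcher code:)

    from kubernetes import client, config, watch

    config.load_kube_config()  # use load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()

    # stream ADDED/MODIFIED/DELETED events instead of polling the API server
    w = watch.Watch()
    for event in w.stream(v1.list_pod_for_all_namespaces):
        print(event['type'], event['object'].metadata.name)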
15:00:48 #info This week is the time for PTL candidacy
15:01:09 if you feel like being PTL, don't forget to send a self-nomination
15:01:10 apuimedo: you do it perfectly
15:01:24 irenab: far from that. I learn on the way
15:01:28 and play it by ear
15:01:41 alright, we're overtime
15:01:44 let's close this
15:01:49 thank you all for joining today!
15:01:51 #endmeeting