14:01:22 <apuimedo> #startmeeting kuryr
14:01:23 <openstack> Meeting started Mon Jul 31 14:01:22 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:27 <openstack> The meeting name has been set to 'kuryr'
14:01:33 <apuimedo> Hi everybody, who's here for the kuryr meeting?
14:01:48 <ltomasbo> o/
14:01:49 <zengchen1> hi, apuimedo
14:02:45 <apuimedo> kzaitsev_pi: irenab: here?
14:02:51 <apuimedo> hi zengchen1
14:03:32 <kzaitsev_ws> o/
14:03:35 <kzaitsev_ws> sure
14:03:57 <limao> o/
14:04:06 <irenab> hi
14:04:36 <apuimedo> Thank you all for joining
14:04:48 <apuimedo> limao: do you have some kuryr-libnetwork topic?
14:05:25 <limao> apuimedo: https://review.openstack.org/#/c/487802/ https://review.openstack.org/#/c/487258/
14:05:45 <apuimedo> #topic kuryr-libnetwork
14:05:54 <limao> Hyunsun Moon reported some bugs; they are related to docker swarm mode
14:06:08 <apuimedo> #link https://review.openstack.org/#/c/487802/
14:06:14 <apuimedo> #link https://review.openstack.org/#/c/487258/
14:06:27 <limao> Please help to review them
14:06:42 <janonymous> o/
14:07:27 <apuimedo> merging them :-)
14:07:29 <apuimedo> thanks limao
14:07:31 <apuimedo> anything else?
14:07:43 <limao> Another thing is https://bugs.launchpad.net/kuryr-libnetwork/+bug/1703698
14:07:43 <openstack> Launchpad bug 1703698 in kuryr-libnetwork "Couldn't get docker plugin to work" [Undecided,Confirmed]
14:08:14 <limao> Debugging with hongbin; if you have any ideas, please also help by adding comments
14:09:33 <limao> Nothing else, Thanks apuimedo
14:10:00 <apuimedo> limao: so it seems a problem with pyroute2, isn't it?
14:10:43 <limao> apuimedo: maybe; the same code works when it is not in pluginv2
14:11:31 <limao> import pyroute2
14:11:31 <limao> ip = pyroute2.IPDB()
14:11:31 <limao> with ip.interfaces['tap1e510214-51'] as iface:
14:11:33 <limao>     iface.add_ip('fd34:d27c:33d3:0:f816:3eff:fe3f:8693/64')
14:11:38 <apuimedo> mmm
14:12:07 <limao> This works if I run it directly in the vm, but when I run it in the pluginv2 container, it gets some error
14:12:36 <limao> tap1e510214-51 Link encap:Ethernet HWaddr FE:DA:A6:93:E3:28
14:12:37 <limao> inet6 addr: fd34:d27c:33d3::f816:3eff:fe3f:8693%32687/64
14:12:41 <apuimedo> limao: just to debug, can you dump the interfaces with pyroute2 before trying to do the rename and address config?
14:12:57 <apuimedo> ipdb interfaces, to be more precise
14:13:38 <limao> apuimedo: thanks, let me try it
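A minimal sketch of the dump apuimedo suggests, assuming pyroute2's IPDB API; the goal is to compare what the plugin v2 container sees against the VM:

    import pyroute2

    ip = pyroute2.IPDB()
    try:
        # Print every interface IPDB sees before any rename or address
        # config; the keys include both names and numeric indices, so
        # keep only the string keys to avoid duplicates.
        for name, iface in ip.interfaces.items():
            if isinstance(name, str):
                print(name, iface['index'], iface['address'], iface['ipaddr'])
    finally:
        ip.release()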
14:14:05 <limao> (It's strange for me to see %32687 in the ipv6 address)
14:14:28 <irenab> apuimedo: can we take it offline, later on the kuryr channel?
14:15:06 <limao> apuimedo irenab : sure, that's all, thanks
14:15:18 <apuimedo> limao: it's not so strange
14:15:40 <apuimedo> IIRC I saw it when deploying ipv6 at home
14:15:50 <apuimedo> that you could just ping addr%iface
14:15:54 <apuimedo> or something like that
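For the record, the %<number> suffix is the IPv6 scope zone id (the interface index). A small standard-library check of how a zone name maps to that number; 'eth0' here is a hypothetical interface name:

    import socket

    # Link-local (fe80::/10) addresses require a zone suffix; getaddrinfo
    # resolves "%eth0" to the numeric scope id that tools show as "%<index>".
    info = socket.getaddrinfo('fe80::f816:3eff:fe3f:8693%eth0', None)
    print(info[0][4])  # sockaddr tuple: (addr, port, flowinfo, scope_id)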
14:16:06 <apuimedo> #topic kuryr-kubernetes
14:16:18 <limao> apuimedo: thanks for the info
14:16:47 <apuimedo> #info irenab submitted a patch making multi-node devstack possible, where some nodes are only workers
14:17:01 <apuimedo> a lot of work going into devstack lately
14:17:22 <apuimedo> #info I got octavia to run and I'm landing a few patches to make that easy to do with devstack
14:17:40 <apuimedo> #info vikasc patch for network addon seems ready. Missing verification from reviewers
14:18:16 <apuimedo> On that note, irenab, I'm in the middle of making a devstack patch that optionally uses the containerized kuryr (as a follow-up patch to vikas')
14:18:33 <irenab> apuimedo: great
14:18:33 <apuimedo> #info kzaitsev_ws's multi vif patch seems quite ready as well
14:18:46 <kzaitsev_ws> the first one at least
14:18:52 <apuimedo> irenab: kzaitsev_ws: do you want to bring up the vif driver discussion for the record?
14:18:55 <kzaitsev_ws> I just need to do a little cleanup there
14:19:00 <apuimedo> kzaitsev_ws: yes, talking about the first one
14:19:19 <kzaitsev_ws> would try to do that today/tomorrow
14:19:31 <kzaitsev_ws> apuimedo: we can.
14:20:17 <apuimedo> good
14:20:18 <kzaitsev_ws> the basic idea is that danil and I are adding a bunch of things to the generic vif handler. most of our patches parse some specific annotation (spec ports) or resource requests (sriov)
14:20:58 <apuimedo> modifying handlers feels dirty
14:21:11 <kzaitsev_ws> irenab noted that this pollutes the generic vif handler and makes it less supportable. also, if you want to add a new way to request some additional vifs, you have to edit the handler's code
14:21:11 <apuimedo> I'm in the same boat with lbaas l2/l3 configurability for octavia
14:21:32 <apuimedo> I spent most of Friday trying to find something that didn't cause me eye pain
14:21:51 <apuimedo> irenab: kzaitsev_ws: Did you come up with some proposal?
14:22:00 <apuimedo> Personally I considered a few things
14:22:06 <kzaitsev_ws> so the idea so far is to add some generic code that would get (from config, for starters) a list of enabled drivers and would pass the pod obj to them.
14:22:16 <apuimedo> one of them was to have multiple handlers
14:22:22 <irenab> similar to neutron ml2
14:22:53 <apuimedo> irenab: kzaitsev_ws: So instead of multiple vif handlers, you'd move the split inside the vif handler
14:23:13 <apuimedo> my thought was to just register multiple handlers
14:23:20 <apuimedo> less code reuse though
14:23:32 <kzaitsev_ws> say we have a config var: enabled_drivers='pool,sriov,spec_ports'; then vif_handler passes the pod to each driver and gets vifs from it
14:23:40 <irenab> yes, since it's mainly parsing the annotation, most of the code should be similar
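A rough sketch of the dispatch kzaitsev_ws describes; the names below (AdditionalVIFDriver, collect_vifs, ENABLED_DRIVERS) are hypothetical, not existing kuryr-kubernetes code:

    # The generic vif handler would delegate to a configurable list of
    # drivers instead of parsing every annotation or resource request itself.
    ENABLED_DRIVERS = ['pool', 'sriov', 'spec_ports']  # from config in practice

    class AdditionalVIFDriver(object):
        def request_additional_vifs(self, pod):
            """Return the extra vifs this driver wants for the given pod."""
            return []

    def collect_vifs(pod, registry):
        # registry maps driver names to instantiated driver objects
        vifs = []
        for name in ENABLED_DRIVERS:
            vifs.extend(registry[name].request_additional_vifs(pod))
        return vifs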
14:24:00 <kzaitsev_ws> apuimedo: I thought we've been there with the multiple handlers though
14:24:15 <kzaitsev_ws> I mean that was my very first attempt at sriov =)
14:24:44 <kzaitsev_ws> ivc was strongly against the idea
14:24:56 <irenab> I recall we discussed this, but do not remember the details
14:25:05 <apuimedo> :P
14:25:11 <apuimedo> I remember the opposition
14:25:13 <irenab> I tend to second ivc though
14:25:40 <kzaitsev_ws> I'll get to the prototype right after the cleanup =)
14:25:47 <apuimedo> kzaitsev_ws: irenab: I would just make handlers configurable once and for all
14:25:56 <kzaitsev_ws> as that's the pre-requisite currently anyways
14:25:58 <apuimedo> and then have multiple handlers for the pod event
14:26:39 <kzaitsev_ws> apuimedo: that's roughly equivalent. you either make handlers configurable or drivers configurable
14:26:45 <apuimedo> I know
14:26:55 <irenab> apuimedo: let's see what kzaitsev_ws comes up with
14:26:56 <kzaitsev_ws> I'd even say that drivers may mean less code )
14:27:00 <apuimedo> but since we need configurable handlers anyway...
14:27:06 <kzaitsev_ws> we can do both and see which one we like more?
14:27:15 <apuimedo> kzaitsev_ws: alright. Sounds like a plan
14:27:35 <irenab> then generic will probably be renamed to modular or something like it
14:27:35 <apuimedo> #action kzaitsev_ws danil to come up with a configurable drivers PoC
14:27:51 <apuimedo> #action apuimedo to come up with configurable handlers PoC
14:28:09 <apuimedo> instead of ML2 MLPod
14:28:12 <apuimedo> :D
14:28:37 <irenab> yea
14:28:58 <apuimedo> #info an OpenDaylight local.conf sample was merged this week as well
14:29:24 <apuimedo> janonymous: how is the split CNI work progressing?
14:29:29 <apuimedo> any other roadblocks?
14:30:03 <ltomasbo> apuimedo, I would like to have a spec for OpenDaylight, similar to the opencontrail one
14:30:14 <ltomasbo> BM with ODL is already working
14:30:23 <ltomasbo> but there is a problem with nested, same as for OVN and DF
14:30:41 <janonymous> apuimedo: oh, yeah, just testing more
14:30:54 <apuimedo> ltomasbo: I saw the spec and the blueprint
14:30:55 <ltomasbo> I filed a couple of bugs for networking-odl and networking-ovn, as the problem with the nested kuryr is related to the subport not becoming active
14:31:03 <janonymous> apuimedo: and will improve it based on reviews
14:31:13 <apuimedo> it felt like most of the work needs to land on networking-odl and networking-ovn, isn't that right?
14:31:28 <irenab> ltomasbo: it is fixed for DF
14:31:45 <apuimedo> irenab: you and omer fix DF too fast
14:31:52 <ltomasbo> apuimedo, yes, and regarding the spec, I kind of agree with kzaitsev_ws that there is not much to do
14:31:53 <irenab> apuimedo: :-)
14:32:21 <irenab> ltomasbo, apuimedo: do we need the spec in kuryr then?
14:32:26 <apuimedo> ltomasbo: you had some -1s to the spec, right?
14:32:36 <ltomasbo> but I would like to land something similar to opencontrail, perhaps it is better as a doc
14:32:42 <ltomasbo> apuimedo, yep, due to this reason
14:32:44 <apuimedo> irenab: personally I don't mind. But if I have to choose, I prefer a doc
14:33:00 <ltomasbo> well, the second -1 is because I'm missing an 's'
14:33:04 <apuimedo> a doc section for SDNs
14:33:05 <irenab> I agree, spec means there is some feature to add
14:33:10 <apuimedo> with explanation of how to use it
14:33:12 <apuimedo> tradeoffs
14:33:14 <apuimedo> etc
14:33:29 <apuimedo> even some graphs
14:33:34 <apuimedo> that's what I'd like to see
14:33:36 <ltomasbo> ok
14:33:42 <kzaitsev_ws> >"do we need the spec in kuryr then" that was exactly my though. although I would not attempt to block it if you guys think it's worth having it in say for patch-grouping or history reasons
14:33:43 <ltomasbo> I'll move it then
14:33:44 <apuimedo> so the users of these SDNs can get a better idea
14:33:55 <apuimedo> thanks ltomasbo
14:34:01 <irenab> apuimedo: in general, maybe we should pay more attention to the documentation
14:34:05 <kzaitsev_ws> looks like a BP would suffice )
14:34:12 <ltomasbo> I just put it there as there is a similar document for OpenContrail there
14:34:14 <apuimedo> irenab: yes. We rely too much on kzaitsev_ws and you
14:34:30 <apuimedo> I plan on adding a section on loadbalancing soon
14:34:39 <irenab> ltomasbo: probably need the same for Dragonflow
14:34:46 <apuimedo> irenab: would be nice
14:34:54 <ltomasbo> irenab, it would be nice, yes!
14:35:02 <irenab> apuimedo: I think we should add documentation topic to the VTG agenda
14:35:20 <apuimedo> Finally somebody proposes a topic
14:35:22 <apuimedo> :-)
14:35:24 <ltomasbo> :D
14:35:41 <apuimedo> irenab: I'll note it down and put it up for the topic votes
14:35:50 <irenab> not as exciting as a new feature, but still something that a maturing project needs to have
14:36:16 <ltomasbo> irenab, by the way, do you have a link about how you fixed it in DF? (the subport active problem)
14:36:29 <irenab> ltomasbo: sure
14:36:58 <ltomasbo> we already proposed a fix (that I don't like) for OVN, and I'm looking for a similar solution for ODL
14:37:10 <irenab> https://review.openstack.org/#/c/487305/
14:37:15 <ltomasbo> irenab, thanks!
14:39:42 <irenab> ltomasbo: let me know what you think once you check it
14:40:48 <ltomasbo> we thought about the same solution for OVN, but there was some stuff using some neutron fields (binding-profile) that was breaking it
14:41:11 <ltomasbo> and it seems it is being used by some plugins and cannot be reverted
14:41:46 <apuimedo> anything else for kuryr-kubernetes?
14:41:47 <ltomasbo> so, we ended up having to use that field too: https://review.openstack.org/#/c/488354/
14:42:42 <ltomasbo> irenab, the only thing I don't like (but I'm not sure how to do it in a different way) is having to check/subscribe to port updates at the trunk service
14:43:28 <irenab> ltomasbo: agree with you, but we didn't find an alternative
14:43:56 <ltomasbo> :D same here
14:44:03 <irenab> maybe trunk service should update subports
14:44:20 <apuimedo> irenab: it does not?!
14:44:31 <irenab> not on trunk status change
14:44:44 <irenab> only configurable stuff
14:45:04 <irenab> anyway, we can take it offline
14:45:10 <ltomasbo> yep
14:45:18 <apuimedo> very well
14:45:22 <apuimedo> let's move to fuxi then
14:45:25 <apuimedo> #topic fuxi
14:45:30 <zengchen1> great
14:45:30 <apuimedo> zengchen1: you have the floor
14:46:06 <zengchen1> ok
14:46:17 <zengchen1> 1. I started to develop the provisioner service which creates/deletes PVs for k8s.
14:46:38 <zengchen1> 2. The flex volume driver patches need more reviews. I hope they will be merged asap.
14:46:38 <zengchen1> 3. Will Fuxi-kubernetes be released along with Pike? I think it cannot be released until the provisioner service is implemented.
14:47:37 <zengchen1> apuimedo: i have these 3 things to talk about.
14:48:23 <apuimedo> zengchen1: please, remember to add me as a reviewer, otherwise I fail to notice them
14:48:37 <irenab> zengchen1: same here
14:48:51 <zengchen1> apuimedo, irenab: I will.
14:49:10 <apuimedo> zengchen1: I would expect fuxi-kubernetes to see its release with Queens
14:49:10 <zengchen1> apuimedo, irenab: thanks very much
14:49:54 <zengchen1> apuimedo: got it. I think fuxi-k8s will work well by Queens
14:50:06 <apuimedo> great!
14:50:15 <zengchen1> I still have one question. I see kuryr accesses the watch interface of k8s directly to get each event for a resource. But I also see projects like kubernetes-incubator/external-storage use the library at https://github.com/kubernetes/client-go/blob/master/tools/cache/controller.go to watch resources, which is more complex than the current mechanism of kuryr. My question is why kuryr does not implement a similar library to watch the resources.
14:50:21 <apuimedo> zengchen1: and if you feel like we should have some intermediate release, that can be done as well
14:50:41 <zengchen1> apuimedo:understand
14:51:22 <apuimedo> zengchen1: we have a patch from janonymous to move to https://github.com/kubernetes-incubator/client-python
14:51:34 <apuimedo> which should eventually have feature parity with the golang client
14:51:52 <apuimedo> we just have not had the time to test it extensively enough to do the switch
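For context, the watch API in that client looks roughly like this (a sketch; exact call names can differ between client versions):

    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Stream ADDED/MODIFIED/DELETED events instead of hand-rolling the
    # HTTP watch request against the API server.
    for event in w.stream(v1.list_pod_for_all_namespaces):
        print(event['type'], event['object'].metadata.name)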
14:52:33 <zengchen1> apuimedo: I have reviewed that patch, but I think that is not enough.
14:53:05 <apuimedo> zengchen1: do you have some suggestion?
14:53:31 <apuimedo> I have to say a few times I've considered a move to golang, which is what we wanted to do from the start (but the TC did not allow it back then)
14:54:12 <zengchen1> apuimedo: I mean https://github.com/kubernetes/client-go/blob/master/tools/cache/controller.go is still more complex and maybe more robust
14:54:41 <janonymous> zengchen1:  so is it go vs python efficiency?
14:54:51 <zengchen1> apuimedo: I will try to implement a similar library like that in fuxi-k8s.
14:55:04 <kzaitsev_ws> there are a couple of small general topics I wanted to bring up before the meeting ends.
14:55:14 <kzaitsev_ws> pls ping me if now's the time )
14:55:58 <apuimedo> janonymous: it's not about efficiency. It's about the engineering that has been put in the go client vs the python client
14:56:05 <apuimedo> zengchen1: it is indeed more robust
14:56:16 <zengchen1> apuimedo: not only for efficiency but also for correctness
14:56:21 <apuimedo> exactly
14:56:43 <zengchen1> apuimedo: client-go uses two interfaces, list and watch
14:56:47 <apuimedo> zengchen1: ovn kubernetes moved to golang because of the robustness
14:57:12 <janonymous> apuimedo:  agree
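The list+watch pattern zengchen1 refers to, sketched in Python terms; this shows the idea behind client-go's cache controller, not its actual implementation:

    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()
    # 1) LIST gives a consistent snapshot plus a resourceVersion.
    pods = v1.list_pod_for_all_namespaces()
    cache = {p.metadata.uid: p for p in pods.items}
    # 2) WATCH resumes from that version, so no event is missed between
    #    the snapshot and the stream; on a "410 Gone" you re-list.
    w = watch.Watch()
    for event in w.stream(v1.list_pod_for_all_namespaces,
                          resource_version=pods.metadata.resource_version):
        obj = event['object']
        if event['type'] == 'DELETED':
            cache.pop(obj.metadata.uid, None)
        else:
            cache[obj.metadata.uid] = obj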
14:58:13 <apuimedo> We have to close the meeting
14:58:21 <zengchen1> apuimedo: sorry, what does 'ovn' mean?
14:58:21 <apuimedo> anybody has anything else to add?
14:58:32 <kzaitsev_ws> apuimedo: me just a couple of quick points
14:58:37 <apuimedo> zengchen1: it's an SDN from the openvswitch folks
14:58:42 <apuimedo> #topic general
14:59:07 <apuimedo> zengchen1: I look forward to seeing the patches for robustness
14:59:07 <kzaitsev_ws> 1) If there are no objections I would ask to be release liaison for kuryr.
14:59:16 <apuimedo> kzaitsev_ws: you have my support
14:59:20 <irenab> +1
14:59:23 <kzaitsev_ws> I know that's a lot of power to ask for and we have a bit of time left
14:59:34 <kzaitsev_ws> so we can decide that next week (=
15:00:02 <kzaitsev_ws> 2nd one is related — I would probably ask for permissions to tend to milestones and releases on launchpad
15:00:22 <irenab> apuimedo: I think you started some
15:00:22 <kzaitsev_ws> I do that for murano and could as well keep kuryr's lp up to date )
15:00:24 <apuimedo> kzaitsev_ws: I agree with that as well
15:00:30 <apuimedo> but yes, send an email to the mailing list
15:00:44 <kzaitsev_ws> so no rush to do that. will bring it up next meeting (= or on the ML
15:00:48 <apuimedo> #info This week is the time for PTL candidacy
15:01:09 <apuimedo> if you feel like being PTL, don't forget to send a self-nomination
15:01:10 <irenab> apuimedo: you do it perfectly
15:01:24 <apuimedo> irenab: far from that. I learn on the way
15:01:28 <apuimedo> and play it by ear
15:01:41 <apuimedo> alright, we're overtime
15:01:44 <apuimedo> let's close this
15:01:49 <apuimedo> thank you all for joining today!
15:01:51 <apuimedo> #endmeeting