14:03:13 <apuimedo> #startmeeting kuryr
14:03:13 <openstack> Meeting started Mon Aug 14 14:03:13 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:17 <openstack> The meeting name has been set to 'kuryr'
14:03:22 <apuimedo> Hi everybody
14:03:38 <apuimedo> Welcome! who's here today for the weekly IRC meeting?
14:03:43 <irenab> hi
14:03:56 <limao> o/
14:04:26 <ivc> o/
14:05:24 <apuimedo> kzaitsev_ws: janonymous: are you here by chance?
14:05:49 <apuimedo> alright, let's get started
14:05:49 <kzaitsev_ws> tenatively
14:06:03 <apuimedo> kzaitsev_ws: is that tentatively or tenaciously?
14:06:04 <apuimedo> xD
14:06:07 <apuimedo> anyway
14:06:13 <apuimedo> #topic kuryr-libnetwork
14:06:38 <apuimedo> limao: we should make a libnetwork release and branch
14:06:58 <kzaitsev_ws> (=
14:07:01 <apuimedo> for that we should have the rally tests working, so I'm very happy to see your patch in that direction
14:07:15 <limao> https://review.openstack.org/#/c/484555/
14:07:21 <apuimedo> ah, right
14:07:25 <apuimedo> forgot to paste the link
14:07:33 <apuimedo> #link https://review.openstack.org/#/c/484555/
14:07:56 <apuimedo> #link https://review.openstack.org/#/c/487476/
14:07:58 <limao> apuimedo: this is the "Updated from global requirements" patch
14:08:11 <apuimedo> limao: yes, yes. I saw
14:08:13 <apuimedo> I merged it
14:08:21 <limao> ;-)
14:08:24 <apuimedo> limao: do we have anything else apart from the rally fix?
14:08:50 <apuimedo> otherwise we can probably create the openstack/releases patch and mark it as depending on the rally fix
14:09:35 <limao> nothing else to depend on, I think
14:09:43 <apuimedo> perfect
14:09:50 <apuimedo> thanks limao!
14:10:00 <apuimedo> does anybody have anything else on kuryr-libnetwork?
14:10:40 <irenab> nope
14:11:36 <apuimedo> alright
14:11:44 <apuimedo> #topic fuxi
14:12:03 <apuimedo> irenab: I saw you caught up with quite a few reviews on both fuxi and fuxi-kubernetes
14:12:09 <apuimedo> hongbin: are you here?
14:12:18 <hongbin> apuimedo: o/
14:12:24 <zengchen1> hi, guys
14:12:40 <hongbin> i think zengchen1 can cover the fuxi-kubernetes part
14:13:08 <hongbin> one note from me is that i have submitted a release request to cut the stable/pike branch for fuxi
14:13:27 <apuimedo> ah, I didn't see zengchen1 in the channel :P
14:13:34 <apuimedo> hongbin: perfect
14:13:42 <zengchen1> last week, we discussed the design of watch framework. i think we have not understood the mechanism of list+watch.
14:13:51 <hongbin> #link https://review.openstack.org/#/c/491215/
14:14:21 <apuimedo> ah, you went faster than me
14:14:27 <apuimedo> I was searching for the link now :P
14:15:34 <zengchen1> this week i will continue to study the pattern of list + watch and hope to find the principle behind it.
14:15:52 <apuimedo> zengchen1: I would rephrase that as: we understand how list+watch is done, but we do not have the full picture as to why it is better than watch with resourceversion=0
14:16:33 <irenab> is this part of the k8s client related work?
14:16:44 <ivc> apuimedo zengchen1 doesn't watch, even without 'resourceversion=0', return an initial state on the first request anyway?
14:17:00 <apuimedo> irenab: fuxi-kubernetes is translating client-golang into python for its own use
14:17:04 <ivc> (by initial i mean most current)
14:17:14 <apuimedo> ivc: not afaik
14:17:25 <apuimedo> unless it defaults to 0
14:17:35 <apuimedo> which I don't recall if it does
14:17:36 <ivc> thats what i mean
14:17:49 <irenab> apuimedo: seems it can be reused for kuryr-kubernetes as well. Not sure why we need a separate client for fuxi and kuryr
14:18:11 <ivc> iirc if you specify resourceversion=something you only get events after that 'something'
14:18:27 <apuimedo> zengchen1: I think that irenab will +2 once you put a link to a blueprint https://review.openstack.org/#/c/476839/8
14:18:42 <ivc> but if you omit it you get 1 initial event anyway
14:18:52 <apuimedo> irenab: my position is to help this move forward
14:18:56 <zengchen1> apuimedo:ok, i will update
14:19:21 <apuimedo> and if we see it behaves better, to try and contribute it to upstream client-python and/or adopt it in kuryr-kubernetes
14:19:39 <zengchen1> apuimedo:+1
14:20:03 <irenab> apuimedo: ok. I think that eventually it should not be duplicated effort
14:20:21 <apuimedo> zengchen1: is https://review.openstack.org/#/c/489138/ ready for review? I see it posted but nobody was added as a reviewer
14:20:53 <zengchen1> ivc:you have a good understanding of list+watch. could you give more details about that?
14:21:37 <ivc> zengchen1 we can discuss on #openstack-kuryr or in private messages after meeting
14:21:56 <zengchen1> ivc:ok
14:22:19 <apuimedo> ivc: I reconsidered my earlier position on using resourceversion=0 since the k8s devs want to drop that special meaning for v2
14:22:27 <zengchen1> apuimedo:it may still need an update.
14:22:30 <apuimedo> so it is sorta deprecated
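A minimal sketch of the list+watch pattern under discussion, written against the plain Kubernetes REST API rather than any particular client library; the apiserver URL, resource path and handler below are illustrative assumptions. The idea is to list once to get the current objects plus a resourceVersion, then open a watch starting from that version so only newer events are streamed.

    # Minimal list+watch sketch (apiserver endpoint and handler are assumptions).
    import json
    import requests

    API = 'http://127.0.0.1:8080'   # e.g. a local `kubectl proxy` (assumption)

    def list_then_watch(path, handle):
        # 1) LIST: current state plus the resourceVersion to resume from.
        obj_list = requests.get(API + path).json()
        for item in obj_list.get('items', []):
            handle(item)
        version = obj_list['metadata']['resourceVersion']

        # 2) WATCH: stream only the events that happened after that version.
        params = {'watch': 'true', 'resourceVersion': version}
        with requests.get(API + path, params=params, stream=True) as watch:
            for line in watch.iter_lines():
                if line:
                    event = json.loads(line)   # {'type': ..., 'object': ...}
                    handle(event['object'])

    # usage: list_then_watch('/api/v1/pods', lambda obj: print(obj['metadata']['name']))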
14:22:36 <apuimedo> zengchen1: what may need update?
14:22:46 <zengchen1> https://review.openstack.org/#/c/489138/
14:22:53 <apuimedo> ah, perfec
14:22:56 <apuimedo> *perfect
14:22:58 <apuimedo> thanks zengchen1
14:23:06 <apuimedo> anything else on fuxi-kubernetes land?
14:23:28 <zengchen1> apuimedo:first i should merge the watch framework, then update those patches.
14:24:23 <apuimedo> understood
14:24:54 <apuimedo> zengchen1: ivc: if possible, hold the discussion in #openstack-kuryr, so I can watch it and maybe chip in
14:25:00 <apuimedo> thanks zengchen1
14:25:04 <apuimedo> moving on
14:25:10 <apuimedo> #topic kuryr-kubernetes
14:25:40 <zengchen1> apuimedo:ok
14:25:52 <apuimedo> #info last week I discovered a devstack bug when using Octavia. I will file it soon, but in the meantime, I fixed the documentation
14:26:06 <apuimedo> s/fixed/created without the bug/
14:26:14 <apuimedo> https://review.openstack.org/#/c/492959/
14:26:17 <apuimedo> #link https://review.openstack.org/#/c/492959/
14:26:40 <apuimedo> #info I was working on ipv6 and found that we had a couple of bugs for ipv6 pod networking.
14:26:40 <irenab> apuimedo: so devstack is now having some problem?
14:26:50 <apuimedo> #link https://review.openstack.org/#/c/493267/1
14:26:58 <apuimedo> irenab: you're unlikely to face it
14:27:06 <apuimedo> due to it's nature
14:27:10 <apuimedo> s/it's/its/
14:27:54 <apuimedo> irenab: the bug was that if you do not "split" the service subnet between what is in the neutron subnet allocation start/end and what is outside of it
14:28:20 <apuimedo> you could get an address taken by an octavia lb vrrp port that then is claimed by kubernetes as a serviceIP
14:28:28 <apuimedo> and we'd fail to allocate the clusterip that k8s assigned
14:28:43 <apuimedo> so, if your subnet is big, you may not run into it for a while
14:28:52 <apuimedo> anyway, the documentation explains it
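To make the split concrete, a small illustration with made-up addresses: the neutron allocation pool (which Octavia draws its VRRP/VIP ports from) and the range kube-apiserver assigns clusterIPs from must not overlap, for example by giving each one half of the service subnet.

    # Illustrative only; the actual ranges are whatever the deployment uses.
    import ipaddress

    service_subnet = ipaddress.ip_network('10.0.0.0/24')
    cluster_ip_range, neutron_pool = service_subnet.subnets(prefixlen_diff=1)

    print(cluster_ip_range)  # 10.0.0.0/25   -> kube-apiserver --service-cluster-ip-range
    print(neutron_pool)      # 10.0.0.128/25 -> neutron subnet allocation pool start/end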
14:28:58 <irenab> got it
14:29:00 <apuimedo> and devstack will be updated soon
14:29:10 <apuimedo> so I think I only need to create the bug entry :P
14:29:18 <apuimedo> (which should have been done first, but I'm terrible)
14:29:26 <apuimedo> I did file two bugs with Octavia
14:30:04 <apuimedo> #link https://bugs.launchpad.net/octavia/+bug/1709922
14:30:06 <openstack> Launchpad bug 1709922 in octavia "Octavia fails to create a loadbalancer when just getting the VIP port" [Critical,Fix released] - Assigned to Michael Johnson (johnsom)
14:30:53 <janonymous> o/ sry got late :)
14:31:00 <apuimedo> the other one was a duplicate; basically octavia would replace the clusterip with one from the allocation range
14:31:06 <apuimedo> because it was not saving it correctly
14:31:16 <apuimedo> but it got fixed the night before I reported it
14:31:17 <apuimedo> :P
14:31:37 <apuimedo> Regarding IPv6
14:32:00 <apuimedo> with my WIP patch pod networking works
14:32:06 <apuimedo> and shows up well in kubernetes
14:32:19 <apuimedo> but I need to finish documenting it and verifying that services work
14:32:38 <apuimedo> I made a mistake there: I was trying to give k8s a /64 for the clusterip range
14:32:58 <apuimedo> but it doesn't want more than 2²⁰ addrs
14:33:04 <apuimedo> so I am changing now to a /112
14:33:10 <apuimedo> and that seems to be accepted
14:33:15 <irenab> apuimedo: k8s limit?
14:33:15 <apuimedo> will probably update later today
14:33:21 <apuimedo> irenab: yes, hardcoded limit
14:33:26 <apuimedo> a bit strange, if you ask me
14:33:28 <apuimedo> but whatever
14:33:45 <apuimedo> I saw some other SDNs allocating /112 for node addresses so I tried my luck
14:33:53 <apuimedo> and then I found out about the 20 bit mask limitation
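A quick sanity check of the size limit mentioned above (the prefixes are just examples): a /64 is far above the 2**20 address ceiling, while a /112 gives 65536 service addresses and fits.

    import ipaddress

    print(ipaddress.ip_network('fd00::/64').num_addresses > 2**20)   # True  -> rejected
    print(ipaddress.ip_network('fd00::/112').num_addresses)          # 65536 -> accepted
    # e.g. kube-apiserver --service-cluster-ip-range=fd00::/112 (illustrative prefix)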
14:34:08 <apuimedo> I also have to check what the minimum version is
14:36:38 <apuimedo> kzaitsev_ws: any update on the cni multi vif?
14:36:44 <apuimedo> I saw there's a new patch set
14:37:00 <kzaitsev_ws> not really. I've only updated the 1st patch in the queue
14:37:09 <kzaitsev_ws> the one that changes single vif into a dict
14:37:38 <apuimedo> kzaitsev_ws: I wonder, can it be manually tested by crafting a pod with the vif annotation already created?
14:37:43 <kzaitsev_ws> still gotta work on the driver configuration thing we agreed to with irenab
14:37:47 <apuimedo> if so, it would be good to put in the commit message
14:38:30 <kzaitsev_ws> apuimedo: the patch is self-contained, i.e. if you deploy kuryr-k8s with this patch it just starts using dicts instead of vif-objects
14:38:55 <kzaitsev_ws> no special input needed =)
14:38:56 <apuimedo> kzaitsev_ws: but it does not give multiple vifs ;P
14:39:00 <apuimedo> I meant trying that aspect
14:39:05 <kzaitsev_ws> ah
14:39:09 <kzaitsev_ws> right
14:40:07 <kzaitsev_ws> well, you can always copy-paste the code and add like 2 lines to the controller, that would request eth1 from the same subnet ;)
14:40:50 <kzaitsev_ws> would probably be way simpler than pasting/crafting the vif object..
14:40:54 <apuimedo> ok
14:41:04 <apuimedo> kzaitsev_ws: I was planning on starting two pods
14:41:10 <apuimedo> then delete the second
14:41:20 <apuimedo> and copy both vifs
14:41:24 <apuimedo> delete the first
14:41:29 <apuimedo> create the ports manually
14:41:39 <apuimedo> and then create the pod with both annotations
14:41:53 <kzaitsev_ws> that should work too )
14:42:14 <kzaitsev_ws> except that you would have a hard time managing active-ness
14:42:26 <kzaitsev_ws> probably..
14:43:10 <kzaitsev_ws> you would need to mark both things active and I guess it would work then )
14:43:32 <apuimedo> kzaitsev_ws: it's something I've wanted to check for some time
14:43:49 <apuimedo> I think the controller should check for their activity as well
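A rough sketch of the manual test described above: copy the serialized vif annotation from existing pods and re-apply it onto a freshly created pod. The annotation key and the exact layout of the multi-vif value depend on the patch itself, and the apiserver endpoint below assumes a local kubectl proxy, so all of those should be treated as assumptions.

    # Illustrative helpers for copying/re-applying the vif annotation.
    import json
    import requests

    API = 'http://127.0.0.1:8080'            # kubectl proxy (assumption)
    VIF_KEY = 'openstack.org/kuryr-vif'      # annotation key (assumption)

    def pod_url(namespace, name):
        return '%s/api/v1/namespaces/%s/pods/%s' % (API, namespace, name)

    def get_vif_annotation(namespace, name):
        pod = requests.get(pod_url(namespace, name)).json()
        return pod['metadata']['annotations'][VIF_KEY]

    def set_vif_annotation(namespace, name, value):
        # JSON merge patch only touches the given annotation key.
        patch = {'metadata': {'annotations': {VIF_KEY: value}}}
        requests.patch(pod_url(namespace, name), data=json.dumps(patch),
                       headers={'Content-Type': 'application/merge-patch+json'})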
14:44:16 <apuimedo> anything else on kuryr-kubernetes?
14:46:15 <apuimedo> alright then
14:46:20 <apuimedo> #topic general?
14:46:25 <apuimedo> lol
14:46:34 <apuimedo> I went and typed a question mark
14:46:38 <apuimedo> #topic General!
14:46:43 <apuimedo> now it is better
14:46:48 <apuimedo> anything else from anybody?
14:47:01 <apuimedo> I still hope for vtg session proposals
14:47:02 <kzaitsev_ws> seems like no =)
14:49:59 <apuimedo> #endmeeting