14:03:13 #startmeeting kuryr
14:03:13 Meeting started Mon Aug 14 14:03:13 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:17 The meeting name has been set to 'kuryr'
14:03:22 Hi everybody
14:03:38 Welcome! who's here today for the weekly IRC meeting?
14:03:43 hi
14:03:56 o/
14:04:26 o/
14:05:24 kzaitsev_ws: janonymous: are you here by chance?
14:05:49 alright, let's get started
14:05:49 tenatively
14:06:03 kzaitsev_ws: is that tentatively or tenaciously?
14:06:04 xD
14:06:07 anyway
14:06:13 #topic kuryr-libnetwork
14:06:38 limao: we should make a libnetwork release and branch
14:06:58 (=
14:07:01 for that we should have the rally tests working, so I'm very happy to see your patch in that direction
14:07:15 https://review.openstack.org/#/c/484555/
14:07:21 ah, right
14:07:25 forgot to paste the link
14:07:33 #link https://review.openstack.org/#/c/484555/
14:07:56 #link https://review.openstack.org/#/c/487476/
14:07:58 apuimedo: this is Updated from global requirements
14:08:11 limao: yes, yes. I saw
14:08:13 I merged it
14:08:21 ;-)
14:08:24 limao: do we have anything else apart the rally fix?
14:08:50 otherwise we can probably create the openstack/release patch and put it as depending on the rally fix
14:09:35 nothing else depend on I think
14:09:43 perfect
14:09:50 thanks limao!
14:10:00 does anybody have anything else on kuryr-libnetwork?
14:10:40 nope
14:11:36 alright
14:11:44 #topic fuxi
14:12:03 irenab: I saw you caught up with quite a few reviews on both fuxi and fuxi-kubernetes
14:12:09 hongbin: are you here?
14:12:18 apuimedo: o/
14:12:24 hi,guys
14:12:40 i think zengchen1 can cover the fuxi-kubernetes part
14:13:08 one note from me is that i have submitted a release request to cut the stable/pike branch for fuxi
14:13:27 ah, I didn't see zengchen1 in the channel :P
14:13:34 hongbin: perfect
14:13:42 last week, we discussed the design of watch framework. i think we have not understood the mechanism of list+watch.
14:13:51 #link https://review.openstack.org/#/c/491215/
14:14:21 ah, you went faster than me
14:14:27 I was searching for the link now :P
14:15:34 this week i will continue to study the pattern of list + watch and hope to find the principle of list + watch.
14:15:52 zengchen1: I would rephrase that as we understand how it is done list+watch but we do not have the full picture as to why that is better than watch with resourceversion=0
14:16:33 is this part of the k8s client related work?
14:16:44 apuimedo zengchen1 doesn't watch even without 'resourceversion=0' return an initial state on first request anyway?
14:17:00 irenab: fuxi-kubernetes is translating client-golang into python for its own use
14:17:04 (by initial i mean most current)
14:17:14 ivc: not afaik
14:17:25 unless it defaults to 0
14:17:35 which I don't recall if it does
14:17:36 thats what i mean
14:17:49 apuimedo: seems can be reused for kuryr-kubernetes as well. Not sure why need separate klient for fuxi and kuryr
14:18:11 iirc if you specify resourceversion=something you only get events after that 'something'
14:18:27 zengchen1: I think that irenab will +2 once you put a link to a blueprint https://review.openstack.org/#/c/476839/8
14:18:42 but if you omit it you get 1 initial event anyway
14:18:52 irenab: my position is to help this move forward
14:18:56 apuimedo:ok, i will update
14:19:21 and if we see it behaves better, to try and contribute it to upstream client-python and/or adopt it in kuryr-kubernetes
14:19:39 apuimedo:+1
14:20:03 apuimedo: ok. I think that eventually it should not be duplicated effort
14:20:21 zengchen1: is https://review.openstack.org/#/c/489138/ ready for review? I see it posted but nobody added as reviewers
14:20:53 ivc:you have good understanding about list+watch. could you give more details about that.
14:21:37 zengchen1 we can discuss on #openstack-kuryr or in private messages after meeting
14:21:56 ivc:ok
14:22:19 ivc: I reconsidered my earlier position on using resourceversion=0 since the k8s devs want to drop that special meaning for v2
14:22:27 apuimedo:it may still need update.
14:22:30 so it is sorta deprecated
14:22:36 zengchen1: what may need update?
14:22:46 https://review.openstack.org/#/c/489138/
14:22:53 ah, perfec
14:22:56 *perfect
14:22:58 thanks zengchen1
14:23:06 anything else on fuxi-kubernetes land?
14:23:28 apuimedo:first i should merge the watch framework, then update that patches.
14:24:23 understood
14:24:54 zengchen1: ivc: if possible, hold the discussion in #openstack-kuryr, so I can watch it and maybe chip in
14:25:00 thanks zengchen1
14:25:04 moving on
14:25:10 #topic kuryr-kubernetes
14:25:40 apuimedo:ok
14:25:52 #info last week I discovered a devstack bug when using Octavia. I will file it soon, but in the meantime, I fixed the documentaiton
14:26:06 s/fixed/created without the bug/
14:26:14 https://review.openstack.org/#/c/492959/
14:26:17 #link https://review.openstack.org/#/c/492959/
14:26:40 #info I was working on ipv6 and found that we had a couple of bugs for ipv6 pod networking.
14:26:40 apuimedo: so devstack is now having some problem?
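The list+watch mechanism debated in the fuxi-kubernetes topic above can be illustrated with a minimal, stdlib-only simulation. The class and names here (`EventLog`, `record`, etc.) are hypothetical stand-ins for the Kubernetes API server; the sketch only demonstrates the principle zengchen1 is studying: LIST returns a consistent snapshot plus the resourceVersion it reflects, and a WATCH started from that version delivers exactly the later events, with no gap and no replay.

```python
# Stdlib-only sketch of the Kubernetes list+watch pattern.
# EventLog is a made-up stand-in for the API server; a real client
# issues LIST and WATCH HTTP requests instead.

class EventLog:
    """Mimics the API server: an ordered, versioned log of events."""

    def __init__(self):
        self.events = []   # (resource_version, event_type, obj)
        self.version = 0

    def record(self, event_type, obj):
        self.version += 1
        self.events.append((self.version, event_type, obj))

    def list(self):
        """LIST: return the current snapshot and its resourceVersion."""
        state = {}
        for _, event_type, obj in self.events:
            if event_type == "DELETED":
                state.pop(obj, None)
            else:
                state[obj] = event_type
        return set(state), self.version

    def watch(self, resource_version):
        """WATCH: yield only events newer than resource_version."""
        for version, event_type, obj in self.events:
            if version > resource_version:
                yield event_type, obj


log = EventLog()
log.record("ADDED", "pod-a")
log.record("ADDED", "pod-b")
log.record("DELETED", "pod-a")

# Step 1: LIST gives a consistent snapshot plus the version it reflects.
snapshot, rv = log.list()        # {"pod-b"}, rv == 3

# Step 2: events recorded after the LIST...
log.record("ADDED", "pod-c")

# ...are exactly what WATCH from that resourceVersion delivers.
updates = list(log.watch(rv))    # [("ADDED", "pod-c")]
```

The distinction apuimedo raises (list-then-watch versus a bare watch with resourceversion=0) comes down to whether the initial state arrives as one consistent snapshot or as a replay of synthetic ADDED events; the combination above guarantees nothing is missed between the snapshot and the stream.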
14:26:50 #link https://review.openstack.org/#/c/493267/1
14:26:58 irenab: you're unlikely to face it
14:27:06 due to it's nature
14:27:10 s/it's/its/
14:27:54 irenab: the bug was that if you do not "split" the service subnet between what is in the neutron subnet allocation start/end and what is out
14:28:20 you could get an address taken by an octavia lb vrrp port that then is claimed by kubernetes as a serviceIP
14:28:28 and we'd fail to allocate the clusterip that k8s assigned
14:28:43 so, if your subnet is big, you may not run into it for a while
14:28:52 anyway, the documentation explains it
14:28:58 got it
14:29:00 and devstack will be updated soon
14:29:10 so I think I only need ot create the bug entry :P
14:29:18 (which should have been the first, but I'm terrible)
14:29:26 I did file two bugs with Octavia
14:30:04 #link https://bugs.launchpad.net/octavia/+bug/1709922
14:30:06 Launchpad bug 1709922 in octavia "Octavia fails to create a loadbalancer when just getting the VIP port" [Critical,Fix released] - Assigned to Michael Johnson (johnsom)
14:30:53 o/ sry got late :)
14:31:00 the other one was a duplicated that basically octavia would replace the clusterip with one of the allocation range
14:31:06 because it was not saving it well
14:31:16 but it got fixed the night before I reported it
14:31:17 :P
14:31:37 Regarding IPv6
14:32:00 with my WIP patch pod networking works
14:32:06 and shows up well in kubernetes
14:32:19 but I need to finish documenting and verifying the services to work
14:32:38 I had a mistake there that I was trying to give k8s a /64 for the clusterip range
14:32:58 but it doesn't want more than 2²⁰ addrs
14:33:04 so I am changing now to a /112
14:33:10 and that seems to be accepted
14:33:15 apuimedo: k8s limit?
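The "split" apuimedo describes above can be sketched with the stdlib `ipaddress` module. The CIDRs here are made-up examples, not the values from the patch: the idea is that the Neutron allocation pool and the Kubernetes service range cover disjoint halves of the subnet, so a port Neutron allocates on its own (such as an Octavia VRRP port) can never take an address that Kubernetes later hands out as a ClusterIP.

```python
import ipaddress

# Hypothetical example values; the real CIDRs come from the devstack/docs patch.
service_subnet = ipaddress.ip_network("10.0.0.0/24")

# Split the service subnet into two halves.
lower, upper = service_subnet.subnets(prefixlen_diff=1)

# Neutron's allocation pool covers only the lower half (minus the network
# and gateway addresses), so ports Neutron creates on its own -- e.g. an
# Octavia load balancer's VRRP port -- always land in the lower half.
allocation_pool = (lower.network_address + 2, lower.broadcast_address - 1)

# Kubernetes gets the upper half as its ClusterIP range
# (kube-apiserver's --service-cluster-ip-range).
cluster_ip_range = upper

# Even the first pool address can never collide with a ClusterIP.
vrrp_port_ip = allocation_pool[0]
assert vrrp_port_ip not in cluster_ip_range
```

Without this split the two allocators draw from the same pool independently, which is exactly the race described above: an Octavia VRRP port grabs an address, and the ClusterIP k8s already assigned can no longer be allocated in Neutron.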
14:33:15 will probably update later today
14:33:21 irenab: yes, hardcoded limit
14:33:26 a bit strange, if you ask me
14:33:28 but whatever
14:33:45 I saw some other SDNs allocating /112 for node addresses so I tried my luck
14:33:53 and then I found out about the 20 bit mask limitation
14:34:08 I also have to check which the minimum version is
14:36:38 kzaitsev_ws: any update on the cni multi vif?
14:36:44 I saw there's a new patch set
14:37:00 not really. I've only updated the 1st patch in the queue
14:37:09 the one that changes single vif into a dict
14:37:38 kzaitsev_ws: I wonder, can it be manually tested by crafting a pod with vif annotation already created?
14:37:43 still gotta work on the driver configuration thing we agreed to with irenab
14:37:47 if so, it would be good to put in the commit message
14:38:30 apuimedo: the patch is self-contained. e.g. if you deploy kuryr-k8s with this patch it just starts using dicts instead of vif-objects
14:38:55 no special input needed =)
14:38:56 kzaitsev_ws: but it does not give multiple vifs ;P
14:39:00 I meant trying that aspect
14:39:05 ah
14:39:09 right
14:40:07 well, you can always copy-paste the code and add like 2 lines to the controller, that would request eth1 from the same subnet ;)
14:40:50 would probably be way simpler than pasting/crafting the vif object..
14:40:54 ok
14:41:04 kzaitsev_ws: I was planning on starting two pods
14:41:10 then delete the second
14:41:20 and copy both vifs
14:41:24 delete the first
14:41:29 create the ports manually
14:41:39 and then create the pod with both annotations
14:41:53 that should work too )
14:42:14 except that you would have hard time managing active-ness
14:42:26 probably..
14:43:10 you would need to mark both thngs active and I guess it would work then )
14:43:32 kzaitsev_ws: it's something I've wanted to check for some time
14:43:49 I think the controller should check for their activity as well
14:44:16 anything else on kuryr-kubernetes?
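The hardcoded limit apuimedo and irenab mention earlier in this topic (kube-apiserver refusing a service ClusterIP range with more than 2²⁰ addresses, the "20 bit mask limitation") is easy to check with the stdlib `ipaddress` module. The ULA prefixes below are illustrative values, not the ones from the WIP patch:

```python
import ipaddress

# The cap discussed above: the k8s service CIDR may not exceed 2**20 addresses.
K8S_MAX_SERVICE_RANGE = 2 ** 20

attempted = ipaddress.ip_network("fd00:abcd::/64")   # what was tried first
accepted = ipaddress.ip_network("fd00:abcd::/112")   # what k8s accepts

assert attempted.num_addresses > K8S_MAX_SERVICE_RANGE   # 2**64, far over the cap
assert accepted.num_addresses <= K8S_MAX_SERVICE_RANGE   # 2**16 == 65536, fits
```

A /64 carries 2⁶⁴ addresses, astronomically over the cap, while a /112 leaves 16 host bits (65,536 addresses), which is why switching the ClusterIP range to a /112 was accepted.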
14:46:15 alright then
14:46:20 #topic general?
14:46:25 lol
14:46:34 I went and typed a question mark
14:46:38 #topic General!
14:46:43 now it is better
14:46:48 anything else from anybody?
14:47:01 I still hope for vtg session proposals
14:47:02 seems like no =)
14:49:59 #endmeeting