14:00:25 #startmeeting kuryr
14:00:25 Meeting started Mon Jul 24 14:00:25 2017 UTC and is due to finish in 60 minutes. The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:28 The meeting name has been set to 'kuryr'
14:00:36 Hi kuryrs, who's around here today? ;)
14:00:43 o/
14:00:45 o/
14:00:55 o/
14:02:26 all right, let's start ;)
14:02:37 #topic kuryr-kubernetes
14:02:51 Does anyone have anything to say on the topic?
14:03:08 yes
14:03:13 From my side I'd love to have people go and try to catch up on reviews
14:03:14 just a quick update
14:03:16 https://review.openstack.org/#/q/project:openstack/kuryr-kubernetes+status:open
14:03:20 please!
14:03:24 garyloug: go ahead!
14:03:25 hi, sorry for being late
14:03:29 hi irenab ;)
14:03:52 we are currently updating the document regarding CRD that we promised last week
14:04:32 garyloug: ack, awesome. Do you need any help on that?
14:04:44 o/, sorry, late
14:04:48 o/
14:04:51 o/ hey janonymous ;)
14:04:59 o/ :)
14:05:17 apuimedo: we were covering the kuryr-kubernetes side
14:05:17 dmellado: you can go on ;-)
14:05:18 Maybe some review if possible - Kural will have it finished in a few minutes
14:05:22 do you have any topic on that?
14:05:22 ah, perfect
14:05:32 well, now that janonymous is here, I do
14:05:34 :-)
14:05:46 janonymous has been working on splitting CNI
14:05:46 I wanted to ask him about the devstack patch (which I'll have to review too!)
14:05:58 :D
14:06:14 janonymous: could you summarize the current status?
14:06:19 and we got stuck with eventlet/pyroute2 issues
14:06:33 oh, true! /me sighs
14:07:17 yeah..
14:07:42 any details?
14:07:43 janonymous: changing to the threaded unix server for the CNI part fixed the issues, right?
14:07:56 my son stepped on the cable
14:07:59 lol
14:08:05 what is the last thing you read from me?
14:08:10 I need to drop.. I may be back before the meeting is finished..
14:08:18 that you got stuck with pyroute2 issues
14:08:19 apuimedo: yes
14:08:22 garyloug: alright, thanks
14:08:24 pls go ahead apuimedo
14:08:24 ah right
14:08:32 janonymous: so I was saying
14:08:37 janonymous: changing to the threaded unix server for the CNI part fixed the issues, right?
14:09:10 apuimedo: yes, but now the command is run through the cmd/ dir, which has the eventlet patch
14:09:24 what do you mean?
14:09:30 which command?
14:09:37 apuimedo: to run kuryr-daemon
14:10:08 apuimedo: but that might not be a very big issue
14:10:26 janonymous: so what's the issue with that
14:10:29 does it import eventlet?
14:10:33 apuimedo: yes
14:10:36 janonymous: that's fine
14:10:46 dmellado: yes..
14:10:57 dmellado: I suspect the issue was that janonymous was using the non-threaded unix server
14:11:08 so if there was more than one call to the unix domain socket
14:11:09 BOOM
14:11:13 hmmm I see
14:11:27 because the UnixStreamServer class in the socketserver module (SocketServer in py2)
14:11:37 explicitly states that it serves one request at a time only
14:12:06 apuimedo: one more thing, is there a limit on access/connections to the unix socket?
14:12:36 janonymous: I assume there may be a sysctl-configurable param
14:12:41 but I haven't checked
14:12:48 cool!
14:13:02 apuimedo: I first thought of making Watch use a thread pool, like the controller does to watch events..
14:13:06 apuimedo: janonymous: link to patch?
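For reference, the "one request at a time" behaviour mentioned above is that of the plain socketserver.UnixStreamServer; mixing in ThreadingMixIn (or using socketserver.ThreadingUnixStreamServer directly on Python 3) serves each connection in its own thread. The sketch below only illustrates that difference and is not kuryr's actual CNI daemon code; the socket path and handler names are made up.

```python
# Minimal sketch of the threaded vs. non-threaded unix server difference
# discussed above. Illustrative only: SOCK_PATH and CNIHandler are not
# kuryr names, and the real kuryr-daemon dispatch logic is omitted.
import socketserver

SOCK_PATH = '/run/kuryr/cni.sock'  # hypothetical socket path


class CNIHandler(socketserver.StreamRequestHandler):
    def handle(self):
        request = self.rfile.readline()   # read one CNI request
        # ... dispatch ADD/DEL to the plugin logic here ...
        self.wfile.write(b'{}\n')         # reply to the caller


class ThreadedUnixServer(socketserver.ThreadingMixIn,
                         socketserver.UnixStreamServer):
    """Handles each connection in its own thread, so concurrent CNI
    calls on the same socket no longer block each other."""
    daemon_threads = True


if __name__ == '__main__':
    # A plain socketserver.UnixStreamServer(SOCK_PATH, CNIHandler) would
    # serialize requests, which is the "BOOM" scenario described above.
    server = ThreadedUnixServer(SOCK_PATH, CNIHandler)
    server.serve_forever()
```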
14:13:37 https://review.openstack.org/#/c/480028/
14:14:01 #link https://review.openstack.org/#/c/480028/
14:14:08 janonymous: you can still have threading for the watching, can't you?
14:14:36 apuimedo: yeah, but I would keep that experimental for now :)
14:15:00 heh, sounds like a safe approach for now
14:15:11 in any case IMHO that's way better than the other options we were discussing the other day, apuimedo
14:15:32 janonymous: ok
14:15:38 dmellado: indeed
14:15:44 dmellado: that's better
14:16:08 I wonder if we may end up having to go with the mitigation of running pyroute2 in privsep fork mode anyway
14:16:13 but let's go step by step
14:16:26 * dmellado trembles when he hears privsep...
14:16:47 apuimedo: yup, agree, I pasted the serialization error in the channel with that
14:17:09 janonymous: on the other side, and totally low-hanging fruit, any progress with the screen devstack patch? ;)
14:17:24 janonymous: can you paste it again?
14:17:38 dmellado: janonymous: that devstack patch should not use --detach
14:17:52 otherwise nothing will be visible on journalctl
14:18:05 http://paste.openstack.org/show/616262/
14:18:41 dmellado: yeah, we need to find a way to use logs
14:18:47 huh
14:19:04 I checked in devstack, there are no pre/post exec sections to run multiple commands
14:19:11 janonymous: with the threaded unix server, does it work without privsep?
14:19:23 apuimedo: yes
14:20:47 janonymous: so let's keep privsep for the future then
14:21:17 On other topics, I've been testing with Octavia instead of neutron-lbaasv2-haproxy
14:21:17 apuimedo: sure, thanks for your great help :)
14:21:26 janonymous: thanks to you for the hard work!
14:21:38 thanks janonymous ;)
14:21:43 so far I'm stuck on a bug somewhere that causes funny behavior
14:21:52 the one about the ports?
14:22:05 ltomasbo: in that patch I sent to split the service subnet into a new network
14:22:10 for neutron-lbaasv2 it works
14:22:14 (you can see the gate)
14:22:29 but for octavia it ends up that the subnet doesn't have allocation pools
14:22:36 and loadbalancer creation fails
14:22:45 and you can't even create ports in the subnet anymore
14:22:48 the funny thing is
14:22:50 if after that
14:22:50 heh
14:22:55 you create another network and subnet
14:23:04 (from the same v4 allocation pool even)
14:23:09 then it works?
14:23:11 and then you create a loadbalancer
14:23:13 it just works
14:23:28 fscking race between octavia and neutron or something
14:23:31 hmmm apuimedo did you try to do that manually and check? it looks like some race condition
14:23:33 yeah
14:23:43 I don't rule out some other error, but it is very odd
14:23:55 the good news
14:24:05 is that once this race is gone, we'll add an octavia gate
14:24:13 and it seems that no code changes should be necessary
14:24:29 which clears the way for things like ingress controllers
14:24:45 on the gate side, I've added a patch that would trigger nova too on the tempest gate so we can have mixed pod-VM scenarios
14:24:50 feel free to review it if you have the time
14:24:58 dmellado: for all gates?!
14:25:08 apuimedo: nope
14:25:10 for the tempest gate
14:25:13 dmellado: link?
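The Octavia symptom described above (a service subnet with no allocation pools, so the VIP port and load balancer cannot be created) can be checked by hand. The sketch below is a rough illustration using openstacksdk, not the kuryr gate code; the cloud entry, subnet name and load balancer name are all invented for the example.

```python
# Rough sketch (assumptions only, not kuryr code): inspect the service
# subnet's allocation pools before asking Octavia for a load balancer.
import openstack

conn = openstack.connect(cloud='devstack-admin')          # hypothetical cloud entry

subnet = conn.network.find_subnet('k8s-service-subnet')   # illustrative name
if subnet is None or not subnet.allocation_pools:
    # This is the broken state discussed above: without allocation
    # pools Octavia cannot allocate a VIP port, so LB creation fails.
    print('service subnet missing or has no allocation pools')
else:
    lb = conn.load_balancer.create_load_balancer(
        name='demo-lb', vip_subnet_id=subnet.id)
    print('load balancer %s provisioning_status: %s'
          % (lb.id, lb.provisioning_status))
```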
14:25:13 read up xD
14:25:16 https://review.openstack.org/#/c/486525/
14:25:20 #link https://review.openstack.org/#/c/486525/
14:25:54 thanks in advance, irenab
14:26:09 dmellado: apuimedo: there are a few patches you started that require some updates, please check your queues
14:26:21 irenab: you're very right
14:26:25 things are getting stale
14:26:40 yep, that's why I also sent a reminder for everyone to pls review patches
14:26:49 so we don't get stuck before going on holidays ;)
14:27:13 but you're totally right irenab
14:27:33 dmellado: maybe we need to have separate gates for VMs+containers and containers only
14:27:38 #link https://review.openstack.org/#/c/484754
14:27:40 I'd really like to get the network addon patch in this week
14:27:42 irenab: do you think so?
14:27:56 I don't think it would change things that much
14:27:57 dmellado: ^^ patch link for you :D
14:28:00 dmellado: two gates is better than one
14:28:02 :-)
14:28:06 maybe what we could do is add a flag
14:28:18 to run different kinds of tests, once added
14:28:18 at least with devstack, sometimes having nova can shadow issues for the case where you only have neutron + keystone
14:28:50 janonymous: thanks! ;)
14:29:06 but from the deployment point of view, both may be real deployment options
14:29:09 irenab: I see your point. Well, I don't think adding a non-nova gate would hurt at all ;)
14:29:21 thanks
14:30:45 as for the job config, we need to add devstack-container-plugin to make this patch pass the gate: https://review.openstack.org/#/c/474238/
14:34:40 irenab: I sent https://review.openstack.org/#/c/480983/ but it seems I was wrong
14:35:42 apuimedo: hmm.. I think both are needed
14:36:05 http://logs.openstack.org/38/474238/6/check/gate-install-dsvm-default-kuryr-kubernetes/d84d775/logs/devstacklog.txt.gz
14:36:18 the log mentions adding the plugin to the projects
14:38:32 do we have anything else?
14:38:42 dmellado: maybe you can help me with this crap after the meeting
14:38:57 apuimedo: sure, let's try to put this back in shape
14:39:12 should we go for the next topic
14:39:25 if there's anything on kuryr-libnetwork, though
14:39:40 fuxi?
14:39:48 #topic fuxi
14:41:46 hi, about fuxi-k8s, there is not much change. There are several patches for the Flexvolume driver; they need more review. I am working on the provisioner component, which watches PVCs/PVs and creates/deletes PVs for K8s.
14:42:15 zengchen1: cool. Will review this week
14:42:22 I think it will take me more time to design and code the provisioner.
14:42:57 apuimedo: ok, thanks.
14:42:58 most likely
14:43:08 i have one question.
14:44:17 there is an available k8s python client; why does kuryr not use it?
14:44:46 zengchen1: there is a patch from janonymous to use it
14:44:52 we have to test it and merge it
14:45:43 ok, got it. I also did some tests and found some bugs in it.
14:46:48 apuimedo: zengchen1: it is nearly complete, but I haven't had much time these days to revisit it..
14:47:07 zengchen1: please feel free to push a patch to it, or report it :)
14:47:19 janonymous: ok, i will.
14:47:29 apuimedo: I'll need to go afk for some minutes, could you please run the remaining meeting?
14:48:24 #chair apuimedo
14:48:25 Current chairs: apuimedo dmellado
14:48:35 janonymous: could you please give me the link to it? I think fuxi-k8s will use it too.
14:49:15 #link https://review.openstack.org/#/c/454555/
14:49:18 zengchen1: ^^
14:49:39 apuimedo: thanks.
14:50:32 apuimedo: i think that
14:50:43 apuimedo: i think that is all for me.
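Since the provisioner discussed above watches PVCs/PVs and the upstream k8s python client came up in the same exchange, a very rough sketch of that kind of watch loop follows. It is an assumption of how such a loop could look, not fuxi-k8s code, and the provisioning/cleanup steps are placeholders; the actual design may differ.

```python
# Hedged sketch of a PVC watch loop with the upstream kubernetes python
# client; the provisioning and cleanup steps are placeholders, not fuxi-k8s.
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_persistent_volume_claim_for_all_namespaces):
    pvc = event['object']
    if event['type'] == 'ADDED' and pvc.status.phase == 'Pending':
        # A real provisioner would create the backing volume (e.g. in
        # Cinder) and a PersistentVolume bound to this claim here.
        print('would provision a PV for claim %s/%s'
              % (pvc.metadata.namespace, pvc.metadata.name))
    elif event['type'] == 'DELETED':
        # ... and delete the PV / backing volume when the claim goes away.
        print('would clean up the PV for claim %s' % pvc.metadata.name)
```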
14:51:08 thanks zengchen1
14:51:14 #topic general
14:51:19 Anything else from anybody?
14:51:23 oh, yes, from me
14:51:48 #info please send me suggestions for vtg sessions at asegurap@redhat.com or in irc
14:51:57 I'll make an etherpad to vote then
14:52:32 ok
14:52:45 +1
14:54:11 thanks
14:54:14 anything else?
14:56:39 alright then
14:56:41 closing
14:56:45 thank you all for joining
14:56:47 #endmeeting