14:00:25 <dmellado> #startmeeting kuryr
14:00:25 <openstack> Meeting started Mon Jul 24 14:00:25 2017 UTC and is due to finish in 60 minutes.  The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:28 <openstack> The meeting name has been set to 'kuryr'
14:00:36 <dmellado> Hi kuryrs, who's around here today? ;)
14:00:43 <garyloug> o/
14:00:45 <limao> o/
14:00:55 <ltomasbo> o/
14:02:26 <dmellado> all right, let's start ;)
14:02:37 <dmellado> #topic kuryr-kubernetes
14:02:51 <dmellado> Has anyone anything to say on the topic?
14:03:08 <garyloug> yes
14:03:13 <dmellado> From my side I'd love to have people go and try to catch up on reviews
14:03:14 <garyloug> just quick update
14:03:16 <dmellado> https://review.openstack.org/#/q/project:openstack/kuryr-kubernetes+status:open
14:03:20 <dmellado> please!
14:03:24 <dmellado> garyloug: go ahead!
14:03:25 <irenab> hi, sorry for being late
14:03:29 <dmellado> hi irenab ;)
14:03:52 <garyloug> we are currently updating the document regarding CRD that we promised last week
14:04:32 <dmellado> garyloug: ack, awesome. Do you need any help on that?
14:04:44 <janonymous> o/, sory late
14:04:48 <apuimedo> o/
14:04:51 <dmellado> o/ hey janonymous ;)
14:04:59 <janonymous> o/ :)
14:05:17 <dmellado> apuimedo: we were covering kuryr-kubernetes side
14:05:17 <apuimedo> dmellado: you can go on ;-)
14:05:18 <garyloug> Maybe some review if possible - Kural will have it finished in a few minutes
14:05:22 <dmellado> do you have any topic on that?
14:05:22 <apuimedo> ah, perfect
14:05:32 <apuimedo> well, now that janonymous is here, I do
14:05:34 <apuimedo> :-)
14:05:46 <apuimedo> janonymous has been working on splitting CNI
14:05:46 <dmellado> I wanted to ask him about the devstack patch (which I'll have to review too!)
14:05:58 <janonymous> :D
14:06:14 <dmellado> janonymous: could you summarize the current status?
14:06:19 <apuimedo> and we got stuck with eventlet/pyroute2 issues
14:06:33 <dmellado> oh, true! /me sighs
14:07:17 <janonymous> yeah..
14:07:42 <irenab> any details?
14:07:43 <apuimedo> janonymous: changing to the threaded unix server for the CNI part fixed the issues, right?
14:07:56 <apuimedo> my son stepped on the cable
14:07:59 <dmellado> lol
14:08:05 <apuimedo> what is the last thing you read from me?
14:08:10 <garyloug> I need to drop.. I may be back before the meeting is finished..
14:08:18 <dmellado> that you got stuck with pyroute2 issues
14:08:19 <janonymous> apuimedo: yes
14:08:22 <apuimedo> garyloug: alright, thanks
14:08:24 <dmellado> pls go ahead apuimedo
14:08:24 <apuimedo> ah right
14:08:32 <apuimedo> janonymous: so I was saying
14:08:37 <apuimedo> janonymous: changing to the threaded unix server for the CNI part fixed the issues, right?
14:09:10 <janonymous> apuimedo: yes, but now the command is run through the cmd/ dir, which has the eventlet monkey patch
14:09:24 <apuimedo> what do you mean?
14:09:30 <apuimedo> which command?
14:09:37 <janonymous> apuimedo: to run kuryr-daemon
14:10:08 <janonymous> apuimedo: but that might not be a very big issue
14:10:26 <dmellado> janonymous: so what's the issue on that
14:10:29 <dmellado> does it import eventlet?
14:10:33 <janonymous> apuimedo:  yes
14:10:36 <apuimedo> janonymous: that's fine
14:10:46 <janonymous> dmellado: yes..
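Background on the cmd/ point above: oslo-style console-script modules typically monkey-patch the stdlib with eventlet at import time, roughly as in this illustrative sketch (not the exact kuryr code), which is why a daemon launched through them still runs on green threads:

    # Illustrative entry-point module under cmd/ (assumed shape)
    import eventlet
    eventlet.monkey_patch()  # swaps socket, threading, time, etc. for green versions

    def main():
        # Everything imported or started after this point sees the patched
        # stdlib, including any "threaded" unix server and pyroute2.
        ...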
14:10:57 <apuimedo> dmellado: I suspect the issue was that janonymous was using the non-threaded unix server
14:11:08 <apuimedo> so if there was more than one call to the unix domain socket
14:11:09 <apuimedo> BOOM
14:11:13 <dmellado> hmmm I see
14:11:27 <apuimedo> because the UnixStreamServer class in the socketserver module (SocketServer in py2)
14:11:37 <apuimedo> explicitly states that it handles only one request at a time
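A minimal sketch of the fix being described, assuming Python 3's socketserver (SocketServer on py2): the plain UnixStreamServer serves one request at a time, while mixing in ThreadingMixIn handles each connection in its own thread. The socket path and handler are illustrative, not kuryr's actual ones:

    import socketserver

    class ThreadedUnixServer(socketserver.ThreadingMixIn,
                             socketserver.UnixStreamServer):
        """Serves each request in its own thread instead of serially."""

    class CNIHandler(socketserver.BaseRequestHandler):
        def handle(self):
            request = self.request.recv(4096)   # read the incoming CNI call
            self.request.sendall(request)       # placeholder echo reply

    server = ThreadedUnixServer('/run/kuryr/cni.sock', CNIHandler)  # hypothetical path
    server.serve_forever()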
14:12:06 <janonymous> apuimedo: one more thing, is there a limit on access/connections to the unix socket?
14:12:36 <apuimedo> janonymous: I assume there may be a sysctl-configurable param
14:12:41 <apuimedo> but I haven't checked
14:12:48 <janonymous> cool!
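On the connection-limit question: socketserver sizes its listen() backlog from the request_queue_size class attribute, and the kernel clamps the effective backlog to the net.core.somaxconn sysctl, so both can matter. A small illustrative tweak:

    class ThreadedUnixServer(socketserver.ThreadingMixIn,
                             socketserver.UnixStreamServer):
        request_queue_size = 128  # listen() backlog hint; kernel caps it at net.core.somaxconn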
14:13:02 <janonymous> apuimedo: i first thought to make the Watch run with a thread pool, like the controller does to watch events..
14:13:06 <irenab> apuimedo: janonymous : link to patch?
14:13:37 <apuimedo> https://review.openstack.org/#/c/480028/
14:14:01 <dmellado> #link https://review.openstack.org/#/c/480028/
14:14:08 <apuimedo> janonymous: you can still have threading for the watching, can't you?
14:14:36 <janonymous> apuimedo: yea, but i would keep that experimental for now :)
14:15:00 <dmellado> heh, sounds like a safe approach for now
14:15:11 <dmellado> in any case IMHO that's way better than the other options we were discussing the other day, apuimedo
14:15:32 <apuimedo> janonymous: ok
14:15:38 <apuimedo> dmellado: indeed
14:15:44 <apuimedo> dmellado: that's better
14:16:08 <apuimedo> I wonder if we may end up having to go with the mitigation of running pyroute2 in privsep fork mode anyway
14:16:13 <apuimedo> but let's go step by step
14:16:26 * dmellado trembles when he hears privsep...
14:16:47 <janonymous> apuimedo: yup, agree, i pasted the serialization error in the channel with that
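For reference, a minimal sketch of the privsep fork-mode mitigation mentioned above, using oslo.privsep the way other OpenStack projects wrap pyroute2; the context name and capability set are assumptions, not kuryr's actual config. Note that privsep pickles arguments across the fork, which is the kind of serialization error that surfaces when non-serializable objects are passed:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context
    import pyroute2

    # Privileged context: functions decorated below run in a forked
    # helper that retains only CAP_NET_ADMIN.
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def set_link_up(ifname):
        # Arguments must be picklable, since they cross the fork boundary.
        with pyroute2.IPRoute() as ip:
            idx = ip.link_lookup(ifname=ifname)[0]
            ip.link('set', index=idx, state='up')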
14:17:09 <dmellado> janonymous: on the other side, and totally low-hanging-fruit, any progress with the screen devstack patch? ;)
14:17:24 <apuimedo> janonymous: can you paste again?
14:17:38 <apuimedo> dmellado: janonymous that devstack patch should not use --detach
14:17:52 <apuimedo> otherwise nothing will be visible on journalctl
14:18:05 <janonymous> http://paste.openstack.org/show/616262/
14:18:41 <janonymous> dmellado: yeah, we need to find a way to use logs
14:18:47 <dmellado> huh
14:19:04 <janonymous> i checked in devstack, there are no pre/post exec sections to run multiple commands
14:19:11 <apuimedo> janonymous: with the threaded unix server, does it work without privsep?
14:19:23 <janonymous> apuimedo: yes
14:20:47 <apuimedo> janonymous: so let's keep privsep for the future then
14:21:17 <apuimedo> On other topics, I've been testing with Octavia instead of neutron-lbaasv2-haproxy
14:21:17 <janonymous> apuimedo: sure, thanks for your great help :)
14:21:26 <apuimedo> janonymous: thanks to you for the hard work!
14:21:38 <dmellado> thanks janonymous ;)
14:21:43 <apuimedo> so far I'm stuck on a bug somewhere that causes funny behavior
14:21:52 <dmellado> the one about the ports?
14:22:05 <apuimedo> ltomasbo: in that patch I sent to split the service subnet into a new network
14:22:10 <apuimedo> for neutron-lbaasv2 it works
14:22:14 <apuimedo> (you can see the gate)
14:22:29 <apuimedo> but for octavia it ends up that the subnet doesn't have allocation pools
14:22:36 <apuimedo> and loadbalancer creation fails
14:22:45 <apuimedo> and you can't even create ports in the subnet anymore
14:22:48 <apuimedo> the funny thing is
14:22:50 <apuimedo> if after that
14:22:50 <dmellado> heh
14:22:55 <apuimedo> you create another network and subnet
14:23:04 <apuimedo> (from the same v4 allocation pool even)
14:23:09 <dmellado> then it works?
14:23:11 <apuimedo> and then you create a loadbalancer
14:23:13 <apuimedo> it just works
14:23:28 <apuimedo> fscking race between octavia and neutron or something
14:23:31 <dmellado> hmmm apuimedo did you try to do that manually and check? it looks like some race condition
14:23:33 <dmellado> yeah
14:23:43 <apuimedo> I don't rule out some other error, but it is very odd
14:23:55 <apuimedo> the good news
14:24:05 <apuimedo> is that once this race is gone, we'll add octavia gate
14:24:13 <apuimedo> and it seems that no code changes should be necessary
14:24:29 <apuimedo> which clears the way for things like ingress controllers
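To make the failure mode concrete: the service subnet has to carry usable allocation pools, since Octavia allocates its VIP port from them. A hedged openstacksdk sketch of creating the split service network/subnet with explicit pools (names, cloud entry, and CIDRs are illustrative, not taken from the patch):

    import openstack

    conn = openstack.connect(cloud='devstack-admin')  # assumed clouds.yaml entry

    net = conn.network.create_network(name='k8s-service-net')
    subnet = conn.network.create_subnet(
        network_id=net.id,
        name='k8s-service-subnet',
        ip_version=4,
        cidr='10.0.0.64/26',
        # Without a usable pool here, Octavia cannot allocate the VIP
        # port and loadbalancer creation fails, as described above.
        allocation_pools=[{'start': '10.0.0.66', 'end': '10.0.0.126'}],
    )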
14:24:45 <dmellado> on the gate side, I've added a patch that would trigger nova too on the tempest gate so we can have mixed pod-VM scenarios
14:24:50 <dmellado> feel free to review it if you have the time
14:24:58 <apuimedo> dmellado: for all gates?!
14:25:08 <dmellado> apuimedo: nope
14:25:10 <dmellado> for tempest gate
14:25:13 <irenab> dmellado: link?
14:25:13 <dmellado> read up xD
14:25:16 <dmellado> https://review.openstack.org/#/c/486525/
14:25:20 <dmellado> #link https://review.openstack.org/#/c/486525/
14:25:54 <dmellado> thanks in advance, irenab
14:26:09 <irenab> dmellado: apuimedo : there are a few patches you started that require some updates, please check your queues
14:26:21 <apuimedo> irenab: you're very right
14:26:25 <apuimedo> things are getting stale
14:26:40 <dmellado> yep, that's why I also sent a reminder for everyone to pls review patches
14:26:49 <dmellado> so we don't get stuck before going on holidays ;)
14:27:13 <dmellado> but you're totally right irenab
14:27:33 <irenab> dmellado: maybe we need to have separate gate for VMs+containers and containers only
14:27:38 <janonymous> #link https://review.openstack.org/#/c/484754
14:27:40 <apuimedo> I'd really like to get the network addon patch in this week
14:27:42 <dmellado> irenab: do you think so?
14:27:56 <dmellado> I don't think it would change things that much
14:27:57 <janonymous> dmellado: ^^ patch link for you :D
14:28:00 <apuimedo> dmellado: two gates is better than one
14:28:02 <apuimedo> :-)
14:28:06 <dmellado> maybe what we could do is add a flag
14:28:18 <dmellado> to run different kinds of tests, once added
14:28:18 <irenab> at least with devstack, sometimes having nova can shadow issues for the case where you only have neutron + keystone
14:28:50 <dmellado> janonymous: thanks! ;)
14:29:06 <irenab> but from the deployment point of view, both may be real deployment options
14:29:09 <dmellado> irenab: I see your point. Well, I don't think adding a non-nova gate would hurt at all ;)
14:29:21 <irenab> thanks
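A hypothetical sketch of dmellado's flag idea, registering an option the way tempest plugins commonly do with oslo.config; the option and group names are invented for illustration:

    from oslo_config import cfg

    kuryr_k8s_scenario_opts = [
        cfg.BoolOpt('run_vm_scenarios',
                    default=False,
                    help='Run mixed pod/VM scenario tests '
                         '(requires nova in the deployment).'),
    ]

    def register_opts(conf):
        # Called from the plugin's config hook to expose the flag.
        conf.register_opts(kuryr_k8s_scenario_opts, group='kuryr_kubernetes')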
14:30:45 <irenab> as for the job config, we need to add devstack-container-plugin to make this patch pass the gate: https://review.openstack.org/#/c/474238/
14:34:40 <apuimedo> irenab: I sent https://review.openstack.org/#/c/480983/ but it seems I was wrong
14:35:42 <irenab> apuimedo: hmm.. I think both are needed
14:36:05 <irenab> http://logs.openstack.org/38/474238/6/check/gate-install-dsvm-default-kuryr-kubernetes/d84d775/logs/devstacklog.txt.gz
14:36:18 <irenab> the log mentions adding the plugin to the projects
14:38:32 <apuimedo> do we have anything else?
14:38:42 <apuimedo> dmellado: maybe you can help me with this crap after the meeting
14:38:57 <dmellado> apuimedo: sure, let's try to put this back in shape
14:39:12 <dmellado> should we go for the next topic?
14:39:25 <dmellado> if there's anything on kuryr-libnetwork, though
14:39:40 <irenab> fuxi?
14:39:48 <dmellado> #topic fuxi
14:41:46 <zengchen1> hi, about fuxi-k8s, there is not much change. There are several patches for the Flexvolume driver that need more review. I am working on the provisioner component, which watches PVCs/PVs and creates/deletes PVs for K8s.
14:42:15 <apuimedo> zengchen1: cool. Will review this week
14:42:22 <zengchen1> I think it will take me more time to design and code the provisioner.
14:42:57 <zengchen1> apuimedo: ok, thanks.
14:42:58 <apuimedo> most likely
14:43:08 <zengchen1> i have one question.
14:44:17 <zengchen1> there is an available k8s python client; why does kuryr not use it?
14:44:46 <apuimedo> zengchen1: there is a patch from janonymous to use it
14:44:52 <apuimedo> we have to test it and merge it
14:45:43 <zengchen1> ok, got it. i also did some tests and found some bugs in it.
14:46:48 <janonymous> apuimedo: zengchen1 : it is nearly complete, but i haven't had much time these days to revisit..
14:47:07 <janonymous> zengchen1: please feel free to push patch in it, or report it :)
14:47:19 <zengchen1> janonymous: ok, i will.
14:47:29 <dmellado> apuimedo: I'll need to go afk for a few minutes, could you please run the rest of the meeting?
14:48:24 <dmellado> #chair apuimedo
14:48:25 <openstack> Current chairs: apuimedo dmellado
14:48:35 <zengchen1> janonymous: could you please give me the link to it? I think fuxi-k8s will use it too.
14:49:15 <apuimedo> #link https://review.openstack.org/#/c/454555/
14:49:18 <apuimedo> zengchen1: ^^
14:49:39 <zengchen1> apuimedo: thanks.
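A minimal sketch, assuming the official kubernetes python client that the linked patch moves toward, of the PVC watch loop a provisioner like the one zengchen1 describes might run (the API calls are the client's real ones; the event handling is illustrative):

    from kubernetes import client, config, watch

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    # Stream PVC add/modify/delete events, as a PV provisioner would.
    w = watch.Watch()
    for event in w.stream(v1.list_persistent_volume_claim_for_all_namespaces):
        pvc = event['object']
        print(event['type'], pvc.metadata.namespace, pvc.metadata.name)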
14:50:43 <zengchen1> apuimedo: i think that is all for me.
14:51:08 <apuimedo> thanks zengchen1
14:51:14 <apuimedo> #topic general
14:51:19 <apuimedo> Anything else from anybody?
14:51:23 <apuimedo> oh, yes, from me
14:51:48 <apuimedo> #info please send me suggestions for vtg sessions at asegurap@redhat.com or in irc
14:51:57 <apuimedo> I'll make an etherpad to vote then
14:52:32 <irenab> ok
14:52:45 <janonymous> +1
14:54:11 <apuimedo> thanks
14:54:14 <apuimedo> anything else?
14:56:39 <apuimedo> alright then
14:56:41 <apuimedo> closing
14:56:45 <apuimedo> thank you all for joining
14:56:47 <apuimedo> #endmeeting