14:01:49 #startmeeting kuryr
14:01:50 Meeting started Mon Jan 9 14:01:49 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:54 The meeting name has been set to 'kuryr'
14:01:59 Hello everybody!
14:02:05 Welcome to another kuryr meeting?
14:02:10 s/?/!/
14:02:12 xD
14:02:14 hi
14:02:17 Start on the wrong foot
14:02:20 hi
14:02:22 o/
14:02:26 o/
14:02:29 o/
14:02:31 let's see how many typos I can make today
14:03:03 o/
14:03:12 vikas messaged me that he most likely won't be able to attend today
14:03:34 thank you all for showing up
14:03:37 #topic kuryr-lib
14:03:50 #info We released kuryr-lib 0.3.0
14:04:12 #info it includes a refactor of the keystoneauth1 code so now fuxi can leverage it too
14:04:31 In other news
14:05:19 #info svinota pushed a new release of pyroute2 https://github.com/svinota/pyroute2/releases/tag/0.4.12 which includes the namespace fixes we needed
14:05:35 #action apuimedo to send a patch to bump the pyroute2 upper constraints
14:06:21 apuimedo: what exactly did we need from those?
14:07:20 #link https://github.com/svinota/pyroute2/commit/db97c896c12d4aee555fd182b97e426cb561e29b
14:07:34 #link https://github.com/svinota/pyroute2/commit/0aff806bd504172ddc8bb951cc707a1d390a7043
14:07:35 apuimedo: that's https://github.com/svinota/pyroute2/issues/317
14:07:39 and there was one more important one
14:07:50 right
14:08:09 ivc_: that's the issue, and all the patches that fix it were finally part of a release last week
14:08:24 I mean we could use net_ns_pid instead of net_ns_fd
14:08:57 instead of /proc/self
14:09:30 I prefer we move to using paths, isn't that better?
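The two pyroute2 options discussed here can be sketched as follows. This is an illustrative, stdlib-only helper, not kuryr-lib code: `open_netns` and the variable names are hypothetical, and the actual pyroute2 `IPRoute.link()` calls are shown only in comments because they need the library and CAP_NET_ADMIN privileges.

```python
import os

def open_netns(netns_path):
    """Return a file descriptor for a network-namespace path,
    e.g. '/proc/self/ns/net' or '/var/run/netns/<name>'."""
    return os.open(netns_path, os.O_RDONLY)

# Opening our own namespace works unprivileged on Linux:
fd = open_netns("/proc/self/ns/net")

# With pyroute2, a device would then be moved into that namespace
# with either keyword (sketch; requires pyroute2 and root):
#   ip.link("set", index=idx, net_ns_fd=fd)    # fd/path-based
#   ip.link("set", index=idx, net_ns_pid=pid)  # pid-based
os.close(fd)
```

The fd/path-based form is what the container-in-VM work needed, since the target namespace is identified by a bind-mounted path rather than a process ID.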
14:09:56 either one is fine
14:10:22 afaik net_ns_pid was available in the kernel before net_ns_fd
14:10:32 I think vikasc's patch for k8s container-in-vm used the paths, and hence why I asked svinota to cut a new release
14:11:28 anything more on kuryr-lib?
14:12:27 o/
14:12:57 hi hongbin!
14:13:01 #topic kuryr-libnetwork
14:13:23 we got quite a few patches to review, irenab :-)
14:13:32 apuimedo: indeed
14:13:47 ltomasbo's https://review.openstack.org/402462 is passing the gates, so we should try to get it in soon
14:14:28 also we should review these new tests
14:14:29 are we waiting to get a few fullstack tests or will they come as a follow-up?
14:14:35 #action irenab apuimedo to review https://review.openstack.org/414903
14:14:45 for container-in-vm?
14:14:48 Follow-up
14:15:22 we probably want them as a separate gate with a beefier node
14:15:33 the documentation for setting up the environment for the nested case should be included
14:16:02 similar to the one in kuryr-k8s
14:16:31 agreed
14:16:34 ltomasbo: ^^
14:16:51 #action ltomasbo to include docs for testing out the container-in-vm
14:17:05 ok, I can add something similar, but please do the review on the rest
14:17:11 sure
14:18:47 anything more on kuryr-libnetwork (apart from vikas, irenab and I having to finish the review queue)?
14:19:28 uhm, I would like to ask whether it could be interesting to change the subnetpool request logic
14:19:43 mchiappero: go ahead
14:20:18 currently when a pool is requested: if the name is passed in, it always creates a new subnetpool (never reuses)
14:20:37 without the name it uses the default, but fails if one is already present
14:20:47 so there is no way to reuse a pool by providing the name
14:20:57 this can be useful for nested containers
14:21:12 mchiappero: can you link the code sections
14:21:15 ?
14:21:22 of the existing code?
14:21:34 yeah
14:21:46 * apuimedo thinking about meeting logs :P
14:21:47 I'm referring to the whole RequestPool function
14:22:03 mchiappero: are you talking about the case when the provided name matches an already existing name?
14:22:43 it would be nice to find a clean way to let different containers in different VMs use consistent docker networks, so essentially sharing the same network and subnetpool resources
14:22:58 irenab: right
14:23:23 mchiappero: sounds to me like it's a bug in the current implementation
14:23:25 well, whatever method is deemed appropriate to allow this
14:23:34 is that interesting?
14:23:50 #link https://github.com/openstack/kuryr-libnetwork/blob/f44c3603802af9918fa9021e64eb1add425ba41e/kuryr_libnetwork/controllers.py#L1181-L1272
14:24:16 yes
14:24:17 irenab: I'm not too sure about the rationale behind this, maybe there is a reason I'm not aware of :)
14:25:32 ok, so, I don't have a proposal and lack time these days, but if you have one let's discuss it in the chat
14:25:44 "This API is for registering an address pool with the IPAM driver. Multiple identical calls must return the same result. It is the IPAM driver's responsibility to keep a reference count for the pool."
14:26:07 mchiappero: so we are violating the contract of "Multiple identical calls must return the same result"?
14:26:39 uhm, probably...
14:26:45 mchiappero: can you please report a bug?
14:27:05 ok, I'll double check and report it if so
14:27:43 (I'm done on this topic)
14:28:17 thanks
14:28:21 moving on
14:28:35 #topic kuryr-kubernetes
14:29:48 #action ivc_ to review Irena's services doc patch https://review.openstack.org/#/c/416228/
14:30:03 irenab: do you know why the dragonflow gate fails?
14:30:31 apuimedo: I think the patch to infra that fixes this was just merged. Let me retrigger the gate
14:31:27 irenab: thank
14:31:29 *thanks
14:32:34 #action ivc_ to review vikas' container-in-vm patch https://review.openstack.org/#/c/410578/
14:33:16 apuimedo: sure
14:33:48 thanks ivc_!
14:35:07 ok
14:35:47 vikasc has also been working on a demo that shows kuryr-kubernetes with openshift and openstack on bare metal
14:36:00 I think next week he'll probably share an asciinema link ;-)
14:36:04 apuimedo: any video available?
14:36:11 thanks :-)
14:36:25 we're still shortening the video
14:36:27 :-)
14:36:38 but the good news is that everything works well
14:36:45 he'll send some patches upstream
14:37:01 about connecting to SSL kubernetes API servers and other things
14:37:15 that he just hacked on the environment
14:37:47 apuimedo: it's probably time to move to some real k8s client in kuryr-k8s
14:38:00 apuimedo: pykube maybe
14:38:23 ivc_: for devstack?
14:38:29 ivc_: that's the one we used for the fullstack tests on Midokura's PoC
14:38:53 the problem with using that for the "watch" functionality is that there are doubts about pykube's maintenance
14:39:06 I, at least, haven't checked how they maintain it
14:39:09 irenab: no, to replace 'requests' in https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/k8s_client.py
14:39:25 and unless we are at peace with how that is done
14:39:34 I'd not be in favor
14:39:52 (as much as I want to drop code)
14:40:13 #action apuimedo to study pykube maintainership and distro presence
14:40:34 ivc_: are there any clients recommended by the k8s guys?
14:40:44 apuimedo: what I mean is that extending k8s_client.py into a full-featured client does not seem like a good idea
14:40:55 ivc_: agree
14:40:57 irenab: we can move to grumpy and use the existing k8s go client xD
14:41:22 ivc_: did you hear about grumpy?
14:41:28 apuimedo: maybe not a bad idea :-)
14:41:48 not at all, but we'd have to get rid of any C extensions we have
14:41:56 and grumpy is alpha
14:42:19 ivc_: for the short term, my concern is that if we pick pykube, we have to be able to contribute to it
14:42:25 yeah, seems a bit early to use it
14:42:35 and add things we may need, like SSL, websockets, etc.
14:42:36 apuimedo: isn't grumpy a python -> go transpiler?
14:42:48 ivc_: it allows you to use Go classes as well
14:42:50 :-)
14:42:54 ah ok
14:43:27 I wonder what the TC would have to say about python running on a golang env
14:43:31 but it's a very good point on the k8s client code. It's something that ideally shouldn't be part of openstack/kuryr-kubernetes
14:43:41 ivc_: I'll let you guess :P
14:43:45 :P
14:44:20 alright
14:44:23 anything else?
14:44:45 ivc_: I suppose you'll start splitting up the services patch
14:44:52 apuimedo: yup
14:45:05 cool, hopefully we can merge container-in-vm soon too
14:45:20 I'll go to the PTG and see if magnum can start using kuryr-kubernetes too
14:45:32 once that is merged
14:45:52 magnum requires the trust feature AFAIK
14:45:53 I hope to finish most of the services patch split this/next week
14:46:07 hongbin: I thought that's transparent if we support keystoneauth1
14:46:12 isn't it?
14:46:33 apuimedo: not sure, need to investigate that
14:46:41 * janonymous got late for the meeting, reading backlogs
14:46:51 janonymous: welcome
14:46:53 ivc_: cool
14:47:18 I'm hoping next week I can start work on the port file patch
14:47:33 and we can start planning the CNI split
14:47:50 (which is almost a precondition for the port reusal driver)
14:48:03 s/reusal/reuse/
14:48:52 I saw another k8s client here a few days back: https://github.com/kubernetes-incubator/client-python
14:49:15 :O
14:49:19 great
14:49:22 #link https://github.com/kubernetes-incubator/client-python
14:49:27 we gotta check that out!
14:50:05 it looks very young though (3 months of history)
14:50:27 yeah! thought to mention it to avoid duplicated efforts
14:50:27 interesting, urllib3 directly instead of requests
14:50:48 apuimedo: it also uses swagger to generate code, similar to https://github.com/openstack/python-k8sclient
14:51:27 :O
14:52:09 I'm in principle against code generation...
Makes for unidiomatic APIs, but let's check it
14:52:27 apuimedo: in this case swagger is the right thing imo
14:52:33 noted
14:52:36 #topic general
14:52:44 any other topic before we close for today?
14:53:05 apuimedo: I would spend a few minutes giving an update on fuxi :)
14:53:36 apuimedo: might I?
14:54:05 hongbin: of course
14:54:08 sorry
14:54:09 #topic fuxi
14:54:14 #chair hongbin
14:54:14 Current chairs: apuimedo hongbin
14:54:31 during the holiday, we got the gate voting on the dsvm jobs
14:55:01 besides that, there are several small fixes proposed
14:55:44 most importantly, we released a new kuryr-lib, which made the keystoneauth patch pass the gate
14:55:46 #link https://review.openstack.org/#/c/410403/
14:55:56 apuimedo: that is it from me
14:56:06 very well
14:56:21 thanks hongbin
14:56:28 and sorry that I forgot the section
14:56:33 apuimedo: oh, btw, zhangni was looking for feedback on the manila integration: https://review.openstack.org/#/c/375452/
14:56:53 apuimedo: np
14:57:01 #action irenab apuimedo to review the manila integration patch
14:57:16 Thank you all for joining today!
14:57:18 #endmeeting
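As a closing sketch for the k8s client thread above: the "watch" functionality that kuryr-kubernetes' k8s_client.py implements over raw HTTP boils down to consuming a line-delimited JSON event stream from `GET ...?watch=true`. The minimal, self-contained sketch below shows just that parsing step with a canned response; `watch_events` is a hypothetical name, and a real client additionally handles chunked transfer, reconnects, and resourceVersion bookkeeping.

```python
import io
import json

def watch_events(stream):
    """Yield Kubernetes watch events (dicts with 'type' and 'object')
    from a line-delimited JSON stream, the wire format returned by
    the API server for watch requests."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Canned response standing in for a live API server connection:
raw = io.StringIO(
    '{"type": "ADDED", "object": {"kind": "Pod", "metadata": {"name": "a"}}}\n'
    '{"type": "MODIFIED", "object": {"kind": "Pod", "metadata": {"name": "a"}}}\n'
)
events = list(watch_events(raw))
```

Both pykube and client-python wrap this same event stream, which is why the choice discussed in the meeting is mostly about maintenance and generated-vs-handwritten API surface rather than wire-level behavior.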