14:03:48 #startmeeting kuryr
14:03:49 Meeting started Mon Dec 19 14:03:48 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:52 The meeting name has been set to 'kuryr'
14:03:59 Hello everybody and welcome to another weekly IRC meeting!
14:04:02 Who's here?
14:04:08 \o/
14:04:09 o/
14:04:11 o/
14:04:15 o/
14:04:17 hi
14:04:17 o/
14:04:18 o/
14:04:25 o/
14:04:31 o/
14:04:36 o/
14:04:45 o/
14:04:48 That's a nice attendance :-)
14:05:05 #topic kuryr-libnetwork
14:05:15 o/
14:05:31 Alright then, let's get started
14:06:51 We got some nice fixes last week
14:07:14 I want to draw attention to one in particular from yedongcan
14:07:16 https://review.openstack.org/404591
14:07:19 #link https://review.openstack.org/404591
14:07:34 thanks for the patch yedongcan!
14:08:13 apuimedo: you're welcome
14:08:50 heh, I actually wanted to talk about https://review.openstack.org/#/c/401874/5
14:08:53 sorry about that
14:08:58 I copied the wrong link
14:09:00 :-)
14:09:03 #link https://review.openstack.org/#/c/401874/5
14:09:09 this was merged the previous week
14:09:26 and it solved an IPAM address release issue
14:10:08 mchiappero brought up that this solution poses some difficulty for the ongoing nested work
14:10:26 #link https://review.openstack.org/#/c/400365/
14:10:37 that would otherwise be pretty much ready for merging
14:10:47 mchiappero: could you please describe the issue
14:11:52 I'm not sure where the problem actually originates, but that code ends up returning no subnets for nested containers
14:12:15 that means that neutron ports don't get deleted anymore with nested containers
14:12:40 I'm wondering whether having a subnetpool_id in the subnet is now a requirement
14:12:55 mchiappero: you mean that after this change, when a port was being used for nested, it can't be deleted, right?
14:13:22 or whether there is another way to refactor that code in order to handle nested containers as well
14:13:27 apuimedo: exactly
14:13:49 it doesn't get deleted by kuryr anymore, of course you can delete it manually
14:13:58 apuimedo: got it, i had discussed it with ltomasho, and limao filed a bug for this: https://bugs.launchpad.net/kuryr-libnetwork/+bug/1651015
14:13:59 Launchpad bug 1651015 in kuryr-libnetwork "kuryr-libnetwork did not clearn neutron port when use existed neutron network" [Undecided,New]
14:14:20 #link https://bugs.launchpad.net/kuryr-libnetwork/+bug/1651015
14:14:48 what would be the best approach in your opinion?
14:15:35 yedongcan: I just 'confirmed' the bug
14:17:02 yedongcan: did you have something in mind to address the bug already?
14:17:21 maybe we can continue offline, but it would be nice to get this sorted
14:18:47 mchiappero: let's try to draft a proposal on the bug description board
14:18:48 I guess we have a number of other topics, let's continue on the kuryr channel :)
14:18:48 mchiappero: we can devote a few more minutes
14:18:50 :-)
14:19:10 I will write up some considerations in the LP, I think this needs further discussion.
14:19:51 yedongcan: mchiappero: irenab: Agreed
14:19:55 irenab: I don't have a proposal yet, lately I have little time so I could not check the new code, but I'll trye
14:19:57 Yes, we can discuss it in channel later
14:20:00 it's related to existing Neutron resources, overlapping CIDRs and other things.
14:20:02 *try
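To make the refactoring question above concrete, here is a minimal, purely hypothetical sketch of the kind of fallback lookup mchiappero is asking about: prefer subnets tied to Kuryr's subnetpool, but fall back to a plain CIDR match so that ports on pre-existing (e.g. nested/VM) Neutron subnets without a subnetpool_id can still be found and cleaned up. Function and parameter names are illustrative only, not the actual kuryr-libnetwork code.

    # Hypothetical illustration only -- not the actual kuryr-libnetwork code.
    def lookup_subnets(neutron_client, network_id, cidr, subnetpool_id=None):
        """Find the Neutron subnets backing a libnetwork address pool.

        Prefer subnets created by Kuryr from its subnetpool, but fall back
        to a plain CIDR match so that pre-existing (e.g. nested/VM) subnets
        without a subnetpool_id are still returned and their ports cleaned up.
        """
        filters = {'network_id': network_id, 'cidr': cidr}
        if subnetpool_id:
            subnets = neutron_client.list_subnets(
                subnetpool_id=subnetpool_id, **filters)['subnets']
            if subnets:
                return subnets
        # Fallback for subnets that were not created from Kuryr's subnetpool.
        return neutron_client.list_subnets(**filters)['subnets']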
14:20:12 for now. Maybe we can go with the workaround that the network for the VM gets created first with Kuryr by running a container
14:20:22 and link the bug in mchiappero's patch
14:20:49 this would be enough to allow us to merge mchiappero's patch IMHO, since it is not introducing a problem, just hitting it harder
14:21:09 since the normal flow to try nested is to create Nova instances on end-user-created neutron nets
14:21:25 do you all agree with that?
14:21:55 apuimedo: sounds reasonable
14:22:04 ok by me
14:22:05 Agree.
14:22:24 mchiappero: I'll push an update of your patch with a link to the bug
14:22:32 then I'll +2
14:22:40 thank you
14:22:43 and ping irenab and vikasc for review
14:22:57 #topic kuryr-kubernetes
14:23:25 #info ivc_ made a very nice demo of the services patch https://asciinema.org/a/51qn06lgz78gcbzascpl2du4w
14:23:31 Thanks a lot for that ivc_
14:23:34 aye
14:23:46 regarding the patch, the unit tests should be ok now, maybe some mock instances could be reused, anyone willing to have a look and improve it is very welcome
14:23:48 +1
14:24:04 as a reminder, this patch is not meant to be merged right now
14:24:16 rather, it should be split into smaller patches
14:24:27 so with master code you can't replicate the demo just yet
14:24:56 apuimedo in that thread http://lists.openstack.org/pipermail/openstack-dev/2016-December/109163.html
14:25:05 #info irenab posted a WIP patch for adding OVS native support https://review.openstack.org/#/c/412215/
14:25:16 Alexander Stafeyev has a good point that we don't have very informative user-facing docs
14:25:24 up until now we only have ovs hybrid
14:25:58 it is missing unit tests, but please review/try it if you have time
14:26:09 #info irenab reports a ~2x pod creation burst speed improvement when using ovs-native binding with Dragonflow
14:26:35 irenab i've skimmed over that patch and it does look good, but i've not tested it yet
14:26:39 ivc_: docs in general or on k8s?
14:26:48 irenab: May I suggest that you put a local.conf.df and a local.conf.ovsnative in the devstack plugin dir?
14:27:02 apuimedo: good idea, will add it
14:27:05 It makes things easier for reviewers and people wanting to try it out
14:27:07 thanks irenab
14:27:24 apuimedo: will post as a separate patch
14:27:27 #action irenab to add local.conf.df and local.conf.ovsnative to the ovs native WIP patch
14:27:31 irenab i'm mostly speaking about kuryr-k8s now, i've not checked the kuryr-libnetwork docs situation
14:27:31 darn
14:27:35 put the action too soon
14:27:37 :P
14:27:57 mchiappero: tests can be improved afterwards
14:28:11 but let's put it in the list of low-hanging fruit for new contributors
14:28:31 irenab apuimedo also our README.md for kuryr-k8s has some stubs/leftovers from the template
14:28:33 ivc_: let's add a doc item to the critical tasks for the first k8s-kuryr release
14:28:43 ^ +1
14:28:43 agreed
14:29:18 #action put README.md and doc fixes in the kuryr-kubernetes 1.0.0 milestone
14:29:42 I'll be posting a Dockerfile for the controller
14:30:08 apuimedo we also still have some races with docker containers in devstack
14:30:09 so we can have automatically built Docker images of the controller
14:30:27 I'm still considering whether to make one for kubelet that inherits from hyperkube
14:30:37 ivc_: maybe worth adding the demo dockerfile to the repo in case someone wants to easily reproduce the demo you did
14:30:42 ivc_: which?
14:31:06 apuimedo k8s-controller-manager starts before etcd is fully up and running and crashes
14:31:16 irenab: agreed. I propose to put it into contrib
14:31:24 apuimedo we need some more 'wait_for' there
14:31:27 ivc_: thanks
14:31:38 I thought I had it. Anyway, I have code for that
14:31:40 I'll fix it
14:32:01 #action apuimedo to add the wait for dependencies in devstack container start
14:32:03 apuimedo it's not specific, it's just that it is racy by nature, being async with 'run_container'
14:32:26 ivc_: sure, in the midonet PoC I had it already solved, so it won't be a big deal to move over what I did there
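To illustrate the 'wait_for' idea being discussed, here is a small, hypothetical polling loop that blocks until etcd answers its /health endpoint before dependent containers such as k8s-controller-manager are started. This is only a sketch of the approach; the actual devstack change would likely be shell-based and may look different.

    # Hypothetical sketch of the 'wait_for etcd' idea; not the devstack code.
    import time
    import urllib.request

    def wait_for_etcd(health_url="http://127.0.0.1:2379/health",
                      timeout=60, interval=2):
        """Poll etcd's health endpoint until it reports healthy or we give up."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with urllib.request.urlopen(health_url, timeout=5) as resp:
                    if resp.status == 200 and b'"health"' in resp.read():
                        return
            except OSError:
                pass  # etcd is not accepting connections yet
            time.sleep(interval)
        raise TimeoutError("etcd did not become healthy within %s seconds" % timeout)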
14:33:44 #info vikasc posted a patch for bringing nested vlan vifs to kuryr-kubernetes
14:33:50 #link https://review.openstack.org/#/c/410578/
14:34:13 vikasc: looking forward to the new patch set addressing the posted comments :-)
14:34:19 i still need to test changes locally
14:34:21 :)
14:34:35 will be updating soon
14:34:45 vikasc: no worries. I know you are busy with deployments
14:35:08 irenab: vikasc: https://review.openstack.org/#/c/409797/
14:35:22 let's have the doc issue solved before it materializes
14:35:37 vikasc maybe we can have some doc on 'how to run kuryr-k8s in nested mode' along with those patches?
14:36:01 ivc_: agreed
14:36:08 ivc_, sure, good idea
14:36:21 vikasc: please, make it part of the patch, and if it needs some local.conf changes, please, put a sample in devstack/
14:36:38 and this goes in general for everybody
14:36:46 apuimedo, ack
14:36:48 It's important to help people reproduce
14:36:49 :-)
14:37:00 +1
14:37:04 +1
14:37:04 +1
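For readers unfamiliar with the nested-VLAN vif work referenced above: the general approach is that each container running inside a VM gets its own 802.1Q VLAN id on the VM's trunk port. Below is a purely illustrative sketch of the per-VM VLAN-id bookkeeping such a driver needs; the class and method names are hypothetical and not taken from the patch.

    # Hypothetical per-VM VLAN-id bookkeeping for nested (trunk/subport) pods.
    class VlanIdPool:
        """Hands out free 802.1Q VLAN ids for containers on one VM's trunk port."""

        def __init__(self, vlan_min=1, vlan_max=4094):
            self._free = set(range(vlan_min, vlan_max + 1))
            self._used = {}  # container/pod id -> vlan id

        def allocate(self, pod_id):
            if not self._free:
                raise RuntimeError("no free VLAN ids left on this trunk")
            vlan_id = self._free.pop()
            self._used[pod_id] = vlan_id
            return vlan_id  # would become the subport's segmentation_id

        def release(self, pod_id):
            self._free.add(self._used.pop(pod_id))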
14:37:13 anything else about kuryr-kubernetes?
14:38:02 apuimedo pete from port-direct suggested that we could have some integration with kolla
14:38:06 just as a reminder, some of the more burning items are the /run files deletion and the fullstack tests
14:38:11 ivc_: I agree
14:38:20 I think this is a very interesting field to explore
14:38:34 portdirect already did a similar experiment before
14:38:37 with his 'harbor'
14:38:55 aye
14:39:00 do we have anybody here directly interested in deploying openstack over kuryr-kubernetes/kubernetes ?
14:39:22 apuimedo: not sure what it means. Can you please elaborate?
14:39:23 me
14:39:30 i wanna give it a try
14:39:33 sure
14:39:38 what it means is
14:39:52 you deploy keystone and Neutron with K8s hostmode networking
14:40:03 then, you deploy kuryr-kubernetes too in hostmode
14:40:10 pointing to the aforementioned services
14:40:37 from then on, the rest of the OSt services get deployed with kuryr/neutron providing the networking
14:40:56 and potentially having a driver for putting each service in a different subnet
14:41:02 (by namespace)
14:41:14 depending on the deployer's architecture and needs
14:41:27 that's why I'm asking for people directly interested
14:41:32 apuimedo: thanks
14:41:43 apuimedo it's essentially TripleO, right?
14:41:51 because it'd be nice to have some requirements for the first plugin (set of resource drivers)
14:42:04 ivc_: quite similar to what it does
14:42:21 I believe in the harbor case, for example, there was no second-level neutron
14:42:57 there are a lot of possibilities as to which deployment models you can serve
14:43:00 apuimedo we need to take the ironic case into account too
14:43:06 ivc_: Indeed
14:43:11 with keystone i had a question, is there a need to register kuryr endpoints in keystone also?
14:43:27 janonymous: it is not necessary
14:43:28 janonymous for kuryr-k8s there are no endpoints
14:43:36 but it is nice to do
14:43:42 ahh..okay
14:43:54 ivc_: IIRC in devstack I registered it somewaht
14:43:57 *somehow
14:44:07 I don't remember if it is in the master version though
14:44:09 only the username was registered
14:44:15 okay
14:44:23 ivc_: it will probably end up having an endpoint for reporting
14:44:25 apuimedo janonymous maybe we'd need to look at k8s integration with OSt keystone
14:44:28 and watchdog
14:44:41 but that's not the sort of stuff that keystone is interested in
14:44:43 :-)
14:45:00 ivc_: agreed
14:45:02 yeah,
14:45:26 i mean k8s already has it, we just need to leverage it
14:45:58 ivc_: I think it is an ongoing thing, the k8s keystone support
14:46:26 apuimedo: so the answer is that we probably need to register kuryr as a service?
14:46:40 alright then. As I said, please, let's use the thread ivc_ started on the ML, which portdirect answered, to move this forward
14:46:58 irenab: 'need' is a strong word
14:47:13 I think we should in general register the service
14:47:41 but we do not have it as a hard requirement
14:47:51 nor are we likely to leverage it in this cycle
14:47:52 apuimedo we might eventually get to the point where kuryr-k8s has some REST API
14:48:00 ivc_: I was hinting at that
14:48:14 apuimedo but imo that's not gonna happen before the 1.0.0 release
14:48:21 apuimedo: for libnetwork it is not needed, correct?
14:48:24 for reports, watchdogs, (even for split daemon duty done by the controller)
14:48:46 irenab: for libnetwork you can have a service without endpoints too
14:48:59 but with our model, putting endpoints wouldn't work
14:49:12 ok, let's move on
14:49:14 #topic fuxi
14:49:28 hi
14:49:39 hongbin: thanks for reaching out to cinder go
14:49:45 #chair hongbin
14:49:46 Current chairs: apuimedo hongbin
14:49:49 apuimedo: my pleasure
14:49:49 you have the floor
14:50:19 apuimedo mentioned that i just sent a mail to john griffith about the cinder docker driver
14:50:37 we agreed to consolidate the effort into one (sort-of)
14:50:51 i think this is good news, i will reach out to him about that
14:51:06 next one is
14:51:16 it definitely is good news
14:51:18 fuxi is switching to keystone v3
14:51:22 hongbin: please, use 'info'
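For context on what the Keystone v3 switch involves: with v3, services typically authenticate through keystoneauth1's v3 password plugin and a shared session, with the new domain fields filled in. The snippet below is a generic sketch with placeholder credentials and endpoints; whether fuxi ends up reusing kuryr.lib helpers for this is exactly the open question raised next.

    # Generic Keystone v3 authentication sketch (placeholder values only).
    from cinderclient import client as cinder_client
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='http://controller:5000/v3',   # v3 endpoint, not /v2.0
        username='fuxi',
        password='secret',
        project_name='service',
        user_domain_name='Default',             # domain scoping is new in v3
        project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Other OpenStack clients can then reuse the same authenticated session,
    # e.g. Cinder for fuxi's volume operations.
    cinder = cinder_client.Client('2', session=sess)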
14:51:44 hongbin: do you think you can reuse some of the openstack/kuryr library code for the keystone v3 support?
14:52:07 apuimedo: not sure exactly, need to look into that
14:52:19 #info fuxi is switching to keystone v3
14:52:29 hongbin: please do
14:52:33 apuimedo: here are the patches i proposed
14:52:34 if we can reuse code, all the better
14:52:37 #link https://review.openstack.org/#/c/410403/
14:52:42 #link https://review.openstack.org/#/c/409982/
14:52:56 #action hongbin look into how to reuse keystone v3 code in kuryr lib
14:53:21 apuimedo: however, we need to merge the v3 patches above to pass the gate
14:53:44 hongbin: understood
14:53:53 I'll take it into account for the reviews
14:53:54 The non-voting fullstack pipeline is complaining
14:53:59 apuimedo: thx
14:54:13 Here are a few fullstack test patches:
14:54:14 #link https://review.openstack.org/#/c/403931/
14:54:19 #link https://review.openstack.org/#/c/403941/
14:54:31 In addition, we need to get the latest requirements
14:54:36 #link https://review.openstack.org/#/c/373745/
14:54:47 need to merge the patch above as well
14:55:00 good
14:55:13 those are the important items from last week, there are a few fixes as well
14:55:19 #link https://review.openstack.org/#/c/408845/
14:55:24 #link https://review.openstack.org/#/c/409968/
14:55:47 the last thing is i submitted a patch to docker upstream to list fuxi in their docs
14:55:52 #link https://github.com/docker/docker/pull/29468
14:55:58 apuimedo: that is all from my side
14:56:03 thanks hongbin
14:56:06 #topic general
14:56:17 Any other topic, issue, comment from anybody?
14:56:21 https://review.openstack.org/#/c/405203/ could use one more +2 :)
14:56:33 alraddarla_: thanks!
14:56:38 It went under my radar
14:56:42 will get to it today
14:56:48 anything else?
14:56:50 apuimedo, thanks!
14:57:01 * apuimedo is having mox/mock dreams
14:57:38 if I have one more dream of mox appearing like a mushroom amidst a clean forest of mock, I'll take holidays
14:58:19 hahaha, bye bye mox, just 1 more patch for cleanup
14:58:41 hahaha
14:58:52 yay!
14:58:59 Thank you all for joining!
14:59:01 #endmeeting