14:00:38 #startmeeting kuryr
14:00:39 Meeting started Mon May 29 14:00:38 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:43 The meeting name has been set to 'kuryr'
14:00:49 Welcome to Kuryr's weekly IRC meeting
14:00:52 who's here today?
14:01:05 o/
14:01:09 o/
14:01:10 o/
14:02:20 o/
14:02:56 o/
14:03:21 \o
14:03:36 Thank you all for joining!
14:03:41 let's get started
14:03:47 #topic kuryr-libnetwork
14:04:11 #info we've moved to use devstack's etcd as the cluster store
14:04:35 this eases the burden on our devstack plugin and makes us play nice with other services that may need it
14:04:47 etcd3?
14:04:50 yes
14:04:54 cool then ;)
14:05:05 The same should be done for docker, now that there's a separate docker plugin for devstack
14:05:32 however, it currently doesn't let you choose the origin of the docker repo nor the version, so that may be dangerous
14:07:05 I don't foresee many more things before p3 and eventually for the pike cycle
14:07:22 although having global scope nets would be really good
14:07:30 Anything else on kuryr-libnetwork?
14:08:47 alright! Moving on ;-)
14:08:55 #topic kuryr-kubernetes
14:09:59 #info started getting driver/handler opts out of the main config and on to their respective modules with https://review.openstack.org/450113
14:10:14 This should be subject to a refactor later
14:10:43 We have a problem that the CI doesn't catch configuration generation issues
14:10:58 #action apuimedo to file a bug on catching opt generation bugs in CI
14:11:10 heh, apuimedo thanks for that
14:11:19 I was about to file it too
14:11:20 We've been getting a lot of improvements on devstack lately
14:11:44 we should include the oslo_config_generator check on some gate
14:11:49 #info It is now configurable whether devstack should set the neutron defaults
14:11:59 dmellado: I'd prefer it's a separate get
14:12:05 *test
14:12:11 darm
14:12:11 wanted to thank irenab and ivc_ for reviews of my sriov spec and code.
14:12:14 gate
14:12:19 not test, not get, gate
14:12:25 apuimedo: lol
14:12:40 we could create one, but it might be a real quick one, heh
14:12:48 kzaitsev_ws: we'll get there
14:12:50 :-)
14:13:09 oh, ok =) not rushing then )
14:13:40 #info There are some patches under review to make it possible for baremetal devstack deployments to be more functional. You'll be able to run kube-dns, dashboard and so on
14:14:16 ok, next to the reviews part
14:14:19 kzaitsev_ws: almost there
14:14:23 apuimedo kubeadmin?
14:14:34 ivc_: kubeadm deployment?
14:14:39 aye
14:14:43 ivc_: not yet
14:14:47 kubeadm's not there yet
14:14:58 but we should be moving towards that
14:15:06 as hyperkube seems to be getting less and less stable
14:15:08 I decided to postpone it a bit until we can use the standalone devstack docker plugin
14:15:35 I already checked out what's necessary (they messed a lot with how to pass config to specify things like an existing etcd)
14:15:54 so as soon as we make the docker plugin source configurable, we can get started with the move
14:17:11 #info mchiappero: gloug's macvlan effort is practically ready
14:17:36 there's just a couple of comments on the readme that was added at the end of the week
14:17:40 and after that we should merge it
14:18:08 yes, sorry, Gary started working on the documentation but he's out of office these days
14:18:24 mchiappero: no worries. But I'd like to merge asap to avoid rebases
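A minimal sketch of the per-driver option split and the generator check discussed above (14:09-14:12). The group and option names here are made up for illustration, not taken from https://review.openstack.org/450113; what is standard is the oslo.config registration pattern and the list_opts() hook consumed by oslo-config-generator, which is why a gate job that runs the generator would catch a driver module that forgets to expose its options:

```python
from oslo_config import cfg

# Illustrative only: group and option names are hypothetical.
driver_opts = [
    cfg.StrOpt('project',
               help='Default OpenStack project to create resources in.'),
    cfg.IntOpt('retries',
               default=3,
               help='Number of times to retry failed Neutron calls.'),
]

cfg.CONF.register_opts(driver_opts, group='example_driver')


def list_opts():
    # Hook consumed by oslo-config-generator (wired up through an
    # oslo.config.opts entry point). If a driver module forgets to expose
    # its options here, only running the generator, e.g. in a gate job,
    # will surface the gap.
    return [('example_driver', driver_opts)]
```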
14:18:27 I rebased but forgot to edit the README file, I'm working on it now, should be ready in minutes
14:18:31 we count on kzaitsev_ws to reproduce as well
14:18:33 ;-)
14:18:51 mchiappero: pls put up the README file, I'd love to have a look into it ;)
14:18:55 #info sriov spec discussion progresses but there's still contention on multiple handlers
14:19:03 dmellado: there is a readme
14:19:08 it just misses a couple of things
14:19:11 ah, cool then
14:19:18 (sorry but I cannot change the workflow for that patch)
14:19:33 mchiappero: ?
14:19:53 ivc_: kzaitsev_ws: irenab: what's the argument against multiple handlers for a single K8s object?
14:20:15 I ask earnestly because I can almost only see positive aspects to that approach
14:20:37 there's no argument against them per se. It's just that irenab and ivc_ believe these should not apply here
14:20:39 i mean, I knew the previous patchset was not complete and the README was still in progress but I could not set -1 to workflow
14:21:14 apuimedo there's no reason to duplicate code. sr-iov is just a special case of multi-network pods (that were already discussed)
14:21:22 but rather sriov can be used as a basis for multi-vif pods
14:22:37 mchiappero: ah, no worries on that
14:22:39 and eventually the sriov/multi-vif handler will become one with the current single-vif handler
14:22:39 ;-)
14:23:50 ivc_: I can't see it duplicating much
14:24:09 handlers receive an event that is already well formed and then do something about it
14:24:36 right now it uses different annotations. i'd prefer the sriov pod annotation to be stored in the same vif annotation
14:24:45 if that implies a lot of duplication we probably need to move things around
14:24:46 I don't have anything against this approach (extending vif with multi-vif capabilities), so I'm ok with both options
14:24:49 Me too
14:25:07 ivc_: do you mean a single vif annotation for each vif of a multi-homed pod?
14:25:22 or a list of those objects, to be more precies
14:25:24 *precise?
14:25:36 apuimedo: rather an annotation that holds a list of vifs
14:25:38 apuimedo i mean the updated vif annotation will be a list/dict instead of a single object
14:26:26 This breaks the current contract with cni
14:26:55 cni will have to be updated too
14:26:55 Irenab_: that's true
14:27:13 right
14:27:25 but we have to do it in this cycle
14:27:27 we need to be consistent across both parts
14:27:32 but that is inevitable anyway if we were to support multi-vif pods
14:27:37 true
14:27:41 ivc_: Irenab_: right
14:27:43 also ivc_ mentioned that it is possible to go with a 2-phase approach. phase-1) polish the current approach, agree on specifics of multi-vif; phase-2) get rid of the sriov-handler, use multi-vif for everything
14:28:20 I feel like this is a bit like ml2 drivers vs plugins
14:28:22 :-)
14:28:32 indeed :-)
14:28:56 Personally I'd like both approaches to be possible
14:28:59 heh
14:29:22 those that are closest to dev can live in the multi vif handler
14:29:41 kzaitsev_ws: ultimately, for me, it's about how you want to get the ball rolling
14:29:44 apuimedo they are possible, but sriov fits the vifhandler design for the most part and don't forget e.g. pooling support for vifhandler
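To make the annotation change discussed above (14:24-14:27) concrete, here is a rough sketch of the two shapes: one serialized VIF per pod annotation today versus a single annotation whose value is a mapping of interface name to VIF. The annotation key and the VIF fields are assumptions for illustration; kuryr-kubernetes actually stores serialized os_vif objects, which is elided here:

```python
import json

# Roughly the current contract: one VIF under a single pod annotation key.
single_vif = {
    'openstack.org/kuryr-vif': json.dumps(
        {'vif_type': 'ovs', 'vif_name': 'eth0', 'active': True}),
}

# Proposed shape: the same key, but the value becomes a mapping of
# interface name -> VIF, so an SR-IOV VIF is just one more entry and the
# CNI side iterates instead of assuming exactly one interface.
multi_vif = {
    'openstack.org/kuryr-vif': json.dumps({
        'eth0': {'vif_type': 'ovs', 'active': True},
        'eth1': {'vif_type': 'sriov', 'pci_slot': '0000:03:10.1',
                 'active': True},
    }),
}

for ifname, vif in json.loads(multi_vif['openstack.org/kuryr-vif']).items():
    # this is the spot where the updated CNI side would plug each interface
    print(ifname, vif['vif_type'])
```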
14:29:57 if it's a separate handler and then we move it to multi vif, that's fine for me
14:30:06 if you are willing to drive the multi vif handler
14:30:09 I'll be even happier
14:30:11 xD
14:30:15 i'm sure sriov could benefit from pools too and duplicating that effort across 2 handlers is meh
14:30:23 i agree with ivc on the direction
14:30:37 ivc_: it's a bit more special pooling though :-)
14:30:49 i also like going with multi-vif straight away
14:30:52 but I agree that since they are neutron vifs, it makes a lot of sense
14:30:57 rather than a two-phased approach
14:31:11 kzaitsev_ws: how do you see it?
14:31:54 vikasc, we may have multi-vif support before k8s
14:31:55 vikasc multi-vif requires quite a bit of effort, so splitting the effort in 2 phases does not sound unreasonable
14:32:08 like I said I don't have anything against the multi-vif approach. shouldn't be that many changes.
14:32:47 noted.
14:32:50 ivc_: kzaitsev_ws: I think it should be doable before p3
14:32:54 shall we have a multi vif spec rolling?
14:33:01 if it is not there by p3 we'll get into the danger zone
14:33:06 that is if we want experimental/poc sriov support in tree before multi-vif
14:33:30 ivc_: afaik kzaitsev_ws already has experimental code working
14:33:36 in case agreeing on multi-vif specifics would start taking too much time — I would start whining about the two phase thing =))
14:33:48 apuimedo thats my point - we already have the code for non-multi-vif
14:33:49 I think leaving it for now as a [PoC] on gerrit would bring value
14:33:50 :-)
14:34:00 Irenab_: I will start a blueprint
14:34:08 I'm on a roll making blueprints
14:34:09 :-)
14:34:18 xD
14:34:28 hehe
14:34:30 p3 is Jul 24-28 so 2 months from now )
14:34:35 right
14:34:37 cool, we can chat on design next week
14:35:16 btw do we plan the daemon-exec cni split for P or Q?
14:35:20 if we're not done by then with multi vif I'll eat a full plate of vichyssoise
14:35:29 ivc_: I'd like to have it in P
14:35:35 I already made the blueprint
14:35:46 and janonymous said he'd start work on it
14:35:58 (and there's little I like less than vichyssoise)
14:36:09 apuimedo, do you have the link?
14:36:09 apuimedo: no wonder
14:36:13 sure
14:36:19 (not of the vichyssoise recipe)
14:36:37 https://blueprints.launchpad.net/kuryr-kubernetes/+spec/cni-split-exec-daemon
14:36:40 #lin khttps://blueprints.launchpad.net/kuryr-kubernetes/+spec/cni-split-exec-daemon
14:36:41 thanks!
14:36:42 #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/cni-split-exec-daemon
14:37:22 anything else on multi vif?
14:37:29 o/
14:37:35 So, what's the summary: we make multi-vif then sriov?
14:37:45 or multi-vif as part of sriov?
14:37:48 kzaitsev_ws: let me put it in points
14:38:07 1. Post the current working code for sriov with [PoC] tags
14:38:19 2. Draft the spec on multi-vif
14:38:38 3. Implement multi-vif with sriov
14:38:49 4. pray for 3rd party CI for sriov
14:38:51 :-)
14:39:00 for 4) pray even harder
14:39:02 xD
14:39:39 on (2) I meant blueprint. I don't think a spec is in order
14:39:56 phew =)
14:40:00 apuimedo i have mixed feelings about "http server on the unix domain socket" from the daemon-exec bp
14:40:02 Maybe a devref
14:40:09 ltomasbo: I think you wanted to talk about multi-network
14:40:14 ivc_: oh
14:40:23 apuimedo, yes
14:40:28 ivc_: please, express the bad feelings
14:40:36 ltomasbo: wait a moment
14:40:40 to make sure it will be obvious what to follow for the next implementation
14:40:40 wanted to talk about it
14:40:41 ok
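On the "http server on the unix domain socket" idea from the cni-split-exec-daemon blueprint (14:40:00): a minimal, stdlib-only sketch of what such a daemon endpoint could look like, just to ground the discussion. The socket path and the echo handler are assumptions, not the blueprint's design; the point is only that plain http.server runs over AF_UNIX with very little code:

```python
import json
import os
import socketserver
from http.server import BaseHTTPRequestHandler

SOCK_PATH = '/run/kuryr/cni.sock'   # hypothetical path


class CNIRequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        cni_args = json.loads(self.rfile.read(length) or b'{}')
        # A real daemon would look up the pod's VIF annotation and return
        # a CNI result; this sketch just echoes the arguments back.
        body = json.dumps({'received': cni_args}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)


class UnixHTTPServer(socketserver.UnixStreamServer):
    def get_request(self):
        # accept() on AF_UNIX returns an empty client address; fake one so
        # BaseHTTPRequestHandler's request logging does not trip over it.
        request, _ = super().get_request()
        return request, ('uds', 0)


if __name__ == '__main__':
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    UnixHTTPServer(SOCK_PATH, CNIRequestHandler).serve_forever()
```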
14:40:55 anyways. I'm going to update my spec tomorrow (or the day after) and will try to mention things around multi-vif
14:41:04 unless someone bests me at that =)
14:41:49 apuimedo http/rest feels like overkill for a trivial rpc over uds
14:41:49 Irenab_: it would either be part of my spec or a separate devref/spec anyway
14:41:59 great
14:42:09 kzaitsev_ws: alright. Sorry about the need to change it again
14:42:25 let's follow up on the review
14:42:50 ivc_: what would you propose then?
14:42:57 ivc_: I chose it because it is what docker already uses, so principle of least surprise
14:43:19 but yes. I'm curious about what you propose as well :-)
14:43:40 hongbin: we'll get to fuxi in a moment
14:43:48 apuimedo: ack
14:44:11 apuimedo something akin to python's multiprocessing module's ipc
14:44:12 o/ sry late
14:44:43 before we get there, I'd also like to ask who would be available to talk about functional testing next week. Irenab_ I know you're almost off this week but would next week's Wed 15:00 CEST work?
14:45:00 ivc_: kzaitsev_ws mchiappero ltomasbo ^^
14:45:11 ivc_: I'll have to check it out. I do not remember what they use
14:45:25 dmellado: I will be available
14:45:33 dmellado, works for me
14:45:34 I should be available — my Wednesdays are mostly free
14:45:34 dmellado, I will check the time. The day is fine
14:45:35 apuimedo: I was already counting on you, that's why I didn't ask
14:45:40 :D
14:46:04 a bit earlier would be better
14:46:06 ivc_: I'll check it and ping you
14:46:07 CEST is +2?
14:46:13 ivc_: that's right
14:46:14 ivc_: yep
14:46:22 ltomasbo: go ahead with multi network
14:46:25 we have 4 mins
14:46:27 before fuxi
14:46:31 jeje
14:46:32 ok
14:46:33 let me rephrase it to 13:00 UTC, just in case ;)
14:46:40 dmellado: much better
14:46:49 FYI I've put a draft here, comments are welcome
14:46:52 https://github.com/danielmellado/kuryr-tempest-plugin
14:46:54 just want to check what the opinion was on allowing kuryr-kubernetes to use multiple subnets
14:47:04 or to create the subnet if it does not exist
14:47:09 similarly to kuryr-libnetwork
14:47:19 ltomasbo: what would the use case for that be?
14:47:23 subnet per namespace?
14:47:25 ltomasbo: You should get up to date on the PoC going on in upstream k8s
14:47:31 about using annotations for multi network
14:47:42 and if possible, make our implementation match what they are doing
14:47:50 +1
14:47:52 apuimedo, that would be great!
14:47:53 so that we are an implementation of what will eventually settle
14:48:10 apuimedo ltomasbo maybe get in touch with kzaitsev_ws about multi network support
14:48:12 Irenab_: I'd rather we go head first into multiple nets as in upstream
14:48:12 my current use case is from the NFV world
14:48:13 ltomasbo: we can have a look at that, + going to the k8s slack at some point
14:48:30 where you may have a template defining your network service (with or without multi network support)
14:48:35 just wanted to understand the requirement
14:48:43 but that needs to deploy some networks/subnets and create pods on them
14:48:51 Irenab_: upstream they decided to do multiple annotations IIRC
14:48:59 so it should match our workflow nicely
14:49:04 its implementation,
14:49:23 annotations + TPR
14:49:26 yes, this is the direction
14:49:51 dmellado, agreed
14:50:01 vikasc: can you remind me what TPR stands for?
14:50:11 third party resources
14:50:11 I was about to ask the same...
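For contrast with the HTTP-over-UDS sketch above, here is roughly what the alternative ivc_ hints at (14:44:11) could look like using the stdlib multiprocessing.connection primitives: a request/response exchange over the same unix socket with no HTTP framing. The socket path and message shape are assumptions for illustration; the trade-off raised in the meeting is familiarity and tooling (HTTP is what docker already exposes) versus simplicity:

```python
import multiprocessing.connection as mpc
import os

SOCK_PATH = '/run/kuryr/cni.sock'   # hypothetical path


def serve():
    # daemon side: accept one request per connection and answer it
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    with mpc.Listener(SOCK_PATH, family='AF_UNIX') as listener:
        while True:
            with listener.accept() as conn:
                cni_args = conn.recv()          # any picklable object
                # a real daemon would resolve the pod's VIF(s) here
                conn.send({'received': cni_args})


def cni_add(cni_args):
    # CNI executable side: one round trip per invocation
    with mpc.Client(SOCK_PATH, family='AF_UNIX') as client:
        client.send(cni_args)
        return client.recv()
```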
14:50:16 ahh, right
14:50:27 vikasc: ltomasbo: kzaitsev_ws: you three should probably sync up on this
14:50:35 +1
14:50:37 heh, yeah
14:50:41 +1
14:50:41 agree
14:50:42 I checked and it didn't make any sense
14:50:44 http://www.urbandictionary.com/define.php?term=TPR
14:50:46 maybe we can schedule a discussion
14:50:54 although it probably deserves a videoconf
14:51:04 apuimedo: maybe we can cover both that and FT
14:51:06 multi net?
14:51:09 in next week's
14:51:19 dmellado: sounds good
14:51:22 you have 20 minutes
14:51:34 multi net has 35
14:51:35 xD
14:51:38 lol
14:51:49 dmellado, 12:00 UTC is better for me
14:51:55 ltomasbo: vikasc: kzaitsev_ws: try to have checked the upstream direction and PoC status by then
14:51:56 Irenab_: works for me as well
14:52:04 I'll send the invite for 12:00 UTC then
14:52:05 and have some reference links
14:52:18 apuimedo, sure
14:53:17 alright
14:53:21 #topic fuxi
14:53:29 #chair hongbin
14:53:30 Current chairs: apuimedo hongbin
14:53:34 hi all
14:54:00 last week, we had a meeting with john griffith, dims, hyper folks, and others
14:54:11 we discussed the plan for k8s storage
14:54:17 here are the logs of the meeting
14:54:23 #link http://eavesdrop.openstack.org/meetings/fuxi_stackube_k8s_storage/2017/fuxi_stackube_k8s_storage.2017-05-23-14.02.log.html
14:54:41 as a follow-up, we will create a new repo for fuxi-go
14:54:52 there is a proposal for that
14:54:58 #link https://review.openstack.org/#/c/468635/
14:55:00 #info fuxi golang port repo submitted for review
14:55:12 this is for the docker volume api
14:55:22 yes
14:55:39 we need to deal with the lack of os-brick in golang as well
14:55:57 because fuxi-go will depend on os-brick to do the volume connection
14:56:20 therefore, we will create another repo for porting os-brick to go
14:56:22 #link https://review.openstack.org/#/c/468635/
14:56:52 that is about everything from my side
14:56:59 comments?
14:57:13 hongbin: can os-brick be called as a command?
14:57:26 apuimedo: right now, no
14:57:30 #link https://review.openstack.org/#/c/468536/
14:57:40 for reference )
14:58:25 apuimedo: the alternative is to turn os-brick into a service, but after discussing with the cinder team, we decided to port the whole library to go
14:58:26 the reason for go is to move it to K8s later?
14:58:37 thanks kzaitsev_ws!
14:58:50 Irenab_: it's to unify the different efforts
14:59:17 we should be able to import a lot of the code from j-griffith
14:59:37 and the cinder community should probably be closer to it as well
14:59:41 yes, the goal is to use his repo to bootstrap things
14:59:45 Irenab_: then make it available for k8s
15:00:01 sounds reasonable
15:00:20 very well
15:00:27 Thanks to everyone for joining!
15:00:36 #endmeeting
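As background to the os-brick discussion above: on the Python side, the host-side volume attach that fuxi relies on is essentially a couple of os-brick calls, and that is the functionality a Go port has to replace. A minimal sketch, assuming a connection_properties dict returned by Cinder's initialize_connection API and leaving out error handling and the actual format/mount step:

```python
from os_brick.initiator import connector

ROOT_HELPER = 'sudo'   # a real deployment would use oslo.rootwrap or privsep


def attach_volume(connection_properties):
    # connection_properties comes from Cinder's os-initialize_connection
    # call; its 'driver_volume_type' (e.g. 'iscsi', 'rbd') selects the
    # connector implementation.
    conn = connector.InitiatorConnector.factory(
        connection_properties['driver_volume_type'], ROOT_HELPER,
        use_multipath=False)
    device_info = conn.connect_volume(connection_properties)
    # device_info['path'] is the local block device that fuxi would then
    # format/mount and hand back through the docker volume API.
    return conn, device_info


def detach_volume(conn, connection_properties, device_info):
    conn.disconnect_volume(connection_properties, device_info)
```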