14:00:38 <apuimedo> #startmeeting kuryr
14:00:39 <openstack> Meeting started Mon May 29 14:00:38 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:43 <openstack> The meeting name has been set to 'kuryr'
14:00:49 <apuimedo> Welcome to Kuryr's weekly IRC meeting
14:00:52 <apuimedo> who's here today?
14:01:05 <ivc_> o/
14:01:09 <ltomasbo> o/
14:01:10 <vikasc> o/
14:02:20 <dmellado> o/
14:02:56 <mchiappero> o/
14:03:21 <kzaitsev_ws> \o
14:03:36 <apuimedo> Thank you all for joining!
14:03:41 <apuimedo> let's get started
14:03:47 <apuimedo> #topic kuryr-libnetwork
14:04:11 <apuimedo> #info we've moved to use devstack's etcd as the cluster store
14:04:35 <apuimedo> this eases the burden on our devstack plugin and makes us play nice with other services that may need it
14:04:47 <dmellado> etcd3?
14:04:50 <apuimedo> yes
14:04:54 <dmellado> cool then ;)
14:05:05 <apuimedo> The same should be done for docker, now that there's a separate docker plugin for devstack
14:05:32 <apuimedo> however, it currently doesn't let you choose the origin of the docker repo or the version, so that may be dangerous
14:07:05 <apuimedo> I don't foresee many more things before p3 and eventually for the pike cycle
14:07:22 <apuimedo> although having global scope nets would be really good
14:07:30 <apuimedo> Anything else on kuryr-libnetwork?
14:08:47 <apuimedo> alright! Moving on ;-)
14:08:55 <apuimedo> #topic kuryr-kubernetes
14:09:59 <apuimedo> #info started getting driver/handler opts out of the main config and on to their respective modules with https://review.openstack.org/450113
14:10:14 <apuimedo> This should be subject to a refactor later
14:10:43 <apuimedo> We have a problem that the CI doesn't catch configuration generation issues
14:10:58 <apuimedo> #action apuimedo to file a bug on catching opt generation bugs in CI
14:11:10 <dmellado> heh, apuimedo thanks for that
14:11:19 <dmellado> I was about to file it too
14:11:20 <apuimedo> We've been getting a lot of improvements on devstack lately
14:11:44 <dmellado> we should include the oslo_config_generator check on some gate
14:11:49 <apuimedo> #info It is now configurable whether devstack should set the neutron defaults
14:11:59 <apuimedo> dmellado: I'd prefer it's a separate get
14:12:05 <apuimedo> *test
14:12:11 <apuimedo> darn
14:12:11 <kzaitsev_ws> wanted to thank irenab and ivc_ for reviews of my sriov spec and code.
14:12:14 <apuimedo> gate
14:12:19 <apuimedo> not test, not get, gate
14:12:25 <dmellado> apuimedo: lol
14:12:40 <dmellado> we could create one, but it might be a real quick one, heh
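For reference, the pattern adopted in https://review.openstack.org/450113 is for each driver/handler to keep its options in its own module and expose a list_opts() hook that oslo-config-generator (and the gate check dmellado proposes) can discover. A minimal sketch, with option names that are illustrative rather than kuryr-kubernetes' actual ones:

    # Sketch of per-module option registration for oslo.config.
    # Option names and the 'vif_pool' group are hypothetical.
    from oslo_config import cfg

    vif_pool_opts = [
        cfg.IntOpt('ports_pool_min', default=5,
                   help='Minimum number of free ports to keep in the pool.'),
        cfg.IntOpt('ports_pool_max', default=0,
                   help='Maximum ports in the pool; 0 means unbounded.'),
    ]

    cfg.CONF.register_opts(vif_pool_opts, group='vif_pool')


    def list_opts():
        # Wired into setup.cfg under the oslo.config.opts entry point;
        # oslo-config-generator imports this to build the sample config,
        # so a gate job running the generator catches registration bugs.
        return [('vif_pool', vif_pool_opts)]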
14:12:48 <apuimedo> kzaitsev_ws: we'll get there
14:12:50 <apuimedo> :-)
14:13:09 <kzaitsev_ws> oh, ok =) not rushing then )
14:13:40 <apuimedo> #info There are some patches under review to make it possible for baremetal devstack deployments to be more functional. You'll be able to run kube-dns, dashboard and so on
14:14:16 <apuimedo> ok, next to the reviews part
14:14:19 <apuimedo> kzaitsev_ws: almost there
14:14:23 <ivc_> apuimedo kubeadmin?
14:14:34 <apuimedo> ivc_: kubeadm deployment?
14:14:39 <ivc_> aye
14:14:43 <apuimedo> ivc_: not yet
14:14:47 <dmellado> kubeadm's not there yet
14:14:58 <dmellado> but we should be moving towards that
14:15:06 <dmellado> as hyperkube seems to be getting less and less stable
14:15:08 <apuimedo> I decided to postpone it a bit until we can use the standalone devstack docker plugin
14:15:35 <apuimedo> I already checked out what's necessary (they messed a lot with how to pass config to specify things like existing etcd)
14:15:54 <apuimedo> so as soon as we make docker plugin source configurable, we can get started with the move
14:17:11 <apuimedo> #info mchiappero: gloug's macvlan effort is practically ready
14:17:36 <apuimedo> there's just a couple of comments on the readme that was added at the end of the week
14:17:40 <apuimedo> and after that we should merge it
14:18:08 <mchiappero> yes, sorry, Gary started working on the documentation but he's out of office these days
14:18:24 <apuimedo> mchiappero: no worries. But I'd like to merge asap to avoid rebases
14:18:27 <mchiappero> I rebased but forgot to edit the README file; I'm working on it now, should be ready in minutes
14:18:31 <apuimedo> we count on kzaitsev_ws to reproduce as well
14:18:33 <apuimedo> ;-)
14:18:51 <dmellado> mchiappero: pls put up the README file, I'd love to have a look into it ;)
14:18:55 <apuimedo> #info sriov spec discussion progresses but there's still contention on multiple handlers
14:19:03 <apuimedo> dmellado: there is a readme
14:19:08 <apuimedo> it just misses a couple of things
14:19:11 <dmellado> ah, cool then
14:19:18 <mchiappero> (sorry but I cannot change the workflow for that patch)
14:19:33 <apuimedo> mchiappero: ?
14:19:53 <apuimedo> ivc_: kzaitsev_ws: irenab: what's the argument against multiple handlers for a single K8s object?
14:20:15 <apuimedo> I ask earnestly because I can almost only see positive aspects to that approach
14:20:37 <kzaitsev_ws> there's no argument against them per se. It's just that irenab and ivc_ believe they should not apply here
14:20:39 <mchiappero> i mean, I knew the previous patchset was not complete and the README was still in progress but I could not set -1 to workflow
14:21:14 <ivc_> apuimedo there's no reason to duplicate code. sr-iov is just a special case of multi-network pods (that were already discussed)
14:21:22 <kzaitsev_ws> but rather sriov can be used as basis for multi-vif pods
14:22:37 <apuimedo> mchiappero: ah, no worries on that
14:22:39 <ivc_> and eventually the sriov/multi-vif handler will become one with the current single-vif handler
14:22:39 <apuimedo> ;-)
14:23:50 <apuimedo> ivc_: I can't see it duplicating much
14:24:09 <apuimedo> handlers receive an already well-formed event and then do something about it
14:24:36 <ivc_> right now it uses different annotations. i'd prefer the sriov pod annotation to be stored in the same vif annotation
14:24:45 <apuimedo> if that implies a lot of duplication we probably need to move things around
14:24:46 <kzaitsev_ws> I don't have anything agains this approach (extending vif with multi-vif capabilities), so I'm ok with both options
14:24:49 <Irenab_> Me too
14:25:07 <apuimedo> ivc_: do you mean a single vif annotation for each vif of a multi homed pod?
14:25:22 <apuimedo> or a list of those objects, to be more precise?
14:25:36 <kzaitsev_ws> apuimedo: rather an annotation, that holds a list of vifs
14:25:38 <ivc_> apuimedo i mean updated vif annotation will be a list/dict instead of single object
14:26:26 <Irenab_> This breaks current contract with cni
14:26:55 <ivc_> cni will have to be updated too
14:26:55 <apuimedo> Irenab_: that's true
14:27:13 <apuimedo> right
14:27:25 <apuimedo> but we have to do it in this cycle
14:27:27 <Irenab_> we need to be consistent across both parts
14:27:32 <ivc_> but that is inevitable anyway if we were to support multi-vif pods
14:27:37 <vikasc> true
14:27:41 <apuimedo> ivc_: Irenab_: right
14:27:43 <kzaitsev_ws> also ivc_ mentioned that it is possible to go with a 2-phase approach: phase 1) polish the current approach and agree on the specifics of multi-vif; phase 2) get rid of the sriov handler and use multi-vif for everything
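To make the annotation change concrete: today the pod carries a single serialized VIF under one annotation key, and the multi-vif proposal turns that value into a collection. A hypothetical sketch (the key name follows kuryr-kubernetes' convention, but the multi-vif shape was not settled at this meeting):

    # Illustrative only: single-VIF annotation today vs. a possible
    # multi-VIF shape.  The exact format was still under discussion.
    single_vif_annotation = {
        'openstack.org/kuryr-vif': '<serialized neutron VIF>',
    }

    multi_vif_annotation = {
        'openstack.org/kuryr-vif': [
            '<serialized default neutron VIF>',
            '<serialized sriov VIF>',
        ],
    }

As Irenab_ notes above, the CNI side parses the single-object form today, so both ends of the contract have to switch together.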
14:28:20 <apuimedo> I feel like this is a bit like ml2 drivers vs plugins
14:28:22 <apuimedo> :-)
14:28:32 <Irenab_> indeed :-)
14:28:56 <apuimedo> Personally I'd like both approaches to be possible
14:28:59 <dmellado> heh
14:29:22 <apuimedo> those that are closest to dev can live in the multi vif handler
14:29:41 <apuimedo> kzaitsev_ws: ultimately, for me, it's about how you want to get the ball rolling
14:29:44 <ivc_> apuimedo they are possible, but sriov fits the vifhandler design for the most part and don't forget e.g. pooling support for vifhandler
14:29:57 <apuimedo> if it's separate handler and then we move it to multi vif, that's fine for me
14:30:06 <apuimedo> if you are willing to drive the multi vif handler
14:30:09 <apuimedo> I'll be even happier
14:30:11 <apuimedo> xD
14:30:15 <ivc_> i'm sure sriov could benefit from pools too and duplicating that effort across 2 handlers is meh
14:30:23 <Irenab_> i agree with ivc on the direction
14:30:37 <apuimedo> ivc_: it's a bit more special pooling though :-)
14:30:49 <vikasc> i also like going with multi-vif straight away
14:30:52 <apuimedo> but I agree that since they are neutron vifs, it makes a lot of sense
14:30:57 <vikasc> rather than the two-phased one
14:31:11 <apuimedo> kzaitsev_ws: how do you see it?
14:31:54 <Irenab_> vikasc, we may have multi-vif support before k8s
14:31:55 <ivc_> vikasc multi-vif requires quite a bit of effort, so splitting the effort in 2 phases does not sound unreasonable
14:32:08 <kzaitsev_ws> like I said I don't have anything against the multi-vif approach. shouldn't be that many changes.
14:32:47 <vikasc> noted.
14:32:50 <apuimedo> ivc_: kzaitsev_ws: I think it should be doable before p3
14:32:54 <Irenab_> shall we have multi vif spec rolling?
14:33:01 <apuimedo> if it is not there by p3 we'll get into danger zone
14:33:06 <ivc_> that is if we want an experimental/poc sriov support in tree before multi-vif
14:33:30 <apuimedo> ivc_: afaik kzaitsev_ws already has experimental code working
14:33:36 <kzaitsev_ws> in case agreeing on multi-vif specifics starts taking too much time — I would start whining about the two-phase thing =))
14:33:48 <ivc_> apuimedo that's my point - we already have the code for non-multi-vif
14:33:49 <apuimedo> I think leaving it for now as a [PoC] on gerrit would bring value
14:33:50 <Irenab_> :-)
14:34:00 <apuimedo> Irenab_: I will start a blueprint
14:34:08 <apuimedo> I'm on a roll making blueprints
14:34:09 <apuimedo> :-)
14:34:18 <ltomasbo> xD
14:34:28 <vikasc> hehe
14:34:30 <kzaitsev_ws> p3 is Jul 24-28, so 2 months from now )
14:34:35 <apuimedo> right
14:34:37 <Irenab_> cool, we can chat on design next week
14:35:16 <ivc_> btw do we plan daemon-exec cni split for P or Q ?
14:35:20 <apuimedo> if we're not done by then with multi-vif I'll eat a full plate of vichyssoise
14:35:29 <apuimedo> ivc_: I'd like to have P
14:35:35 <apuimedo> I already made the blueprint
14:35:46 <apuimedo> and janonymous said he'd start work on it
14:35:58 <apuimedo> (and there's little I like less than vichyssoise)
14:36:09 <ltomasbo> apuimedo, do you have the link?
14:36:09 <dmellado> apuimedo: no wonder
14:36:13 <apuimedo> sure
14:36:19 <ltomasbo> (not of the vichyssoise recipe)
14:36:37 <apuimedo> https://blueprints.launchpad.net/kuryr-kubernetes/+spec/cni-split-exec-daemon
14:36:41 <ltomasbo> thanks!
14:36:42 <apuimedo> #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/cni-split-exec-daemon
14:37:22 <apuimedo> anything else on multi vif?
14:37:29 <hongbin> o/
14:37:35 <kzaitsev_ws> So, what's the summary? we make multi-vif then sriov?
14:37:45 <kzaitsev_ws> or multi-vif as part of sriov?
14:37:48 <apuimedo> kzaitsev_ws: let me put it in points
14:38:07 <apuimedo> 1. Post the current working code for sriov as [PoC]-tagged patches
14:38:19 <apuimedo> 2. Draft the spec on multi-vif
14:38:38 <apuimedo> 3. Implement multi-vif with sriov
14:38:49 <apuimedo> 4. pray for 3rd party CI for sriov
14:38:51 <apuimedo> :-)
14:39:00 <dmellado> for 4) pray even harder
14:39:02 <dmellado> xD
14:39:39 <apuimedo> on (2) I meant blueprint. I don't think a spec is in order
14:39:56 <kzaitsev_ws> phew =)
14:40:00 <ivc_> apuimedo i have mixed feelings about "http server on the unix domain socket" from daemon-exec bp
14:40:02 <Irenab_> Maybe devref
14:40:09 <apuimedo> ltomasbo: I think you wanted to talk about multi-network
14:40:14 <apuimedo> ivc_: oh
14:40:23 <ltomasbo> apuimedo, yes
14:40:28 <apuimedo> ivc_: please, express the bad feelings
14:40:36 <apuimedo> ltomasbo: wait a moment
14:40:40 <Irenab_> to make sure it will be obvious what the next implementation should follow
14:40:40 <ltomasbo> wanted to talk about it
14:40:41 <ltomasbo> ok
14:40:55 <kzaitsev_ws> anyways. I'm going to update my spec tomorrow(or the day after) and would try to mention things around multi-vif
14:41:04 <kzaitsev_ws> unless someone beats me to it =)
14:41:49 <ivc_> apuimedo http/rest feels like an overkill for a trivial rpc over uds
14:41:49 <kzaitsev_ws> Irenab_: it would either be part of my spec or a separate devref/spec anyway
14:41:59 <Irenab_> great
14:42:09 <apuimedo> kzaitsev_ws: alright. Sorry about the need to change it again
14:42:25 <Irenab_> lets follow up at the review
14:42:50 <dmellado> ivc_: what would you propose then?
14:42:57 <apuimedo> ivc_: I chose it because it is what docker already uses, so principle of least surprise
14:43:19 <apuimedo> but yes. I'm curious about what you propose as well :-)
14:43:40 <apuimedo> hongbin: we'll get to fuxi in a moment
14:43:48 <hongbin> apuimedo: ack
14:44:11 <ivc_> apuimedo something akin to python's multiprocessing module's ipc
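For the record, the alternative ivc_ points at lives in the standard library: multiprocessing.connection provides pickle-based request/response over a unix domain socket with no HTTP machinery. A minimal sketch, with a made-up socket path and message format:

    # Sketch: trivial RPC over a unix domain socket using the stdlib,
    # as an alternative to an HTTP/REST server on the same socket.
    # Socket path and message shape are hypothetical.
    from multiprocessing.connection import Client, Listener

    SOCK = '/run/kuryr/cni.sock'

    def daemon():
        # Long-running CNI daemon side.
        with Listener(SOCK, family='AF_UNIX') as listener:
            while True:
                with listener.accept() as conn:
                    request = conn.recv()          # e.g. {'cmd': 'ADD', ...}
                    conn.send({'status': 'ok', 'echo': request})

    def cni_exec(request):
        # Short-lived CNI executable side.
        with Client(SOCK, family='AF_UNIX') as conn:
            conn.send(request)
            return conn.recv()

apuimedo's counter-argument above is parity: docker already exposes HTTP over a unix socket, so REST is the least-surprise choice.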
14:44:12 <janonymous> o/ sry late
14:44:43 <dmellado> before we get there, I'd also like to ask who would be available to talk about functional testing next week. Irenab_ I know you're almost off this week, but would next week's Wed 15:00 CEST work?
14:45:00 <dmellado> ivc_: kzaitsev_ws mchiappero ltomasbo ^^
14:45:11 <apuimedo> ivc_: I'll have to check it out. I do not remember what they use
14:45:25 <apuimedo> dmellado: I will be available
14:45:33 <ltomasbo> dmellado, works for me
14:45:34 <kzaitsev_ws> I should be available — my Wednesdays are mostly free
14:45:34 <Irenab_> dmellado, I will check the time. The day is fine
14:45:35 <dmellado> apuimedo: I already counted you in, that's why I didn't ask
14:45:40 <dmellado> :D
14:46:04 <Irenab_> a bit earlier will be better
14:46:06 <apuimedo> ivc_: I'll check it and ping you
14:46:07 <ivc_> CEST is +2?
14:46:13 <apuimedo> ivc_: that's right
14:46:14 <dmellado> ivc_: yep
14:46:22 <apuimedo> ltomasbo: go ahead with multi network
14:46:25 <apuimedo> we have 4 mins
14:46:27 <apuimedo> before fuxi
14:46:31 <ltomasbo> hehe
14:46:32 <ltomasbo> ok
14:46:33 <dmellado> let me rephrase it to 13:00 UTC, just in case ;)
14:46:40 <apuimedo> dmellado: much better
14:46:49 <dmellado> FYI I've put a draft here, comments are welcome
14:46:52 <dmellado> https://github.com/danielmellado/kuryr-tempest-plugin
14:46:54 <ltomasbo> just want to check what was the opinion on allowing kuryr-kubernetes to use multiple subnets
14:47:04 <ltomasbo> or to create the subnet if it does not exist
14:47:09 <ltomasbo> similarly to kuryr-libnetwork
14:47:19 <dmellado> ltomasbo: what would the usecase for that be?
14:47:23 <Irenab_> subnet per namespace?
14:47:25 <apuimedo> ltomasbo: You should get up to date on the PoC going on upstream k8s
14:47:31 <apuimedo> about using annotations for multi network
14:47:42 <apuimedo> and if possible, make our implementation match what they are doing
14:47:50 <dmellado> +1
14:47:52 <ltomasbo> apuimedo,  that would be great!
14:47:53 <apuimedo> so that we are an implementation of what will eventually settle
14:48:10 <ivc_> apuimedo ltomasbo maybe get in touch with kzaitsev_ws about multi network support
14:48:12 <apuimedo> Irenab_: I'd rather we go head first into multiple nets as in upstream
14:48:12 <ltomasbo> my current use case is from the NFV world
14:48:13 <dmellado> ltomasbo: we can have a look at that, + going to k8s slack at some point
14:48:30 <ltomasbo> where you may have a template defining your network service (with or without multi network support)
14:48:35 <Irenab_> just wanted to understand the requirement
14:48:43 <ltomasbo> but that needs to deploy some networks/subnets and create pods on them
14:48:51 <apuimedo> Irenab_: upstream they decided to do multiple annotations IIRC
14:48:59 <apuimedo> so it should match our workflow nicely
14:49:04 <Irenab_> its implementation,
14:49:23 <vikasc> annotations + TPR
14:49:26 <Irenab_> yes, this is the direction
14:49:51 <ltomasbo> dmellado, agreed
14:50:01 <apuimedo> vikasc: can you remind me what TPR stands for?
14:50:11 <vikasc> third party resources
14:50:11 <ltomasbo> I was about to ask the same...
14:50:16 <ltomasbo> ahh, right
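As a rough illustration of the upstream PoC direction mentioned here (networks defined as third-party resources and attached to pods via an annotation), the pod side looks approximately like this; the key names are illustrative, not a final API:

    # Approximate shape of the upstream K8s multi-network PoC:
    # extra networks are TPR-defined objects, and a pod selects them
    # through an annotation.  Names are illustrative, not a final API.
    pod_metadata = {
        'name': 'nfv-pod',
        'annotations': {
            # JSON list of additional networks to attach, each naming
            # a network object defined as a third-party resource.
            'networks': '[{"name": "control-net"}, {"name": "data-net"}]',
        },
    }

A kuryr handler following this would resolve each named network to a Neutron network/subnet and request one VIF per entry, which is where this thread meets the multi-vif discussion above.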
14:50:27 <apuimedo> vikasc: ltomasbo: kzaitsev_ws: you three should probably sync up on this
14:50:35 <vikasc> +1
14:50:37 <dmellado> heh, yeah
14:50:41 <ltomasbo> +1
14:50:41 <kzaitsev_ws> agree
14:50:42 <dmellado> I checked and it didn't make any sense
14:50:44 <dmellado> http://www.urbandictionary.com/define.php?term=TPR
14:50:46 <vikasc> may be we  can schedule a discussion
14:50:54 <apuimedo> although it probably deserves a videoconf
14:51:04 <dmellado> apuimedo: maybe we can cover both that and FT
14:51:06 <Irenab_> multi net?
14:51:09 <dmellado> in next week's
14:51:19 <apuimedo> dmellado: sounds good
14:51:22 <apuimedo> you have 20 minutes
14:51:34 <apuimedo> multi net has 35
14:51:35 <ltomasbo> xD
14:51:38 <dmellado> lol
14:51:49 <Irenab_> dmellado, 12:00 UTC is better for me
14:51:55 <apuimedo> ltomasbo: vikasc: kzaitsev_ws: try to have checked upstream direction and PoC status by then
14:51:56 <dmellado> Irenab_: works for me as well
14:52:04 <dmellado> I'll send the invite 12:00 UTC then
14:52:05 <apuimedo> and have some reference links
14:52:18 <vikasc> apuimedo, sure
14:53:17 <apuimedo> alright
14:53:21 <apuimedo> #topic fuxi
14:53:29 <apuimedo> #chair hongbin
14:53:30 <openstack> Current chairs: apuimedo hongbin
14:53:34 <hongbin> hi all
14:54:00 <hongbin> last week, we had a meeting with john griffith, dims, hyper folks, and others
14:54:11 <hongbin> we discussed the plan for k8s storage
14:54:17 <hongbin> here are the logs of the meeting
14:54:23 <hongbin> #link http://eavesdrop.openstack.org/meetings/fuxi_stackube_k8s_storage/2017/fuxi_stackube_k8s_storage.2017-05-23-14.02.log.html
14:54:41 <hongbin> as a follow-up, we will create a new repo for fuxi-go
14:54:52 <hongbin> there is a proposal for that
14:54:58 <hongbin> #link https://review.openstack.org/#/c/468635/
14:55:00 <apuimedo> #info fuxi golang port repo submitted for review
14:55:12 <apuimedo> this is for docker volume api
14:55:22 <hongbin> yes
14:55:39 <hongbin> we need to deal with the lack of os-brick in golang as well
14:55:57 <hongbin> because the fuxi-go will depend on os-brick to do the volume connection
14:56:20 <hongbin> therefore, we will create another repo to port os-brick to go
14:56:22 <hongbin> #link https://review.openstack.org/#/c/468635/
14:56:52 <hongbin> that is about everything from my side
14:56:59 <hongbin> comments?
14:57:13 <apuimedo> hongbin: can os-brick be called as a command?
14:57:26 <hongbin> apuimedo: right now, no
14:57:30 <kzaitsev_ws> #link https://review.openstack.org/#/c/468536/
14:57:40 <kzaitsev_ws> for reference )
14:58:25 <hongbin> apuimedo: the alternative is to turn os-brick into a service, but after discussing with the cinder team, we decided to port the whole library to go
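For background, the Python connector API that fuxi leans on, and that has no golang counterpart yet, is used roughly like this (simplified; the connection properties come from Cinder's initialize_connection in real code):

    # Rough sketch of os-brick's connector API; this is the dependency
    # with no golang equivalent.  Values are placeholders.
    from os_brick.initiator import connector

    conn = connector.InitiatorConnector.factory(
        'ISCSI',              # protocol from Cinder's connection info
        'sudo',               # root helper for privileged commands
        use_multipath=False,
    )

    connection_properties = {}  # returned by Cinder in real usage
    device_info = conn.connect_volume(connection_properties)
    print(device_info.get('path'))  # local block device for the volume

Because each connector shells out to host tools through a root helper, wrapping the library as a command or service was the alternative considered before deciding on a full port.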
14:58:26 <Irenab_> the reason for go is to move it to K8s later?
14:58:37 <apuimedo> thanks kzaitsev_ws!
14:58:50 <apuimedo> Irenab_: it's to unify the different efforts
14:59:17 <apuimedo> we should be able to import a lot of the code from j-griffith
14:59:37 <apuimedo> and the cinder community should probably be closer to it as well
14:59:41 <hongbin> yes, the goal is to use his repo to bootstrap things
14:59:45 <apuimedo> Irenab_: then make it available for k8s
15:00:01 <Irenab_> sounds reasonable
15:00:20 <apuimedo> very well
15:00:27 <apuimedo> Thanks to everyone for joining!
15:00:36 <apuimedo> #endmeeting