14:01:15 <apuimedo> #startmeeting kuryr
14:01:15 <openstack> Meeting started Mon Jan 30 14:01:15 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:18 <openstack> The meeting name has been set to 'kuryr'
14:01:29 <apuimedo> Hello everybody and welcome to another kuryr meeting!
14:01:34 <apuimedo> Who's here for the show?
14:01:37 <ivc_> o/
14:01:38 <janonymous> o/
14:01:39 <garyloug> o/
14:01:44 <irenab> hi
14:02:25 <apuimedo> ivc_: ltomasbo: ?
14:02:30 <janki> o/
14:02:33 <apuimedo> ivc_ is already here, I'm on the moon
14:02:36 <apuimedo> sorry
14:02:42 <ivc_> apuimedo :)
14:02:53 <apuimedo> alright then!
14:02:57 <apuimedo> let's move forward
14:03:04 <apuimedo> #topic kuryr-libnetwork
14:03:10 <ltomasbo> o/
14:04:03 <alraddarla> o/
14:04:19 <apuimedo> #info yedongcan's tag optimization patch seems to have progressed well https://review.openstack.org/#/c/420610/
14:04:32 <apuimedo> irenab: please review it when you get the chance
14:04:53 <irenab> apuimedo: ok
14:05:28 <Gideon> Can I ask about Magnum<>Kuryr integration?
14:05:39 <apuimedo> Gideon: sure thing
14:05:57 <Gideon> Is it planned for a specific OS release?
14:06:49 <apuimedo> #info yedongcan's https://review.openstack.org/419735 is also ready for merge
14:06:56 <apuimedo> tag support is improving :-)
14:07:21 <apuimedo> Gideon: If there are people working on it, we could probably do Pike for kuryr-libnetwork
14:07:37 <apuimedo> for kuryr-kubernetes it may be a bit tighter, but we'd like to get there too
14:08:06 <apuimedo> it's just that nobody is sponsoring this work directly at the moment, although hongbin has been pushing on it
14:08:16 <apuimedo> and we've been doing the groundwork necessary to tackle it
14:08:55 <apuimedo> I think the remaining work is outside our repos, so we have to do a bit of asking
14:09:48 <Gideon> Thanks. I'm from Nokia and we see added value in integrating both. I'll check with Redhat.
14:09:55 <apuimedo> #info janonymous's tls patch that adds TLS support for our ipam/rd driver seems ready https://review.openstack.org/#/c/410609/
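For reference on the TLS patch above, a minimal, hypothetical sketch of a libnetwork remote/ipam driver endpoint served over TLS. The /Plugin.Activate handshake and the port follow the libnetwork plugin convention, but the Flask-based wiring and the cert/key paths are illustrative assumptions, not necessarily how patch 410609 does it:

```python
# Hypothetical sketch: serving the libnetwork plugin API over TLS.
# The /Plugin.Activate handshake is part of the libnetwork protocol;
# the cert/key paths below are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/Plugin.Activate', methods=['POST'])
def activate():
    # Advertise which driver kinds this plugin implements.
    return jsonify({'Implements': ['NetworkDriver', 'IpamDriver']})

if __name__ == '__main__':
    # Passing a (cert, key) tuple makes Flask terminate TLS itself.
    app.run(host='0.0.0.0', port=23750,
            ssl_context=('/etc/kuryr/cert.pem', '/etc/kuryr/key.pem'))
```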
14:10:22 <apuimedo> Gideon: I remember you from the meeting we had
14:10:45 <apuimedo> ;-)
14:10:48 <Gideon> oh. The nicknames are not so clear...
14:11:00 <janonymous> I could not find the reason for the non-voting gate failure though...
14:11:04 <apuimedo> I'm the bearded PTL of Kuryr (who works at RH)
14:11:11 <janonymous> Haha
14:11:17 <apuimedo> janonymous: that's exactly what I was gonna ask you
14:11:22 <Gideon> Antony
14:11:27 <apuimedo> Gideon: yes, Antoni
14:11:38 <janonymous> apuimedo: i tried digging down, i saw only a busybox failure
14:11:51 <janonymous> nothing that might fail the gate though
14:12:25 <apuimedo> janonymous: could you run the fullstack tests locally on a new 16.04 machine and paste me the output?
14:12:42 <apuimedo> which brings up
14:12:56 <apuimedo> I would like to propose to make the fullstack tests voting
14:13:06 <irenab> janonymous: in k8s or libnetwork?
14:13:08 <apuimedo> I think they have been non-voting for enough time
14:13:09 <janonymous> apuimedo: sure, will do it tomorrow on top priority!
14:13:17 <apuimedo> irenab: kuryr-libnetwork
14:13:19 <apuimedo> thanks Jaivish
14:13:21 <irenab> ok
14:13:30 <janonymous> irenab: libnetwork
14:14:33 <apuimedo> alraddarla: what's the status on https://review.openstack.org/#/c/422394/
14:14:45 <apuimedo> I saw there was some discussion, but I think I probably missed some context
14:15:06 <alraddarla> Essentially that patch and mine could easily be merged. I was waiting to see what everyone wanted to do.
14:15:21 <alraddarla> I am more than happy to merge that patch with mine and put the co-author tag on.
14:15:31 <alraddarla> No one ever responded to my question though so I didn't want to overstep
14:15:47 <apuimedo> alraddarla: doesn't it make more sense to simply rebase rajiv's patch on top of yours?
14:16:53 <alraddarla> That would work as well! If you guys would like to merge mine
14:17:03 <alraddarla> https://review.openstack.org/#/c/424198/
14:17:13 <alraddarla> At least review it, I mean :)
14:17:50 <apuimedo> alraddarla: please, remember to add people as reviewers, otherwise sometimes we miss patches ;-)
14:18:13 <apuimedo> (just did it for this patch now)
14:18:38 <apuimedo> #action apuimedo irenab vikasc limao to review the reno support patch https://review.openstack.org/#/c/424198/
14:19:08 <apuimedo> #action rajiv rebase https://review.openstack.org/#/c/422394/2 on top of https://review.openstack.org/#/c/424198/
14:19:15 <apuimedo> Anything else on kuryr-libnetwork?
14:20:03 <apuimedo> alright then... Moving on!
14:20:04 <alraddarla> apuimedo, yes. My apologies. I forgot
14:20:13 <apuimedo> #topic fuxi
14:20:29 <apuimedo> hongbin is not here today since it's the Chinese new year's Holidays :-)
14:21:07 <apuimedo> #info The proposal to containerize Fuxi has been accepted for Pike https://blueprints.launchpad.net/kolla/+spec/containerized-fuxi
14:21:40 <apuimedo> #info Kuryr-kubernetes has been accepted as a subproject https://review.openstack.org/#/c/423791/
14:21:45 <apuimedo> fsck!!!
14:22:07 <apuimedo> #info fuxi-kubernetes has been accepted as a subproject https://review.openstack.org/#/c/423791/
14:22:31 <janonymous> but why fuxi-kubernetes?
14:22:33 <apuimedo> For those keeping score at home, k8s already has cinder integration, but only for when k8s runs on OpenStack VMs
14:22:38 <apuimedo> this is for bare-metal
14:22:58 <janonymous> sorry, but I wanted to know about fuxi-kubernetes, as kubernetes already has cinder support
14:23:16 <apuimedo> janonymous: as I stated above, for bare-metal cases
14:23:22 <janonymous> yeah
14:23:25 <apuimedo> and maybe for some functionality like snapshotting
14:23:30 <apuimedo> but we'll see about that one
14:23:30 <janonymous> apuimedo: thanks
14:23:36 <apuimedo> I want to keep the scope small
14:23:58 <apuimedo> We already have bare-metal kubernetes in scope, so it was logical to accept the proposal
14:24:16 <janonymous> yeah great!
14:24:21 <apuimedo> but going to replicate the in-tree functionality is going to be a harder sell :-)
14:24:31 <apuimedo> #topic kuryr-kubernetes
14:25:19 <irenab> apuimedo: regarding previous topic
14:25:20 <apuimedo> Thanks janonymous for https://review.openstack.org/#/c/424972/1
14:25:21 <apuimedo> :-)
14:25:33 <apuimedo> irenab: go ahead, sorry I closed it so fast
14:25:42 <irenab> fuxi is for libnetwork and fuxi-kubernetes is for k8s?
14:25:42 <janonymous> trivial bump :D
14:26:30 <apuimedo> irenab: yes. We should probably consider a rename of 'fuxi'
14:26:43 <apuimedo> to keep consistency with the networking naming
14:26:44 <irenab> apuimedo: yes, will be less confusing
14:27:10 <apuimedo> irenab: also. I'm not sure if fuxi folks shouldn't just add handlers and drivers to kuryr-kubernetes
14:27:29 <apuimedo> otherwise they are going to duplicate a good bit of effort
14:27:40 <apuimedo> maybe they can import kuryr-k8s in their repo
14:27:50 <apuimedo> but that's gonna be painful with the fast evolution
14:27:52 <irenab> apuimedo: I also would like to understand more about what is going to land there
14:28:28 <irenab> apuimedo: was there any email on this?
14:28:30 <apuimedo> irenab: it will be tackled on the vtg session
14:28:44 <apuimedo> there were some emails but for some reason they did not send them to the list
14:29:14 <irenab> ok, it just looks like it's getting crowded in the kuryr arena :-)
14:29:14 <apuimedo> I'll start a thread in the ml
14:29:43 <apuimedo> irenab: I still want to possibly help fuxi to move out of home, since it's starting to grow a beard
14:30:06 <apuimedo> #action apuimedo to send ML thread about fuxi-kubernetes
14:30:53 <apuimedo> #info Merging ivc_'s patch to move from id to selflink to ease troubleshooting https://review.openstack.org/#/c/423903/2
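To illustrate why the move from id to selflink eases troubleshooting, a small sketch; the object shapes below are assumptions for illustration, not the patch's actual code:

```python
# Illustrative only: a pod's uid is opaque, while its selfLink is a
# human-readable API path, so keying/logging by selfLink is easier to
# correlate with kubectl output when troubleshooting.
pod = {'metadata': {
    'uid': 'f2b9c1de-8a34-11e6-9d3a-fa163e8e26a1',  # opaque
    'selfLink': '/api/v1/namespaces/default/pods/frontend-1',
}}
key = pod['metadata']['selfLink']  # readable, unlike the uid
```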
14:31:45 <apuimedo> #action irenab to look at this dragonflow gate failure https://review.openstack.org/#/c/425597/1
14:32:04 <irenab> apuimedo: https://bugs.launchpad.net/dragonflow/+bug/1660346
14:32:04 <openstack> Launchpad bug 1660346 in DragonFlow "setting initial networking with df devstack fails" [Undecided,New]
14:32:16 <apuimedo> irenab: that was fast!
14:32:24 <irenab> :-)
14:32:32 <apuimedo> #info df nv gate affected by https://bugs.launchpad.net/dragonflow/+bug/1660346
14:33:05 <apuimedo> #action ivc_ to address comments to https://review.openstack.org/#/c/422946/
14:33:07 <apuimedo> :-)
14:33:50 <apuimedo> This patch helps us get a better understanding of resourceVersion conflicts instead of just failing when there's a mismatch
14:33:52 <apuimedo> thanks for that!
14:35:52 <apuimedo> #info ivc_ pushed a temp improvement for skipping events that would force us into repeating handler work https://review.openstack.org/#/c/422910/ It will make the handler 'actor' for an entity sleep to increase the likelihood of having the annotation by the time the next "irrelevant" event comes around
14:36:36 <apuimedo> The idea is to make some code in the k8s client that handles this case more gracefully, but for now it is a good improvement
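A minimal sketch of the sleep-and-retry idea described above, assuming a dict-shaped event; the annotation key, the delay, and the process() helper are illustrative, not the exact kuryr-kubernetes values:

```python
# Sketch only: if an event arrives before the VIF annotation has been
# written, back off briefly instead of failing, so a later event (or
# re-read) is likely to carry the annotation.
import time

RETRY_DELAY = 1    # seconds; illustrative tuning value
MAX_RETRIES = 3

def process(vif):
    # Stand-in for the real handler work (binding, etc.).
    print('handling VIF annotation:', vif)

def handle_event(event):
    for _ in range(MAX_RETRIES):
        annotations = event['object']['metadata'].get('annotations', {})
        vif = annotations.get('openstack.org/kuryr-vif')  # assumed key
        if vif is not None:
            return process(vif)
        time.sleep(RETRY_DELAY)  # give the annotation time to appear
    raise RuntimeError('annotation never appeared; giving up')
```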
14:36:53 <apuimedo> #action irenab ltomasbo vikasc to review https://review.openstack.org/#/c/422910/
14:37:15 <irenab> apuimedo: there is also a k8s-provided client in the incubation state, we may want to move to using it
14:37:16 <ltomasbo> ok, I'll take a look asap
14:37:25 <apuimedo> irenab: we do want to move to use it
14:37:46 <apuimedo> irenab: we have to propose it for openstack/requirements if we see it as likely to be maintained
14:37:57 <irenab> apuimedo: +1
14:38:06 <irenab> I think Magnum team is interested as well
14:38:17 <apuimedo> irenab: got a contact?
14:38:28 <apuimedo> (on who's been looking into it in the magnum side)
14:38:38 <irenab> apuimedo: saw some email on this, can check later
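For context, the client irenab mentions is the kubernetes-incubator Python client (later published on PyPI as `kubernetes`); a minimal watch loop with it looks roughly like this, though the incubation-era API may have differed:

```python
# Minimal pod-watch loop with the kubernetes Python client
# (kubernetes-incubator/client-python, published as `kubernetes`).
from kubernetes import client, config, watch

config.load_kube_config()  # or load_incluster_config() inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces):
    # event['type'] is ADDED/MODIFIED/DELETED; event['object'] is a V1Pod.
    print(event['type'], event['object'].metadata.name)
```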
14:39:04 <apuimedo> #info ltomasbo has been working on port reuse so that we reduce the cost of new pod addition to networks by using ports that were previously freed
14:39:10 <apuimedo> #link https://review.openstack.org/#/c/426687/
14:39:15 <apuimedo> irenab: thanks irena!
14:39:45 <ltomasbo> yes, but that is just initial work
14:39:55 <ltomasbo> I will update the blueprint with more detailed information
14:40:14 <irenab> apuimedo: http://lists.openstack.org/pipermail/openstack-dev/2017-January/111021.html
14:40:34 <apuimedo> #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/111021.html
14:40:38 <ltomasbo> and also push some changes to cover some of the TODOs
14:40:56 <irenab> ltomasbo: maybe it's better to update the existing devref or add a new one for this optimization
14:40:56 <apuimedo> basically ltomasbo's patch saves the neutron port creation (it still updates the port's name to ease troubleshooting), but the OVS binding still needs to happen because the veths get destroyed when K8s tears down the infra container
14:41:18 <apuimedo> ltomasbo: did you try what happens in OVS native mode if you use an internal port for the pod instead of a veth?
14:41:51 <apuimedo> irenab: ltomasbo: what do you think about a different devref about resource management?
14:41:58 <apuimedo> or is it better to use the same?
14:41:59 <irenab> +1
14:42:07 <irenab> better separate
14:42:10 <ltomasbo> apuimedo, I didn't try that
14:42:11 <apuimedo> (I'd probably use a different one and I'd link it from the original one)
14:42:29 <apuimedo> ltomasbo: don't let me distract you then :P
14:42:41 <ltomasbo> apuimedo, resource management for the ports pool?
14:43:14 <apuimedo> #action ltomasbo to push a basic resource management devref. Maybe even an etherpad or public gdocs
14:43:28 <ltomasbo> think it makes sense, yes
14:43:30 <irenab> ltomasbo: I would suggest maintaining ports per project id to be ready to extend for multi-tenancy
14:43:38 <apuimedo> ltomasbo: resource management in general, you take the first part "Port resources"
14:43:40 <apuimedo> :-)
14:44:01 <apuimedo> irenab: we'll have to account for policy too :P
14:44:17 <apuimedo> but we'll see about that when we get there
14:44:28 <apuimedo> with the name update we can probably update the sg
14:44:45 <apuimedo> the tenancy may need a separate grouping
14:44:52 <apuimedo> ivc_: any news on services?
14:45:05 <ltomasbo> maybe we can just have different pools for different tenants
14:45:09 <ltomasbo> and security groups
14:45:10 <apuimedo> Could you use some help with the UTs?
14:45:17 <irenab> ltomasbo: sounds good
14:45:36 <hongbin> o/
14:45:40 <apuimedo> ltomasbo: for tenants I'd say yes, for SGs let's see if it's not more trouble than it is worth
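To make the pooling discussion above concrete, a minimal, hypothetical sketch of port pools keyed by (project, security groups); the class and method names are illustrative, not the actual kuryr-kubernetes code, and a real port create would also need a network_id:

```python
# Hypothetical ports pool: freed Neutron ports are parked in
# per-(project_id, security groups) pools and handed back out
# instead of creating new ones.
import collections

class PortPool(object):
    def __init__(self, neutron_client):
        self._neutron = neutron_client
        self._available = collections.defaultdict(list)  # key -> [ports]

    @staticmethod
    def _key(project_id, security_groups):
        return (project_id, tuple(sorted(security_groups)))

    def acquire(self, project_id, security_groups, pod_name):
        key = self._key(project_id, security_groups)
        if self._available[key]:
            port = self._available[key].pop()
            # Fast path: reuse a freed port; only rename it so the
            # Neutron port is easy to correlate with the pod.
            self._neutron.update_port(port['id'],
                                      {'port': {'name': pod_name}})
            return port
        # Slow path: no pooled port, create a new one.
        # (A real create would also need network_id, etc.)
        body = {'port': {'name': pod_name,
                         'project_id': project_id,
                         'security_groups': list(security_groups)}}
        return self._neutron.create_port(body)['port']

    def release(self, project_id, security_groups, port):
        # Instead of deleting the port, park it for reuse.
        self._available[self._key(project_id, security_groups)].append(port)
```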
14:45:45 <apuimedo> hongbin: you should be at a party!
14:45:54 <irenab> sorry, need to leave
14:45:54 <hongbin> apuimedo: :)
14:45:54 <apuimedo> we already updated on fuxi :P
14:46:00 <apuimedo> irenab: thanks for joining!
14:46:01 <hongbin> apuimedo: sure. thx
14:46:27 <apuimedo> hongbin: we agreed to send an email to the ML about fuxi-kubernetes
14:46:37 <hongbin> apuimedo: sure thing
14:46:40 <apuimedo> we wondered if the work couldn't be done in kuryr-kubernetes
14:46:47 <apuimedo> since it's about adding handlers and drivers
14:47:02 <apuimedo> and it could then share the controller code
14:47:04 <hongbin> i am open for the community input
14:47:08 <apuimedo> but it needs further discussion
14:47:14 <hongbin> sure
14:48:17 <apuimedo> hongbin: also, Gideon was asking about the kuryr magnum integration. I wonder if you could put some email on the ML about the missing pieces so people can sign up for them
14:48:33 <apuimedo> (asking you since you are the most magnum knowledgeable guy we have :P )
14:48:41 <hongbin> apuimedo: about libnetwork or kubernetes?
14:49:12 <hongbin> apuimedo: sure, i will try that
14:49:13 <apuimedo> hongbin: both :-)
14:49:19 <hongbin> apuimedo: ack
14:49:34 <apuimedo> thanks a lot hongbin!
14:49:41 <hongbin> apuimedo: np
14:49:42 <apuimedo> and thanks for joining even on holidays
14:49:58 <apuimedo> #topic general discussion
14:50:06 <apuimedo> Does anybody else have a topic to bring up?
14:50:14 <janonymous> yes
14:50:15 <alraddarla> Question: We use Contrail at AT&T... have you all started any integration with Contrail? If yes, can we help? If not, are you interested?
14:50:38 <alraddarla> typos...it's definitely Monday morning for me
14:51:36 <apuimedo> alraddarla: we have not started and of course we are interested!
14:51:55 <apuimedo> alraddarla: for kuryr-libnetwork, kuryr-kubernetes or both?
14:52:17 <alraddarla> kuryr-kubernetes....potentially both eventually
14:52:35 <apuimedo> very well
14:52:49 <apuimedo> in that case, let's go over it now on #openstack-kuryr
14:52:51 <apuimedo> :-)
14:52:57 <apuimedo> (after the meeting)
14:53:01 <apuimedo> Anything else?
14:53:42 <ltomasbo> I sent this a week ago or so
14:54:13 <janonymous> apuimedo: It would be great if the 28th Feb meeting could be rescheduled, if possible.
14:54:21 <ltomasbo> I sent this a week ago or so https://review.openstack.org/#/c/422641
14:54:51 <apuimedo> janonymous: you mean the virtual team gathering?
14:55:12 <janonymous> yeah, but that is completely up to the community; I would suggest 1, 2, 3 March
14:55:18 <apuimedo> ltomasbo: it seems you are blocked on unit tests
14:55:19 <apuimedo> :P
14:55:45 <ltomasbo> there is no unit test for any seg_driver
14:55:53 <apuimedo> janonymous: please send the date change proposal as a reply to the mailing list thread I sent on the virtual team gathering so other people can weigh in
14:56:18 <apuimedo> ltomasbo: hence the 'add' of irena
14:56:20 <apuimedo> :P
14:56:21 <janonymous> apuimedo: sure,thanks
14:56:40 <apuimedo> She doesn't mean modify, but add a new one :P
14:56:46 <apuimedo> you can do it as follow-up
14:57:43 <apuimedo> very well, closing time https://www.youtube.com/watch?v=7-0lV5qs1Qw
14:57:50 <apuimedo> Thank you all for joining!
14:57:55 <apuimedo> #endmeeting