14:01:00 #startmeeting kuryr
14:01:01 Meeting started Mon Mar 6 14:01:00 2017 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:04 The meeting name has been set to 'kuryr'
14:01:23 Hello everybody
14:01:30 o/
14:01:31 o/
14:01:32 o/
14:01:32 o/
14:01:35 o/
14:01:36 o/
14:01:38 o/
14:01:39 welcome to the first official Pike cycle weekly IRC meeting
14:02:55 #topic VTG
14:03:06 o/
14:03:08 o/
14:03:22 hi
14:03:30 I have to say I enjoyed the Virtual Team Gathering. I hope it was not too strenuous to follow and join
14:03:43 and that the recordings and boards were helpful
14:04:23 apuimedo: I agree with you, my experience was positive too
14:04:35 it was my first one, but I liked it too!
14:04:42 o/
14:04:48 ltomasbo: it was the first one for everybody
14:04:49 :P
14:04:57 I'd like to propose a meeting tomorrow at 14 UTC to go over the action items
14:04:58 :D
14:05:06 does that time work?
14:05:25 +1
14:05:31 +1
14:05:38 can we shift 30 mins earlier?
14:05:45 +1 irenab
14:05:47 irenab: fine for me
14:06:02 would prefer 30 mins earlier
14:06:09 mchiappero: janonymous: ivc_: does that work for you, 13:30 UTC tomorrow?
14:06:13 +1
14:06:13 ok
14:06:18 ok
14:06:38 +1
14:07:13 #info March 7th 13:30 UTC VTG Action Item sorting session.
14:07:22 thanks!
14:07:28 #action apuimedo to send the VTG Action Item sorting session to the mailing list
14:07:39 Anything else about the VTG?
14:07:39 apuimedo: as long as it is an hour long, i'm ok with that
14:07:50 it is
14:08:25 apuimedo: please share the list before, so we can check what to pick before the meeting
14:08:39 irenab: it will be in the email to the ml
14:08:41 :-)
14:08:49 thanks
14:08:54 #topic fuxi
14:08:58 #chair hongbin
14:08:58 Current chairs: apuimedo hongbin
14:08:59 sorry, i actually have to step away.
will catch up in a bit
14:09:08 hi
14:09:13 alraddarla: very well, if there is any question reach out on the ml
14:09:16 or irc
14:09:33 hongbin: any fuxi updates?
14:09:38 for fuxi, there are several patches landed last week
14:09:53 and we are proposing the first release
14:09:56 #link https://review.openstack.org/#/c/441522/
14:10:04 :-)
14:10:28 that would be a good first release for fuxi
14:10:39 That's really good news
14:10:45 apuimedo: that is all from me
14:10:55 #info A first fuxi release, 0.1.0, has been proposed
14:11:17 hongbin: At some point we should define what 1.0.0 should be
14:11:26 and if that should be in the Pike cycle
14:11:36 apuimedo: works for me
14:11:40 thanks
14:11:55 anybody else has some question about fuxi?
14:12:48 #topic kuryr-libnetwork
14:13:08 We had a very nice series of contributions this week
14:13:16 fixing bugs and adding blueprints
14:13:18 :-)
14:13:58 regarding IPv6 support https://blueprints.launchpad.net/kuryr-libnetwork/+spec/ipv6-subnet
14:14:14 #info oslo debug bugfix in https://review.openstack.org/436523
14:14:36 #info gateway ip retrieval fix https://review.openstack.org/436705
14:14:51 #info mac address setting fix https://review.openstack.org/432777
14:15:01 irenab: please go ahead
14:15:12 I suggest we address IPv6 support as a whole
14:15:35 currently there are some patches to add support for existing IPv6 subnetpools
14:15:48 I'm fine with delaying 1.0.0 for ipv6 support
14:15:54 in fact, I propose we do just that
14:16:09 sounds reasonable to me
14:16:26 at the very least, I want the support for dual stack networking that hongbin has been pushing
14:16:55 apuimedo: but this is fragmented, isn't it?
14:16:58 irenab: I also saw your comment to hongbin's https://review.openstack.org/#/c/426595/
14:17:26 irenab: sort of.
It only enables dual stack
14:17:32 but not pure ipv6
14:17:33 irenab: it looks like the support for existing ipv6 and kuryr-created ipv6 will be very different
14:18:05 apuimedo: dual stack only if it's a neutron-created subnet
14:18:15 irenab: yes
14:18:32 it is more like a bug fix than a new feature (i think)
14:18:46 I think that what hongbin proposes in regard to the change of semantics for specifying the subnetpool makes sense. Because it is consistent with what we do for specifying networks
14:18:53 I think IPv6 has never been a priority till now
14:19:19 apuimedo: I will revisit my comment, need to recheck
14:20:18 as long as there are no users, I guess we can change the API semantics, but in general I prefer to avoid it
14:20:30 I know
14:20:54 I'll take a second look
14:21:04 but the consistency is important for reaching 1.0.0
14:21:25 so supporting containers on existing IPv6 subnets will be included in 1.0.0?
14:21:33 yes
14:21:45 but no docker driver IPv6 subnet
14:21:46 and I'd also like creating new ones
14:22:12 apuimedo: what is the expected timeline for 1.0.0?
14:22:22 yedongcan: hongbin: limao: is creating new ipv6 subnets something either of you want to work on?
14:22:36 irenab: asap
14:22:50 as soon as we support ipv6 we cut the release
14:22:55 apuimedo: yes, i saw yedongcan took the bp, i can work with him on this
14:23:01 perfect
14:23:19 hongbin: yedongcan: I love the collaboration you have
14:23:20 :-)
14:23:35 apuimedo, hongbin: thanks, will do that.
14:23:40 apuimedo: It is ok for me if IPv6 is not in 1.0.0
14:23:54 apuimedo: to cut 1.0.0, what is the DoD criteria?
14:24:06 Full stack tests?
14:24:23 irenab: I'm sorry. But since I was a teenager... DoD only means "Day of Defeat" to me
14:24:32 :-)
14:24:32 irenab: could you remind me what it means?
14:24:39 Definition of Done
14:24:43 ah, right
14:24:51 I now remember it's not the first time I've asked you this
14:24:57 sorry about that
14:25:17 irenab: IPv6 with full stack tests for dual stack and IPv6
14:25:35 limao: would you prefer 1.0.0 be cut before?
14:25:41 I asked yedongcan to list work items in the bp
14:25:52 I ask because if it is helpful we can consider it for before
14:26:54 apuimedo: depends on the timeline in which we can fix ipv6; maybe we can check the ipv6 patch status next weekly, to see if we include ipv6 in 1.0.0
14:27:02 limao: agreed
14:27:17 limao: +1
14:27:21 Let's try to have the code either merged or almost ready
14:27:30 so that next week we only miss the fullstack tests
14:28:13 I will work on the base code and unit tests this week.
14:28:30 yedongcan: thanks a lot!
14:28:52 #info We will revisit IPv6 creation inclusion in the 1.0.0 release on the next weekly
14:29:04 anything else on kuryr-libnetwork?
14:29:25 apuimedo: one more
14:29:31 go ahead
14:29:36 about the kuryr-libnetwork docker image
14:30:12 Will we create an official image to upload to docker hub?
14:30:48 (Maybe as a part of the 1.0.0 release?)
14:30:50 limao: there was some discussion at the PTG about the OpenStack container registry
14:31:05 dan prince said he'd help with the puppet to deploy it
14:31:22 apuimedo: oh, cool, thanks for the info.
14:31:42 limao: before that, we can only have an unofficial container, that I can publish again
14:32:06 #topic kuryr-kubernetes
14:32:49 ltomasbo: how is the work going on the resource management after the VTG discussion?
14:33:12 I'm modifying the already existing patches
14:33:27 irenab: vikasc: we should be merging https://review.openstack.org/#/c/440248/1 as we did in the other repos
14:33:28 to increase the isolation between the VIF and PortsPool drivers
14:33:39 good
14:33:51 I already did for the baremetal one
14:33:54 ltomasbo: is there anything you need?
14:33:58 and working right now on the nested one
14:34:10 (from reviewers)
14:34:12 it would be really nice to have some reviews on these:
14:34:16 apuimedo: why is this one a manual update?
14:34:21 https://review.openstack.org/#/c/436875
14:34:27 https://review.openstack.org/#/c/436876/
14:34:33 https://review.openstack.org/#/c/436877/
14:34:47 and I will submit the ones for nested probably today
14:34:54 and hopefully update the devref tomorrow
14:35:03 ltomasbo: will check them by tomorrow
14:35:28 https://bugs.launchpad.net/openstack-requirements/+bug/1668848
14:35:28 Launchpad bug 1668848 in tacker "PBR 2.0.0 will break projects not using constraints" [High,In progress] - Assigned to yong sheng gong (gongysh)
14:35:29 then I will work on the kuryr-controller reboot (discovering the already created ports)
14:35:36 I thought there was an email on the ml about it too
14:35:40 but I can't find it now
14:35:42 irenab, great, thanks
14:35:52 thanks ltomasbo
14:35:59 apuimedo, about what?
14:36:09 (not the thanks but the ml thing)
14:36:22 :d
14:36:26 ltomasbo: about the pbr thing I pointed irenab towards
14:36:29 apuimedo: we have that pbr patch https://review.openstack.org/#/c/439321/
14:36:30 ahh, ok
14:36:57 garyloug: mchiappero: I see irenab also posted some comments to https://review.openstack.org/#/c/440669/
14:36:58 apuimedo: since it passes the gates it seems it does not break anything
14:37:03 keep up the good work
14:37:41 ivc_: it may break us on indirect dependencies, but you are right, for kuryr-k8s it doesn't break us for now
14:37:47 apuimedo: the main concern is actually the potential race you pointed out
14:38:00 mchiappero: yeah... That's a tough one
14:38:03 :-)
14:38:23 apuimedo: so about the pbr patch, which one do you want to merge?
14:38:24 mchiappero: did you get any inspiration for that?
14:38:47 pbr
14:38:48 apuimedo: other than an "allowed_address_pairs" lock?
14:38:53 no
14:39:41 irenab: I'll dig around
14:39:48 I'm afraid that Neutron has nothing to offer, and on the dispatcher side I'm not sure whether there is something we can do (e.g. use a threaded controller in this config)
14:39:49 ok
14:40:09 mchiappero: what is the problem you are dealing with?
14:40:49 mchiappero: irenab: Do you think there is any chance the address pairs Neutron API could be enhanced?
14:40:54 irenab: the possibility that two threads might be updating the "allowed_address_pairs" on the parent port at the same time
14:40:58 So instead of being a replacement we get additions?
14:41:17 apuimedo: it should probably work like the openstack client
14:41:20 get/set
14:41:32 each with a commit approach
14:41:51 mchiappero: on the kuryr side?
14:41:53 short term, maybe locking on the kuryr side
14:42:01 ivc_: the problem here is we'd probably need the dispatcher to be able to group events into green threads according to a driver as well
14:42:23 mchiappero: that would get rid of needing a lock
14:42:29 apuimedo: yes, that's the other approach I was talking about
14:42:48 for example, for the current macvlan driver, we'd just group events per scheduled node
14:42:52 I need to check though, or get feedback from any of you familiar with the dispatching code
14:42:56 apuimedo: yup. but let's stay away from that for now. i'm still hoping for a generalised actor model that would solve those issues
14:43:26 ivc_: any hint on how it would solve it? Without giving up pod creation concurrency
14:43:42 apuimedo: mchiappero: i'm probably ok with locking as a short-term workaround just for that
14:43:56 apuimedo: long term, if the nested port support for macvlan/ipvlan based on trunks is accepted in Neutron we are ok
14:43:57 so, neutron.update_port_address_pairs just replaces the current set, instead of adding to it?
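[Editor's note: the lost-update race discussed above can be sketched in a few lines. This is a minimal illustration with an in-memory stand-in for Neutron's replace-the-whole-list PUT semantics; `FakeNeutron`, its method names, and the IP values are hypothetical, not the real python-neutronclient API or kuryr code.]

```python
class FakeNeutron:
    """Hypothetical stand-in: allowed_address_pairs update replaces the list."""

    def __init__(self):
        self._pairs = {}  # port_id -> list of allowed address pairs

    def show_port(self, port_id):
        return {'allowed_address_pairs': list(self._pairs.get(port_id, []))}

    def update_port(self, port_id, allowed_address_pairs):
        # PUT semantics: the new list fully replaces the old one.
        self._pairs[port_id] = list(allowed_address_pairs)


def lost_update_demo():
    neutron = FakeNeutron()
    # Two pod-creation threads each read the same (empty) pair list...
    read_a = neutron.show_port('parent')['allowed_address_pairs']
    read_b = neutron.show_port('parent')['allowed_address_pairs']
    # ...each appends its own pair and writes the full list back.
    neutron.update_port('parent', read_a + [{'ip_address': '10.0.0.5'}])
    neutron.update_port('parent', read_b + [{'ip_address': '10.0.0.6'}])
    # The second replace clobbers the first: 10.0.0.5 is lost.
    return neutron.show_port('parent')['allowed_address_pairs']
```

With an add/remove sub-resource API (the enhancement being proposed to Neutron) the second write could not erase the first, since neither caller would perform the read-modify-update itself.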
14:43:58 apuimedo: you'll have an actor for 'update address pairs'
14:44:01 irenab: well, the problem is with the approach garyloug and mchiappero's patch pushes
14:44:13 exactly what ltomasbo said
14:44:14 :P
14:44:21 sounds to me that the right place to fix it is on the neutron side then
14:44:25 so the same thing we do for trunk ports can't be done
14:44:36 as it's buying tickets for the hot potato game
14:44:51 ltomasbo: I agree that it would be the nicest outcome
14:45:07 ltomasbo: but that may not be a tractable problem
14:45:17 :(
14:45:38 neutron works on an API request basis, so every request should be served autonomously, isn't it?
14:45:43 ltomasbo: 'just replace' could race and lose some calls
14:45:55 ltomasbo: agree, I'm not sure though how willing they are to change a public API
14:46:02 #action apuimedo to reach out to Neutron folks about a non-replacing API for allowed address pairs
14:46:18 I'm not in favor of replacing, ivc_, rather the opposite
14:46:35 apuimedo: ltomasbo: or are we discussing 'commit' on the neutron side.
if so, i'd be against that
14:47:04 I am not sure I understand what API change you propose for neutron
14:47:30 as I said, I think an 'openstack' client-like API would be nice
14:47:46 mchiappero: I don't understand what that means
14:47:52 so atomic adds and removes rather than a full replace
14:47:56 irenab: i think they mean making 'allowed address pairs' a sub-resource so it can support add/remove instead of replace
14:48:24 apuimedo: like adding one or more pairs
14:48:25 ivc_: thanks for the clarification
14:48:49 so that you do not need to perform the Read-Modify-Update
14:48:56 ivc_: yes, that's what I meant
14:49:01 it's a REST change
14:49:07 so I think it's unlikely
14:49:35 apuimedo: in that case it does make sense, though super unlikely to get that change in a reasonable amount of time
14:49:45 apuimedo: I tend to agree; unless it is an addition and not a replacement to the existing API
14:49:52 agreed
14:50:20 I can't see it happening unless it is a service plugin
14:50:21 :D I'm not following you anymore
14:50:26 or some other sort of extension
14:50:30 irenab: apuimedo: an addition would mean neutron would provide 2 APIs for the same function, which is kinda ugly
14:50:40 mchiappero: discussing neutron api/maintenance/politics
14:50:48 apuimedo: ok
14:51:23 apuimedo: worth checking anyway, maybe the same issues came up in other use cases
14:51:50 apuimedo: i'd say serialising updates on the kuryr side is the more realistic approach. so locks for now and actors later
14:52:27 ivc_: I'll just ask if a plugin could make sense. If possible I prefer to lock in Neutron
14:52:29 ivc_: so lock on the call to neutron to update the port with address pairs?
14:52:43 ok, so, to recap: 1) macvlan-specific lock on allowed_address_pairs updates 2) attempt a Neutron API extension 3) improve the dispatching logic
14:52:48 irenab: I guess just lock on a per-port basis
14:52:53 is this ok in terms of priority?
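[Editor's note: option 1 of the recap above, serialising the read-modify-update on the kuryr side with a per-port lock, could look roughly like the sketch below. `FakeNeutron` and `add_address_pair` are hypothetical illustrations, not the actual kuryr-kubernetes driver code or Neutron client.]

```python
import threading
from collections import defaultdict


class FakeNeutron:
    """Hypothetical stand-in: allowed_address_pairs update replaces the list."""

    def __init__(self):
        self._pairs = {}

    def show_port(self, port_id):
        return {'allowed_address_pairs': list(self._pairs.get(port_id, []))}

    def update_port(self, port_id, allowed_address_pairs):
        self._pairs[port_id] = list(allowed_address_pairs)


# One lock per parent port, so unrelated ports stay concurrent.
_port_locks = defaultdict(threading.Lock)


def add_address_pair(neutron, port_id, new_pair):
    # Hold the port's lock across the whole read-modify-update so
    # concurrent pod creations cannot clobber each other's pairs.
    with _port_locks[port_id]:
        pairs = neutron.show_port(port_id)['allowed_address_pairs']
        if new_pair not in pairs:
            neutron.update_port(port_id, pairs + [new_pair])


# Five concurrent "pod creations" against the same parent port:
neutron = FakeNeutron()
threads = [threading.Thread(target=add_address_pair,
                            args=(neutron, 'parent',
                                  {'ip_address': '10.0.0.%d' % i}))
           for i in range(5, 10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
result = neutron.show_port('parent')['allowed_address_pairs']
```

With the lock, all five pairs survive regardless of scheduling; without it, a later full replace could silently drop an earlier addition.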
14:53:01 when doing the address pairs update
14:53:08 as you say
14:53:20 mchiappero: that's right
14:53:34 irenab: a lock per 'update_address_pair' call is the easiest, but it could be narrowed down to a per-port lock
14:53:39 ok, let's try in that order and update according to the progress
14:54:15 irenab: https://review.openstack.org/#/c/438367/1
14:54:24 when you have a moment
14:54:47 apuimedo: done
14:55:01 apuimedo: irenab: that one could have some better wording, but ok :)
14:55:03 as ivc_ says, per port when doing the update_address_pair
14:55:22 ivc_: I'm just glad I didn't have to write it :P
14:55:41 let's follow up on the patch
14:55:51 documentation patches warm my heart
14:56:26 anything else on kuryr-k8s?
14:56:33 apuimedo: i was lazy when documenting the function and that devref patch just copied that :)
14:56:59 ivc_: we need to document better this cycle. I had a diagram bug as well
14:57:16 and we should start making release note patches to catch up with what we have
14:57:22 with reno
14:57:46 apuimedo: add another action item for tomorrow's meeting :P
14:57:49 apuimedo: I think we should have a list of what should be included for every added feature
14:58:05 ivc_: That's a very good point
14:58:06 I will
14:58:17 irenab: even better point
14:58:20 a checklist
14:58:34 thanks irenab!
14:58:39 I can add some feature DoD list to the devref
14:59:01 irenab: that would be super great!
14:59:02 trello? or i remember there was some openstack-hosted todo list...
14:59:29 ivc_: I think more like have a devref of: things to check for when submitting a feature
14:59:38 like people can use it for their own checklist
14:59:44 I just thought to add some rst into the devref, like a kuryr feature addition policy or something like this
14:59:50 ivc_: the action items from the VTG will end up on a trello most likely
15:00:06 irenab: sounds perfect
15:00:11 let me know if I can help you with that
15:00:15 it's an excellent idea
15:00:21 apuimedo: add an AI for me please
15:00:24 (especially for forgetful people like me
15:00:25 )
15:00:37 apuimedo: i'll try to find that openstack-hosted tool (to keep things under the same umbrella if possible)
15:00:40 so I won't forget :-)
15:00:43 #action irenab to draft the feature contributor checklist
15:00:54 ivc_: openstack stories
15:00:58 storyboard
15:01:03 or something like that
15:01:15 irenab: we really need those checklists
15:01:17 :P
15:01:26 anyways. Out of time
15:01:30 thank you all for joining!
15:01:31 apuimedo: right, storyboard it is :)
15:01:35 talk to you tomorrow
15:01:43 ivc_: I'm proud I remembered it
15:01:45 #endmeeting