13:59:51 #startmeeting networking
13:59:52 Meeting started Tue Jul 3 13:59:51 2018 UTC and is due to finish in 60 minutes. The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:59:55 The meeting name has been set to 'networking'
13:59:59 hi
13:59:59 hi
14:00:09 o/
14:00:14 o/
14:01:08 Hi there!
14:01:15 o/
14:01:28 hi there
14:01:35 hi
14:01:36 manjeets_, njohnston: more people in the US than I expected ;-)
14:01:52 haleyb: also
14:02:09 #topic Announcements
14:03:23 The Rocky-3 milestone is fast approaching. It is less than 3 weeks away: July 23 - 27
14:04:08 The call for presentations for the Berlin Summit will be open until July 17th
14:04:23 I look forward to many nice submissions from the Neutron team
14:04:52 Any other announcements?
14:04:58 mlavalle: I think the release candidate for client libs (including neutron-lib) is July 19th, no?
14:05:06 finalized by the 26th
14:05:12 Yes
14:05:45 ok, so that date is rapidly approaching; if there are any things we want in neutron-lib for Rocky, it seems now is the time :)
14:06:05 Thanks for the reminder boden
14:06:26 I'll add it to the agenda in the wiki
14:06:54 ok, moving on
14:07:01 #topic Blueprints
14:08:10 These are the blueprints we are targeting for Rocky-3:
14:08:16 #link https://launchpad.net/neutron/+milestone/rocky-3
14:08:43 any updates on blueprints?
14:09:16 for https://blueprints.launchpad.net/neutron/+spec/neutron-lib-decouple-db I'll reiterate that it isn't realistic for Rocky anymore
14:09:38 but if we can try to land https://review.openstack.org/#/c/565593/ it will allow some more consumption patches to start flowing
14:10:11 boden: I will review it today
14:10:21 thanks mlavalle
14:10:38 rubasov: any updates on bandwidth-based scheduling?
14:10:40 for https://blueprints.launchpad.net/neutron/+spec/strict-minimum-bandwidth-support I'd like to call the attention/judgement of reviewers familiar with hierarchical port binding use cases to this change: https://review.openstack.org/574783
14:11:06 you read my mind ;-)
14:11:15 mlavalle: :-)
14:11:53 also I hope to upload more non-WIP patches this week
14:12:02 I assume that tempest-full-py3 is not related to the change, right?
14:12:22 mlavalle: yep, that's unrelated
14:12:47 ok, cool. I will take a look between today and tomorrow
14:12:54 mlavalle: thank you
14:13:02 and encourage others to do the same
14:13:10 mlavalle/rubasov: I am back now, so after my jetlag I start working again ;)
14:13:24 lajoskatona_: welcome back
14:13:38 nice to see you back, lajoskatona_
14:13:41 hi, sorry for being late...
14:14:33 In regards to multiple port bindings, the entire series is here: https://review.openstack.org/#/q/topic:bp/live-migration-portbinding+(status:open+OR+status:merged)
14:15:19 I addressed some comments from haleyb and mriedem in the patch that originates the series: https://review.openstack.org/#/c/414251/
14:15:31 and rebased the entire series yesterday
14:16:22 I have some re-checking to do, but as soon as Zuul returns green on all of them, I think the series is good to go
14:16:37 please take a look when you have a chance
14:18:07 Thanks to haleyb and manjeets for reviewing the port forwarding series: https://review.openstack.org/#/q/topic:bp/port_forwarding+(status:open+OR+status:merged)
14:18:46 njohnston also reviewed the neutron-lib patch, thanks
14:18:55 please keep it up
14:19:07 I will go over the series today and tomorrow
14:20:01 annp_: do you want to talk about the WSGI server?
14:20:14 mlavalle, thanks.
14:20:39 Thanks slawek, hongbin and Manjee for your great reviewing. I've just updated a new patch set to address your comments.
Could you please take a look at https://review.openstack.org/#/c/555608/
14:21:01 ack
14:21:03 regarding the fwaas RPC timeout: the firewall plugin tried to initialize its own RPC listener. I think the fwaas RPC listener should be served by the neutron-rpc-server worker. I've just pushed a patch to fix that: https://review.openstack.org/#/c/579433/. Now I'm waiting for feedback from zigo about the result of the test.
14:21:17 :)
14:21:27 nice +1
14:21:29 o/
14:21:39 I'm sorry but I couldn't work on it today.
14:21:44 I'll check on it tomorrow.
14:21:56 Thanks a lot annp!
14:22:06 zigo: No problem! :) I'm waiting for you. :)
14:22:19 Regarding Neutron WSGI integration in devstack (https://review.openstack.org/#/c/473718/): all tempest tests passed, but the neutron-grenade gate still fails on cinder volume removal.
14:22:23 annp_: do we have a similar pattern in other service plugins besides fwaas?
14:23:18 amotoki, I've checked the metering plugin.
14:24:02 amotoki, the metering plugin and l3_router_plugin initialize their RPC listeners via the RPC worker.
14:24:21 annp_: thanks. it sounds nice.
14:25:23 amotoki, yeah! :)
14:25:53 amotoki, do you have any ideas related to the neutron WSGI integration issue?
14:26:00 ok, thanks annp for your hard work. and thanks to zigo for all his support
14:26:26 mlavalle, thanks :)
14:26:42 One more thing: if that works, would it be hard to do a backport of the patch to Queens?
14:27:58 zigo: of https://review.openstack.org/#/c/579433/?
14:29:11 perhaps zigo is thinking of all the WSGI patches, each of which is small.
14:30:11 ack, haleyb, let's take a look at it after the meeting
14:30:15 Well, both? :)
14:30:32 WSGI support was supposed to be a Queens release goal... :P
14:30:43 yep, I realize that
14:30:54 I knew you would bring it up ;-)
14:31:03 Sorry, I'm a pain... :P
14:31:21 zigo: you are not, you are very helpful. just kidding
14:31:34 I've got something else that I want to bring up, not sure if this is the time.
14:31:46 go ahead
14:33:05 mhhh, zigo, let's talk about it at the end of the meeting
14:33:12 #topic Community Goals
14:33:13 Ok.
14:33:33 mlavalle, that's all for WSGI.
14:33:45 amotoki: do you want to talk about policy-in-code?
14:33:46 please go ahead.
14:33:53 annp_: Thanks!
14:34:10 sorry for the late progress on policy-in-code. I succeeded in blocking all meetings this Wed and Fri. I guess it will be two whole days of work.
14:34:33 is there anything we can do to help?
14:35:07 the patches will consist of the neutron-lib policy migration and policy-in-code.
14:35:39 I will resolve the merge conflicts this week, and testing will be helpful
14:36:33 thanks for the update amotoki. looking forward to those patches
14:36:39 (1) neutron-lib policy, (2) remove policy.py from neutron, (3) policy conversion in neutron, (4) policy conversion in subprojects (like fwaas, dynamic-routing)
14:37:00 (1)-(3) should be straightforward and I am refreshing my memory
14:37:16 (4) might need some help
14:37:43 ok, let us know how we can help
14:37:51 I will send a mail summarizing the patches once I push them
14:38:30 the lib freeze deadline is on my mind.
14:40:24 I also want to remind the team that we have this etherpad: https://etherpad.openstack.org/p/py3-neutron-pike
14:40:34 for py35 support tasks
14:40:50 please pick some up to help us move toward this goal
14:41:35 is it covered by the CI meeting?
14:42:01 mlavalle: reading the py35 etherpad, it is not clear where the available items to work on are. Is that the "Old Failure categories" section?
14:43:05 njohnston: perhaps we need to check again how they work now.
14:43:20 yeah
14:44:15 * zigo wrote everything in a buffer and is ready for end-of-meeting topics ...
14:44:28 njohnston: we probably need to re-order that etherpad
14:44:39 "python3 first" will be one of the Stein goals, so we need to refresh the etherpad
14:44:58 +1
14:45:01 I will take a look over the next few days
14:45:25 njohnston: are you interested in helping with that?
14:45:58 I wanted to see if I could pitch in, yes
14:46:17 ok, I'll take a look this week and ping you in channel
14:46:32 #topic neutron-lib
14:46:44 boden: anything on neutron-lib?
14:46:53 a few things, I will try to be brief
14:47:14 we released neutron-lib 1.17.0 last week, FYI.. if you want to use it, bump your requirements and lower-constraints
14:47:47 a friendly reminder that many networking projects are not fully set up for zuul v3 and local testing: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131801.html
14:48:14 ^ causes friction when trying to land neutron-lib patches, so if you get a chance please have a looksee
14:49:19 what do you mean by "not fully set up for zuul v3"?
14:49:19 another reminder that if your networking project wants to stay "Current" and receive neutron-lib patches, we ask that you please help land them as mentioned in http://lists.openstack.org/pipermail/openstack-dev/2018-June/131841.html
14:49:30 amotoki: pls see the email and related link
14:49:31 boden: for 1.17.0, https://review.openstack.org/#/c/577245/
14:49:34 it's in detail there
14:49:49 manjeets_: ack
14:49:57 boden: thanks. perhaps networking projects and horizon plugins have the same problem
14:50:12 amotoki: yes, this is about the networking projects in general
14:50:39 boden: I was wondering, was there ever a solution for projects that do "from neutron.tests import base"? ISTR that was on the list of "not sure how to solve that" issues a while back.
14:50:47 amotoki: I hope the etherpad linked in it is helpful, and I think it outlines the "situation", but if not feel free to ping me
14:51:25 boden: yes, that is totally helpful. At a quick look I am on the same page.
14:52:06 ok, let's move on
14:52:10 njohnston: we have a base test class in neutron-lib, though I don't know that it's fully compatible yet
14:52:20 #topic CLI / SDK
14:52:28 amotoki: anything on this topic?
14:52:58 I will look through the client stuff next week after policy-in-code.
14:53:09 ok, thanks
14:53:13 the client freeze will be after the lib freeze, so we have time.
14:53:31 on the SDK, we don't have much planned in Rocky
14:53:46 that's all from me
14:53:48 #topic Open Agenda
14:53:58 zigo: please go on
14:54:32 In neutron/agent/l3/dvr_local_router.py, there's the function _set_subnet_arp_info(), called:
14:54:32 1/ Each time the l3 agent is restarted
14:54:32 2/ Each time an interface of a VM is added to a compute node, if no other VM of that tenant was there
14:54:32 The problem is, in our use case at Infomaniak, we have 2000 VMs, most of them in a single tenant. As a result, this operation takes about 15 minutes, with more than 4200 calls to "neutron-rootwrap ip neigh add": one call per IP & MAC association. This is just horrible for a customer, waiting 15 minutes for their VM to be up, especially since it establishes L2 peers (east-west traffic) before any north-bound connectivity work is done.
14:54:32 Many times, the call is not even needed ("ip neigh show" shows it's most of the time already done and not needed).
14:54:34 So my question is: what do you think would be the best strategy to optimize this, keeping in mind that more than half of this time is spent invoking neutron-rootwrap?
14:55:33 I thought about using the batch mode of the ip command.
14:55:46 Like, write a file, call it once ...
14:56:35 That, and switching to oslo.privsep.
14:56:49 Good ideas or not?
14:57:15 zigo: seems ok to me in principle.
14:57:33 would you mind filing a bug so we can take it from there
14:57:41 Ok, I'll try to come up with a patch then.
14:57:49 Will do.
14:58:05 we can discuss it in the L3 meeting, which takes place on Thursday at 1500 UTC
14:58:13 IMO, it's best if I try, because it's kind of not easy to reproduce the issue with thousands of VMs.
14:58:23 Oh ok, thanks.
14:58:25 oh, I know
14:58:45 the hard part is to reproduce the situation
14:59:18 ok, thanks for attending
14:59:21 it would be super helpful if we could emulate such a situation
14:59:36 talk to you next week
14:59:40 see you guys!
14:59:42 #endmeeting
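[Editor's note, appended after the log] The batching idea zigo floats above (write all the ARP entries to a file, then invoke ip once in batch mode) could be sketched roughly as below. This is a hypothetical illustration, not Neutron's actual `_set_subnet_arp_info()` code: the function names, the `(ip, mac)` entry format, and the direct `subprocess` call (standing in for neutron-rootwrap or oslo.privsep) are all assumptions. Using `neigh replace` rather than `neigh add` would also address the "already done and not needed" observation, since replace is idempotent.

```python
import subprocess
import tempfile


def build_neigh_batch(entries, device):
    """Render one 'ip -batch' command line per (ip, mac) pair.

    'neigh replace' is idempotent, so entries that already exist
    (the common case zigo describes) do not cause errors.
    """
    return "".join(
        "neigh replace {ip} lladdr {mac} dev {dev} nud permanent\n".format(
            ip=ip, mac=mac, dev=device)
        for ip, mac in entries)


def sync_arp_entries(entries, device, run=subprocess.run):
    """Write all entries to a temp file and invoke 'ip' exactly once.

    One process spawn in total, instead of one rootwrap invocation per
    IP/MAC association. 'run' is injectable so the sketch can be
    exercised without root privileges or a real network namespace.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".batch") as f:
        f.write(build_neigh_batch(entries, device))
        f.flush()
        run(["ip", "-batch", f.name], check=True)
```

For the 4200-entry case discussed above, this collapses 4200 rootwrap/ip invocations into a single `ip -batch` call; combining it with oslo.privsep (so no rootwrap process is spawned at all) is the complementary half of zigo's proposal.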