15:00:20 #startmeeting neutron_l3
15:00:21 Meeting started Thu Mar 13 15:00:20 2014 UTC and is due to finish in 60 minutes. The chair is carl_baldwin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:24 The meeting name has been set to 'neutron_l3'
15:00:28 #topic Announcements
15:00:37 #link https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
15:01:07 First, I didn't plan for daylight savings time when I chose the time for this meeting.
15:01:23 The time shift has caused a bit of a problem for me.
15:01:46 I'd like to suggest a couple of possible meeting times. Both Thursday.
15:02:15 The first is an hour earlier. I know that makes things very early for a few in the western US.
15:02:29 For me it's actually better, +1
15:02:48 The second is two hours later, which could be difficult for others in other parts of the world.
15:03:21 for me both are ok
15:03:28 Hi all, btw
15:03:34 safchain: hi
15:03:41 safchain, hi :)
15:04:54 I'll wait a few days for others who might be reading the meeting logs to chime in by email. Ping me on IRC or email with any concerns. I'll announce the meeting time before next week. And I'll consider the next shift in daylight savings time. ;)
15:05:15 sure, thanks carl_baldwin
15:05:19 #topic l3-high-availability
15:05:32 safchain: Anything to report?
15:05:54 Currently I'm working on the conntrackd integration.
15:06:14 Assaf's patch has to be reworked a bit to support multicast.
15:07:13 I need to review again. Anything new on the FFE? I fear that it didn't happen.
15:07:18 I don't know if all of you have tested the patches.
15:07:35 carl_baldwin, no news on the FFE
15:08:22 I couldn't test yet safchain, but I will try to allocate some time for it.
15:08:34 Okay. The subteam page has links and information about reviewing and testing, but I'll admit I've not yet tested.
15:08:44 carl_baldwin, I think this is almost everything for me, I just need more feedback from functional tests.
15:09:13 safchain, do you have some functional test examples?
15:09:25 I could get some people on our team to provide feedback on that.
15:09:27 Okay, I am looking forward to running it. I need a multi-host development setup soon anyway.
15:09:56 #link https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#
15:10:03 ajo, I will add some test use cases to the doc.
15:10:20 ajo: ^ This is the doc.
15:11:07 Thanks. I mean, I was asking whether we already have some kind of initial functional tests written for this. I will keep a link to this doc for manual testing.
15:11:37 ajo, not yet
15:11:58 ok, it's not easy
15:12:08 ajo, but the tempest tests should work with HA enabled
15:12:19 ok, that's a good start
15:12:59 safchain: anything else?
15:13:13 It's ok for me
15:13:16 #topic neutron-ovs-dvr
15:14:05 Doesn't look like Swami is around.
15:14:30 Swami is still working on detailing the changes to L3.
15:14:41 The doc for L2 is up. It could use more review.
15:14:57 #link https://docs.google.com/document/d/1depasJSnGZPOnRLxEC_PYsVLcGVFXZLqP52RFTe21BE/edit#heading=h.5w7clq272tji
15:15:18 Sure, I plan to review it by the end of the week.
15:15:53 Also, looking into integrating the HA L3 and HA DHCP work was discussed.
15:16:38 safchain: great
15:16:46 hi all...
15:17:01 hi Sudhakar_
15:17:03 Sudhakar_: hi
15:17:08 carl_baldwin, yes I'll try to ping swami after reviewing the doc
15:17:17 carl_baldwin, is there a doc about HA DHCP?
15:17:19 hi Sudhakar_
15:17:26 hi ajo...
15:17:42 Sudhakar_: I don't think there is a doc yet about it. Only some initial discussion expressing interest in starting that work.
15:17:43 hi carl
15:18:10 Did Swami initiate the discussion?
15:18:22 Sudhakar_: yes
15:18:30 Ok. I have some context then.
15:18:44 I am Swami's colleague, based out of India.
15:19:12 Basically we were thinking of an agent monitoring service, which can be used to monitor different agents...
15:19:22 ...typically useful for L3 and DHCP when we have multiple network nodes.
15:20:03 Sudhakar_, something like rpcdaemon?
15:20:19 not exactly..
15:20:41 A thread which can be started from the plugin itself...
15:20:49 ...and act based on the agents' report_states.
15:21:21 Sudhakar_, what kind of actions?
15:22:03 For example: if a DHCP agent hosting a particular network goes down, and we have another active DHCP agent in the cloud...
15:22:37 ...the agent monitor detects that this DHCP agent went down and triggers rescheduling of the network's DHCP onto the other agent.
15:22:54 A few weeks ago I was proposing that daemon agents could provide status via a status file -> init.d "status", but it could be complementary.
15:23:08 aha, it makes sense Sudhakar_
15:23:21 Currently we have the agent_down_time configuration which will help us decide on rescheduling...
15:23:28 Sudhakar_: Do you have any document describing this that we could review offline?
15:23:33 ...we could have another parameter altogether to avoid mixing things up.
15:23:48 Sudhakar_, It seems there is something like that for LBaaS.
15:23:49 Yes, a document on those ideas would be interesting.
15:23:57 We are refining the doc... will publish it for review soon.
15:24:16 Actually, I made a mistake above. I said HA DHCP where I should have said, more precisely, distributed DHCP.
15:24:36 Sudhakar_: Great.
15:24:39 aha carl_baldwin, the one based on OpenFlow rules?
15:24:46 Ok.. :)
15:25:12 Distributed DHCP was another thought, but I don't have much of an idea on that yet...
15:25:22 ajo: OpenFlow rules could play a part but that did not come up explicitly.
15:25:33 understood
15:26:01 #topic l3-agent-consolidation
15:26:36 This work is up for review but the bp was pushed out of Icehouse.
15:26:52 yamahata: anything to add?
15:27:10 carl_baldwin: nothing new this week.
15:27:28 #topic bgp-dynamic-routing
15:27:45 #link https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
15:27:52 #link https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-mpls-vpn
15:28:10 nextone92: are you around?
15:28:59 I spent some time reviewing the bgp-mpls bp this week and made some notes.
15:29:38 It looks like a few key people aren't around this week to discuss. So, I'll try again next week.
15:30:02 #topic DNS lookup of instances
15:30:32 Really quick, I'm almost done writing a blueprint for this. Then I need to get it reviewed internally before I can post it.
15:30:42 I hope to have more to report on this next week.
15:30:51 sounds interesting, thanks carl_baldwin
15:30:57 #topic Agent Performance with Wrapper Overhead
15:31:07 #link https://etherpad.openstack.org/p/neutron-agent-exec-performance
15:31:31 This has come up on the ML this week. I have worked on it some, so I created this etherpad.
15:32:06 carl_baldwin: nice summary on the etherpad
15:32:18 yes, thanks carl_baldwin :)
15:32:39 rossella_: ajo: thanks
15:32:59 So, there are a number of potential ways to tackle the problem.
15:33:36 I'm wondering what could be done for Icehouse.
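A rough illustration of the agent-monitoring idea Sudhakar_ described under the previous topic: a loop that watches the agents' report_state heartbeats and reschedules DHCP for networks hosted on agents that have gone silent. The three callables below are hypothetical stand-ins for plugin and scheduler APIs, not real Neutron functions; only agent_down_time corresponds to an existing configuration option.

    import time

    AGENT_DOWN_TIME = 75  # seconds, mirroring neutron's agent_down_time option

    def monitor_dhcp_agents(list_dhcp_agents, list_networks_on_agent,
                            reschedule_network, interval=30):
        """Reschedule networks away from DHCP agents that stop reporting.

        The three callables are hypothetical placeholders for plugin and
        scheduler APIs; they are not real Neutron calls.
        """
        while True:
            now = time.time()
            for agent in list_dhcp_agents():
                # An agent counts as down once its last report_state
                # heartbeat is older than agent_down_time.
                if now - agent['heartbeat_timestamp'] > AGENT_DOWN_TIME:
                    for network_id in list_networks_on_agent(agent['id']):
                        # Move the network's DHCP to another live agent.
                        reschedule_network(network_id,
                                           exclude_agent=agent['id'])
            time.sleep(interval)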
15:33:57 carl_baldwin - sorry I'm so late to join the meeting
15:34:03 carl_baldwin, thanks for putting up the doc. Looking forward to this.
15:34:06 Yes, I had that thought too carl_baldwin
15:34:09 Icehouse is now
15:34:19 we can't do much
15:34:23 Carl: Sorry I am late today
15:34:35 yes, I will have a look at this etherpad
15:34:37 Swami: nextone92: Hi
15:34:59 Carl: hi
15:35:04 Yuriy's idea (a privileged agent) doesn't look bad from the point of view of keeping everything in Python. But it looks like it requires more changes to Neutron. Too bad to be at the end of the cycle.
15:35:10 rossella_: I fear you are right. There isn't much unless we can find bugs that could be fixed in short order.
15:35:24 o/
15:35:31 I'm that Yuriy.
15:35:40 Ho YorikSar! :)
15:35:43 Hi :)
15:35:51 I don't think it'll become very intrusive.
15:35:57 ajo: YorikSar: My thinking is similar. It may be a very good long-term solution.
15:36:25 YorikSar: I noticed your additions to the etherpad only this morning, so I have not had a chance to review them.
15:36:35 We basically need to replace execute() calls with something like rootwrap.client.execute()
15:36:40 I'm just worried about, for example, memory consumption. We must keep all instances tied tightly... to avoid leaking "agents".
15:37:08 ajo: They can kill themselves by timeout.
15:37:27 Then we won't leak them.
15:37:35 And at client exit
15:37:50 Maybe, for the ones running inside a netns: kill by timeout.
15:37:51 ajo: Yeah. Which can end up basically the same.
15:38:03 The system-wide ones: kill by client exit + a longer timeout.
15:38:51 carl_baldwin, do you think this approach could have the potential to be backported to Icehouse if it's tackled from now to the start of Juno?
15:40:11 ajo: I'm thinking about trying to push this to oslo.rootwrap, actually. So backporting will be minimal, but it'll be another feature.
15:40:24 ajo: I don't think it adds features and it wouldn't change the database. So, I think there might be hope for it.
15:40:45 carl_baldwin, do we have a bug filed for this?
15:40:51 ...not a new feature from the user perspective. More of an implementation detail.
15:41:09 Yes, we're killing a cpu-eating bug....
15:41:31 carl_baldwin: Oh, yes. Agree.
15:41:49 It is a significant implementation detail though.
15:41:59 yes, I agree carl_baldwin
15:42:00 I don't think there is one overarching bug for this.
15:42:22 I have filed detailed bugs for some of the individual problems that I've found and fixed.
15:42:50 carl_baldwin, I can file a bug with the details
15:42:56 ajo: Great.
15:42:58 (basically, the start of the latest mail thread)
15:43:08 #action file a bug about the rootwrap overhead problem.
15:43:15 is it done this way?
15:43:33 sorry, I'm almost new to meetings
15:44:02 carl_baldwin: perhaps for Icehouse all we can do is continue chipping away at unnecessary calls, and maybe get your priority change in? my $.02
15:44:06 ajo: I think you need to mention your handle after #action. But, yes. Everyone should feel free to add their own action items.
15:44:31 #action ajo file a bug about the rootwrap overhead problem.
15:44:36 is that even possible for Icehouse, at this time?
15:44:42 haleyb: +1
15:44:56 I'm going to work on a POC for that agent soon, btw.
15:45:10 haleyb: Yes. I'm hoping to wrap up that priority change this week as a bug fix.
15:45:11 It's going to be interesting stuff to code :)
15:45:39 Maybe, for Icehouse, I could try to spend some time reducing the Python subset in the current rootwrap and get a C++ translation we can use.
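A minimal sketch of the shape of the "privileged agent" idea YorikSar describes above: replace per-command "sudo neutron-rootwrap ..." spawns with calls into one long-lived root helper, so the interpreter start-up and filter-loading cost is paid once. The class name and the line-based protocol are invented purely for illustration; this is not the oslo.rootwrap API.

    import subprocess

    class PrivilegedClient(object):
        """One long-lived privileged helper instead of a fresh
        'sudo neutron-rootwrap ...' process for every single command."""

        def __init__(self, helper_cmd=('sudo', 'sh')):
            # Started once at agent start-up; it exits with the agent or,
            # per the discussion above, after an idle timeout.
            self._helper = subprocess.Popen(helper_cmd,
                                            stdin=subprocess.PIPE,
                                            stdout=subprocess.PIPE,
                                            universal_newlines=True)

        def execute(self, command_line):
            # Hand the command to the already-running helper over stdin and
            # read back a single line of output.  A real implementation
            # would need a framed protocol, exit codes and stderr handling.
            self._helper.stdin.write(command_line + '\n')
            self._helper.stdin.flush()
            return self._helper.stdout.readline().rstrip('\n')

    # Example use: client = PrivilegedClient(); client.execute('ip -o link show lo')

ajo's concern about leaking such agents maps onto the helper's lifetime: it lives exactly as long as its client process, or until an idle timeout kills it.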
15:45:46 Swami: I imagine there is little that can be done for Icehouse. Only bug fixes, and I imagine that significant changes will not be accepted.
15:46:04 Yes, that's my thought as well.
15:46:07 (an automated one), but I'm unsure about the auditability of such a solution. That might require some investigation.
15:46:18 ajo: It might be worth a try. That is something I'm not very familiar with though.
15:47:07 carl_baldwin: maybe it's not much work, <1 week. I could try to allocate the time for that with my manager...
15:47:30 I have found speed improvements of >50x with the C++ translation, but the Python subset is rather reduced.
15:48:07 ajo: Remember that we need to reduce start-up time and not necessarily execution speed.
15:48:23 Yes, that's greatly reduced. Let me look for some numbers I had.
15:48:36 ajo: sounds good.
15:48:51 There are updates to "sudo" and "ip" that can help at scale. These fall outside the scope of the OpenStack release.
15:49:14 I wouldn't actually call switching to some subset of Python staying with Python. It'd still be some other language.
15:49:30 Is there any existing documentation in OpenStack about tuning at the OS level?
15:49:51 But it might be worth it to compare our approaches and probably come up with some benchmark.
15:49:54 1 sec, getting the numbers
15:50:00 If so, I thought we could add some information from the etherpad to that document. If not, it could be created.
15:50:23 carl_baldwin, not sure if there are any docs on tuning at the OS level
15:50:31 http://fpaste.org/85068/25818139/
15:50:49 assuming you are talking about the neutron server itself
15:50:52 [majopela@redcylon ~]$ time python test.py
15:50:53 real 0m0.094s
15:50:58 [majopela@redcylon ~]$ time ./test
15:50:58 real 0m0.004s
15:51:37 #action carl_baldwin will look for OS-level tuning documentation and either augment it or create it.
15:52:20 carl_baldwin, there is an "iproute" patch and a "sudo" patch, could you add them to the etherpad?
15:52:46 FWIW, my efforts at consolidating system calls to run multiple calls under a single wrapper invocation have shown that it is extremely challenging with little reward.
15:53:18 ajo: I believe those patches are referenced from the etherpad.
15:53:28 ah, thanks carl_baldwin
15:53:48 you're right, [3] and [2]
15:53:57 ajo: Some of them are rather indirect. I'll fix that.
15:54:24 #action carl_baldwin will fix references to patches to make them easier to spot and follow.
15:54:40 carl_baldwin, doing it as we talk :)
15:54:49 ajo: cool, thanks.
15:56:09 So, ajo and YorikSar, we'll be looking forward to seeing what you come up with. Keep the etherpad up and we'll collaborate there.
15:56:43 ok
15:56:50 Anything else?
15:57:12 #topic General Discussion
15:57:48 carl_baldwin,
15:58:01 I've seen "neighbour table overflow" messages from the kernel
15:58:09 when I start lots of networks,
15:58:13 have you seen this before?
15:58:36 lots (>100)
15:58:43 ajo, which plugin/agent?
15:58:51 ipv6 error? I think we've seen that too
15:58:54 the normal neutron-l3-agent
15:58:59 with ipv4
15:59:24 and openvswitch
15:59:27 I believe that we have seen it, but I did not work on that issue directly. So, I cannot offer the solution.
15:59:39 It's on my todo list
15:59:54 I tried to tune the ARP garbage collection settings on the kernel
16:00:03 but I'm not sure if it's namespace related
16:00:28 ajo: found my notes - yes, we found it, and the solution is to increase the size - gc_thresh*
16:00:31 I've got a hard stop at the hour.
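For context on ajo's timing paste above: what each wrapper invocation mostly pays for is interpreter start-up rather than the wrapped command itself. A small, self-contained way to reproduce that kind of comparison; the commands timed here are trivial placeholders, not the real rootwrap invocation from the etherpad.

    import subprocess
    import time

    def avg_spawn_time(cmd, runs=20):
        # Average wall-clock time to fork/exec the command and wait for it.
        start = time.time()
        for _ in range(runs):
            subprocess.call(cmd)
        return (time.time() - start) / runs

    if __name__ == '__main__':
        # A do-nothing binary versus a do-nothing Python interpreter: the
        # difference is pure start-up cost, which is what every
        # 'sudo neutron-rootwrap ...' call pays on top of the real command.
        print('true           : %.4fs' % avg_spawn_time(['true']))
        print('python -c pass : %.4fs' % avg_spawn_time(['python', '-c', 'pass']))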
Feel free to continue the discussion in the neutron room, or here if no one else has this room.
16:00:52 Thank you all who came and participated.
16:01:05 thx carl_baldwin
16:01:08 ajo: the neighbour table is shared between all namespaces
16:01:09 Please review the meeting logs and get back to me about the potential time change for this meeting.
16:01:17 thanks carl
16:01:22 Bye!
16:01:23 #endmeeting
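A follow-up on the neighbour-table-overflow thread at the end of the meeting: the fix mentioned just before closing is to raise the kernel's ARP garbage-collection thresholds (gc_thresh*), and since the neighbour table is shared between namespaces one global setting covers every qrouter-* namespace. The sysctl paths below are real; the values are only examples, and in practice they would normally be set persistently in /etc/sysctl.conf rather than from Python.

    # gc_thresh3 is the hard cap on neighbour (ARP) entries; the kernel
    # logs "neighbour table overflow" once it is exceeded.  Values here
    # are illustrative only.
    NEIGH_LIMITS = {
        'gc_thresh1': 1024,  # below this, entries are never garbage-collected
        'gc_thresh2': 2048,  # soft limit
        'gc_thresh3': 4096,  # hard limit
    }

    def raise_neighbour_limits(limits=NEIGH_LIMITS):
        for name, value in limits.items():
            path = '/proc/sys/net/ipv4/neigh/default/%s' % name
            with open(path, 'w') as proc_file:  # requires root
                proc_file.write(str(value))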