21:03:36 #startmeeting Networking
21:03:37 hi
21:03:37 hi folks!
21:03:37 Meeting started Mon Jun 24 21:03:36 2013 UTC. The chair is markmcclain. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:40 The meeting name has been set to 'networking'
21:03:42 markmcclain: Error: Can't start another meeting, one is in progress.
21:03:52 ok.. now the bot is awake :)
21:03:59 openstack: shut up!
21:04:06 #link https://wiki.openstack.org/wiki/Network/Meetings
21:04:35 #topic Announcements
21:04:40 salv-orlando: +1
21:04:56 It took a while, but we have a new name: Neutron
21:05:20 so, what will we need to do after the name change?
21:05:37 admin guide doc, code change, api change ...
21:05:43 gongysh: A lot of work, I suspect. :)
21:05:57 we can divide the work
21:05:58 no kidding
21:06:10 yes
21:06:31 gongysh: working through producing a draft of the changes we'll need to make
21:06:57 markmcclain: The more granular it's possible to make it, the easier we can spread the renaming load.
21:07:13 the plan is to publish a wiki that contains all of the items we need to complete along with a timeline to sync up with H2
21:07:48 mestery: yes, I plan to make it granular so that we can spread the load and switch rather quickly
21:10:42 markmcclain: so the plan is that we wait for you and a few other folks to produce a plan, and then we divide the work items?
21:10:42 yeah.. we're working on a draft and then we'll let everyone know just to make sure we didn't miss anything
21:10:42 and then assign out the items to complete
21:10:43 markmcclain: Sounds like a good plan!
21:10:43 ok, it is a plan.
21:10:43 the other trick is going to be maintaining compatibility in some places
21:10:43 do we assume that there will be a sort of a freeze for bug/feature merges until the naming change takes place?
21:11:09 to mitigate potential (needless) conflicts?
21:11:15 I think infra will be taken down for a while. So there will be a forced freeze
21:11:27 armax: ^^^
21:11:55 cool
21:12:04 thanks for clearing that up
21:12:11 otherwise I want us to keep working on the changes and I'll also try to have a short script to clean up patchsets
21:12:34 for patches that are in review when the changeover occurs
21:12:56 maybe we should set aside a week or 2 and all focus on the effort
21:13:25 upside is we can stop opening bugs for the period of time we are working on the transition
21:13:53 garyk: +1
21:14:04 garyk: I considered it, but we have lots of items in review now and I don't want to do a full stop since we have 3 wks until the h2 cut
21:15:42 markmcclain: Dan was leading Documentation; who is reviewing that part now?
21:16:06 I heard a rumour it's emagana...
21:16:22 we all should have rights to review docs
21:16:25 salv-orlando: +1
21:17:43 if you don't mind having a non-native English speaker on that task!
21:18:00 I'll complete the specifics in the next day or so and post for comments
21:18:19 emagana: thanks for volunteering for writing the docs in Spanish!
21:18:38 I'll be creating bugs associated with the tasks so that progress can be tracked.
21:18:41 salv-orlando: +1 (n.p.)
21:18:50 salv-orlando: si amigo!
21:19:12 tracking via bugs is a good idea.
21:19:14 so that's the current update on renaming. Any questions?
21:19:18 salv-orlando: his Spanish is not that good….
21:20:03 markmcclain: not from me. thanks for the update.
21:21:03 mlavalle: what???
:-)
21:21:04 This will add some extra work, so I appreciate everyone's patience as we work through the process
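The clean-up script mentioned at 21:12 is presumably little more than a mechanical find-and-replace over the affected files. A minimal sketch of that idea follows; it is purely illustrative, not the actual script, and assumes the final quantum-to-neutron mapping comes from the wiki page mentioned above.

```python
#!/usr/bin/env python
# Illustrative sketch only; not the actual clean-up script discussed above.
# Assumes the rename is a mechanical quantum -> neutron substitution; the real
# mapping (module paths, config keys, packaging names) would come from the
# wiki page listing the renaming work items.
import os
import re

RENAMES = [
    (re.compile(r'\bquantum\b'), 'neutron'),   # imports, config keys, CLI names
    (re.compile(r'\bQuantum\b'), 'Neutron'),   # class names, docstrings
]

def rewrite_tree(root='.'):
    """Apply the substitutions to every source/doc file under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(('.py', '.rst', '.ini', '.cfg', '.sh')):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                text = f.read()
            for pattern, replacement in RENAMES:
                text = pattern.sub(replacement, text)
            with open(path, 'w') as f:
                f.write(text)

if __name__ == '__main__':
    rewrite_tree()
```

A contributor could run something of this shape over a rebased checkout of an in-flight patch before re-uploading, which is roughly the "clean-up patchsets" use case mentioned.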
21:21:54 we still have lots of other work going on too… let's run through the reports
21:22:12 #topic API
21:22:36 salv-orlando: hi
21:22:51 hello again
21:23:00 We shall be quick as there's a lot to discuss
21:23:08 the API is fairly quited.
21:23:11 quiet
21:23:23 No major bugs, blueprints are proceeding smoothly.
21:23:35 cool
21:23:52 I've posted a spec for https://blueprints.launchpad.net/quantum/+spec/sharing-model-for-external-networks
21:23:53 #topic VPNaaS
21:24:00 markmcclain: I've added a bug
21:24:06 to the meeting agenda to discuss
21:24:13 can we spare a second for it?
21:24:34 salv-orlando: yes
21:24:36 Bug 1184484
21:24:46 hey bot?
21:24:53 bug #1184484
21:25:01 nvm https://bugs.launchpad.net/quantum/+bug/1184484
21:25:02 https://bugs.launchpad.net/quantum/+bug/1184484
21:25:22 the problem was very easy to reproduce without using code from:
21:25:29 https://review.openstack.org/#/c/27265/
21:25:44 and https://review.openstack.org/#/c/29513/ (now merged)
21:30:13 however the reporter said it still occurs, and at fairly small scale. It seems that concurrent requests immediately send quantum out of connections
21:30:13 regardless of whether pooling is enabled or not.
21:30:13 This can be mitigated by increasing the pool size
21:30:13 But with the default pool size, it's been reported that even 10 VM spawns executed concurrently cause the issue again
21:30:13 I think pool size is not a permanent solution.
21:30:13 The solution would be to stop quantum from sucking up connections
21:30:13 1 request = 1 connection.
21:30:13 yeah we've experienced this issue internally too
21:30:13 and then the connection is immediately released
21:30:13 So I just wanted to say that if you can provide more details, please comment on the bug report
21:30:14 will do
21:30:14 and provide logs and stuff
21:30:14 salv-orlando: Are we sure nested transactions are using additional connections? Why?
21:30:14 nested transactions are doing that at the moment because of an issue with the way we do db pooling
21:30:14 https://review.openstack.org/#/c/27265/ fixes that
21:30:14 but the issue still remains
21:30:14 so 27265 is not a fix for the db pool problem.
21:30:21 right?
21:30:28 gongysh: it fixes part of the issue
21:30:45 and also aligns us with how some of the other projects are using the db
21:31:18 salv-orlando: i hope to have the review ready for https://review.openstack.org/#/c/27265/ tomorrow (tests are failing at the moment)
21:31:39 i do not think it will be the magic bullet but as said above it will align us with the community
21:31:40 garyk: saw that, thanks
21:32:09 salv-orlando: thanks for calling attention to that bug
21:32:40 folks can comment on the bug offline and we can work on it more
21:32:48 anything else for the API?
21:32:55 and I heard nova is introducing a mysql db api, without sqlalchemy
21:33:52 salv-orlando: is it just a rumor?
21:34:02 gongysh: no idea what they're doing, but it wouldn't fix this issue
21:34:43 the root cause might as well be in sqlalchemy, but would you cut your arm off if you bruised it?
21:34:45 :)
21:35:04 * markmcclain blames eventlet
21:35:14 ok, if I find the URL of the BP, I will send it to you.
21:35:27 salv-orlando: anything else?
21:35:51 nope
21:35:59 thanks for the report
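For readers following bug 1184484, the two ideas discussed above map roughly onto plain SQLAlchemy as in the sketch below: a larger pool is only a stop-gap, while the real goal is that each request holds at most one connection and releases it as soon as its session is done. This is a generic illustration, not the quantum db layer itself; the connection URL and pool sizes are placeholders.

```python
# Generic SQLAlchemy sketch of the points above: a larger pool only postpones
# exhaustion, while releasing each request's connection as soon as its session
# is finished ("1 request = 1 connection") is the real fix.
# Not the quantum db code; URL and sizes are placeholders.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine(
    'mysql://user:password@localhost/quantum',
    pool_size=10,      # the stop-gap knob: raising this only delays the problem
    max_overflow=20,   # extra connections allowed beyond pool_size under load
)
Session = sessionmaker(bind=engine)

def handle_request(work):
    """Run one API request's db work on a single, promptly released connection."""
    session = Session()
    try:
        result = work(session)
        session.commit()
        return result
    finally:
        session.close()  # returns the connection to the pool immediately
```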
21:36:02 nati_ueno: hi
21:36:06 markmcclain: ok
21:36:29 quick update on VPN?
21:36:45 We finished moving to the new directory structure. We are still working on UT. I expect we can remove WIP in 1 or 2 weeks.
21:36:47 That's all
21:36:49 salv-orlando: I want you to experiment, hope you will not bruise it. :)
21:37:25 nati_ueno: ok, 2 weeks is right around the h2 feature cutoff
21:37:59 markmcclain: gotcha.
21:38:18 I'll do my best for 1 week to remove UT :)
21:38:25 cool
21:38:28 sorry, typo: remove WIP
21:38:40 sounds good
21:38:47 #topic Nova Integration
21:38:53 garyk: hi
21:39:15 markmcclain: no updates on the migrations
21:39:28 markmcclain: but there is a patch that I'd like people to look at
21:39:42 which one?
21:40:16 markmcclain: https://review.openstack.org/#/c/33054/
21:40:25 (sorry it took me a while to find)
21:41:02 news on bug 1192131 ?
21:41:03 ok
21:41:11 garyk: I am guessing you will talk about the host id patch too.
21:41:16 https://bugs.launchpad.net/quantum/+bug/1192131
21:41:21 that was next on the list
21:41:28 https://review.openstack.org/#/c/29767/
21:41:34 ^^^ is a big bug that is randomly breaking the gate
21:41:52 I know arosen and some of the nova devs were working on tracking it down
21:41:52 salv-orlando: no, i have not seen anything regarding https://bugs.launchpad.net/quantum/+bug/1192131
21:42:17 markmcclain: i recall seeing a mail from arosen saying it was related to eventlet. not sure
21:42:23 shared quantum client 1192131?
21:42:53 the problem with 1192131 is that folks thus far have been unable to track down which change made the gate more unstable
21:43:17 I am trying to do a version to share the admin token
21:43:21 if someone can help me reproduce the bug then i can take a look at it. i have yet to get it to reproduce
21:44:03 markmcclain: that's about all at the moment
21:44:07 garyk: it is a race that occurs in random places
21:44:12 garyk: it's elusive. Not a heisenbug, but it needs concurrency and some other condition that we still need to figure out
21:44:16 garyk: yes, that is a very random problem. IBM QA found it too.
21:44:37 is it just caused by running tempest?
21:44:49 running the gate will sometimes trigger it
21:45:34 i'll try and look at it tomorrow
21:45:40 great...we've covered some important stuff, but we're starting to run short on time
21:45:41 but nova has reverted the shared client, right?
21:45:53 gongysh: the reversion did not have an impact
21:46:04 so the revert was abandoned
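The "share the admin token" idea mentioned above (and revisited in open discussion at the end of the meeting) amounts to caching one admin keystone token on the nova side and reusing it across the many quantum client instantiations, instead of authenticating and writing a new token row each time. A rough, hypothetical sketch of that shape follows; it is not the actual nova patch, and the `authenticate` callable stands in for the real keystone plumbing.

```python
# Rough sketch of a shared admin-token cache on the nova side, as described
# above; not the actual patch.  `authenticate` is a hypothetical stand-in for
# the real keystone call and must return (token, expires_at_unix_seconds).
import threading
import time

class AdminTokenCache(object):
    """Reuse one admin token across quantum client instantiations until it
    is close to expiry, instead of fetching a fresh token per client."""

    def __init__(self, authenticate):
        self._authenticate = authenticate
        self._lock = threading.Lock()
        self._token = None
        self._expires_at = 0.0

    def get_token(self):
        with self._lock:
            # Refresh only when the cached token is missing or nearly expired.
            if self._token is None or time.time() >= self._expires_at - 60:
                self._token, self._expires_at = self._authenticate()
            return self._token
```

As noted later in the meeting, a cache like this would apply only to admin operations on the nova side, not to normal per-tenant API invocations.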
21:46:24 #topic FWaaS
21:46:32 SumitNaiksatam: quick update?
21:46:39 Hi
21:46:42 yeah, quick
21:46:44 We have most of the FWaaS patches that we were targeting for H2 in review now (except devstack and horizon).
21:46:51 API/Plugin: https://review.openstack.org/#/c/29004/ Agent: https://review.openstack.org/#/c/34064/ Driver: https://review.openstack.org/#/c/34074/ CLI: https://review.openstack.org/#/c/33187/
21:46:56 awesome
21:46:58 We are trying to work through the integration, finding and fixing bugs (hence the patches are marked WIP), but we do have the flow from the REST call reaching the driver
21:47:26 nice
21:47:36 that's the quick update, unless RajeshMohan or SridarK want to add anything or there are questions
21:47:45 nothing more to add
21:47:48 this is gary from vArmour
21:47:57 we are following Sumit's patch
21:48:09 thanks gduan for that update
21:48:12 markmcclain: the reversion is done, I see the code is reverted if my eye is right.
21:48:18 and reworking our REST API to fit into the structure
21:48:27 gduan: we will catch up
21:48:29 gduan: good to know… please feel free to comment on the work in progress
21:48:36 sure
21:48:49 gongysh: shouldn't be merged: https://review.openstack.org/#/c/33555/
21:49:09 SumitNaiksatam: Thanks for the update
21:49:13 sure
21:49:20 #topic ML2
21:49:26 rkukura or mestery?
21:49:39 markmcclain: Hi.
21:49:58 we are making progress towards the H2 BPs
21:50:15 details are on the agenda wiki
21:50:22 #link https://wiki.openstack.org/wiki/Meetings/ML2
21:50:35 mestery: anything you want to bring up here?
21:50:45 markmcclain: https://review.openstack.org/#/c/33499/
21:50:59 rkukura: Nope, other than to say if people want to talk ML2 in more detail to join the sub-team meeting on #openstack this Wednesday at 1400UTC
21:51:24 anything else from anyone on ML2?
21:51:54 thanks for updating us
21:51:56 mestery: Make that #openstack-meeting
21:52:09 rkukura: Good catch. :)
21:52:12 where are the meeting minutes of ML2?
21:52:26 #link http://eavesdrop.openstack.org/meetings/networking_ml2/
21:52:44 gongysh: Just posted (http://eavesdrop.openstack.org/meetings/networking_ml2/)
21:53:00 bookmarked it. thanks
21:53:09 #topic python client
21:53:26 no big problems here, just about 2.2.2
21:53:33 Seems that the feedback for 2.2.2a1 has been positive
21:53:46 so we'll push 2.2.2 to PyPI overnight
21:54:41 ok
21:54:49 no more from me.
21:54:57 alright
21:55:28 #topic Horizon
21:55:38 hi. sorry for my absence the last 2 weeks. I had family matters.
21:55:47 About horizon, I have good progress on the H2 horizon blueprints: secgroup support and extension-aware features.
21:56:03 SumitNaiksatam: I think it is better to move FWaaS support to H3. What do you think?
21:56:31 amotoki: sure
21:56:40 if that works better
21:56:41 SumitNaiksatam: thanks.
21:56:50 i will coordinate with you offline
21:57:10 i have no concern about the other h2 items and will check their status.
21:57:28 no more from me.
21:57:34 amotoki: welcome back and thanks for the update
21:57:49 #topic lbaas
21:58:17 enikanorov_: looks like there are a few minor items in review, but other things are stabilizing. correct?
21:58:27 right
21:58:42 major one is adding agent scheduling to the reference implementation
21:58:55 would be great if gongysh could take a look
21:59:18 gongysh: mind taking a look?
21:59:22 this one: https://review.openstack.org/#/c/32137/
21:59:27 enikanorov_: ok, it is on my todo list for these two days.
21:59:37 gongysh: thanks
21:59:51 gongysh: thanks
22:00:01 enikanorov_: anything else?
22:00:12 not this time
22:00:18 thanks for updating
22:00:24 #topic Open Discussion
22:00:52 dkehn: around?
22:01:06 I'd like to get core folks' review of https://review.openstack.org/#/c/30441/ and https://review.openstack.org/#/c/30447/; all review comments have been addressed
22:01:31 if possible
22:02:25 both amotoki and I are cores on 30441
22:02:34 we can coordinate offline
22:02:40 k
22:02:49 i am going to call it a night. good night everyone
22:02:52 Any other open discussion items?
22:02:55 going back to https://review.openstack.org/#/c/29767: is there anything we need to do as a quantum team to facilitate?
22:02:55 garyk: night
22:03:37 thanks to gongysh for his perseverance on it (the host_id from nova to quantum issue)
22:04:04 yes, I have rebased many times.
22:04:36 looks like gongysh responded to Phil Day's issue, and no change should be needed
22:04:37 nova guys marked it as low priority. I hate that.
22:05:00 SumitNaiksatam: I think we need to make sure the -1 is understood and that gongysh has responded back
22:05:11 the -1 is likely causing other reviewers to skip it
22:05:19 markmcclain: it seems gongysh's response is correct
22:05:36 i think gongysh did respond promptly, but it's a new -1 every time
22:06:05 right.. all by different reviewers
22:06:23 there are no fixed core members on it. so a new one comes in, gives it a scan, and fires a -1.
22:06:25 I know. In general it's not a good idea to -1 a patch when the reviewer has a question but no specific concern with the patch itself.
22:06:39 salv-orlando: +1
22:06:56 problem is the reviewer does not come back again after that.
22:07:03 markmcclain: anything we can do to catch the attention of two cores who can shepherd this?
22:07:24 yeah we can work offline on it
22:07:32 pushing a new patch set will clear the -1 and send a notification to all the reviewers who reviewed it beforehand
22:07:51 I had mentioned this to the nova PTL a while back, and I'll ask him again if he can raise the priority
22:08:38 yeah.. when you rebase let me know and I'll chat with a few cores offline about it
22:09:02 make sure the profile of the review is raised
22:09:15 markmcclain: rebase?
22:10:12 actually it may not be needed since it was last pushed yesterday
22:10:21 I have just rebased and fixed some conflicts because of the reversion patch https://review.openstack.org/#/c/33499/
22:15:05 k
22:15:06 we can work on this offline.. any other open discussion items?
22:15:06 there is a new problem on the nova integration side:
22:15:06 what's that?
22:15:06 gongysh: what?
22:15:06 since the quantum client will be created many times on the nova side, the keystone tokens in the db will multiply many times.
22:15:06 I think this was the purpose of the patch to use a shared quantum client.
22:15:07 which was reverted by https://review.openstack.org/#/c/33499/
22:15:07 gongysh: right, there are issues with how eventlet monkey patches.. the Http objects
22:15:08 we'll need to step back and think of a different approach
22:15:08 So I will push a shared token version on the nova side.
22:15:08 gongysh: you're saying you will use a single auth context for all resources?
22:15:08 regardless of tenant?
22:15:08 with it, the nova side can make use of the token for as long as possible.
22:15:08 no, it is just for admin features on the nova side.
22:15:08 k
22:15:08 not for normal API invocations.
22:15:42 I am done.
22:16:27 ok.. we still have to be mindful of how it's implemented
22:17:09 alright everyone, have a good night/afternoon/morning
22:17:12 #endmeeting