21:03:06 <markmcclain> #startmeeting Networking
21:03:07 <openstack> Meeting started Mon Jul 29 21:03:06 2013 UTC and is due to finish in 60 minutes.  The chair is markmcclain. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:11 <openstack> The meeting name has been set to 'networking'
21:03:16 <enikanorov> hi
21:03:27 <vbannai> Hello
21:03:46 <markmcclain> #link https://wiki.openstack.org/wiki/Network/Meetings
21:03:52 <markmcclain> #topic Announcements
21:04:36 <markmcclain> Congrats to our two new core team members: armax and mestery
21:05:00 <salv-orlando> markmcclain: have you told them about the ritual?
21:05:05 <armax> ?
21:05:06 <emagana> congratulations mestery and armax!
21:05:11 <armax> :)
21:05:12 <salv-orlando> they have to buy drinks for all the other cores at the next summit
21:05:15 <salv-orlando> one round each
21:05:19 <nati_uen_> congrat!
21:05:23 <mestery> salv-orlando: No mention of a ritual, is this something to happen in Hong Kong?
21:05:34 <salv-orlando> mestery: ^
21:05:45 <armax> ah, okay for a second I feared the worst :P
21:05:50 <mestery> salv-orlando: I will be happy to do that in Hong Kong. :)
21:05:55 <markmcclain> armax: that's the part we can talk about in public :p
21:06:08 <armax> oh my
21:06:28 <salv-orlando> ah the lies I would tell for some free booze. Don't you find it tastes a lot better when it's free?
21:06:35 <mestery> :)
21:07:09 <markmcclain> both guys will be able to help with our review bandwidth especially with the large amount of work we have to accomplish in H3
21:07:34 <armax> I hope I won't let you guys down :)
21:07:38 <garyk> they have to buy all drinks at the next summit. congratulations
21:07:59 <markmcclain> speaking of work.. we're getting close to our target number of med or high blueprints for H3
21:08:26 <markmcclain> #info armax and mestery promoted to core
21:08:52 <markmcclain> now that it is official in the logs we can continue :)
21:08:55 <markmcclain> #link https://launchpad.net/neutron/+milestone/havana-3
21:09:52 <markmcclain> when grooming the blueprint list I've become a little concerned with several of the blueprints that have not been started
21:11:40 <nati_uen_> It looks like some BPs marked 'not started' have reviews.. (QoS etc)
21:12:09 <markmcclain> which one?
21:12:58 <nati_uen_> markmcclain: ah sorry it is my mistake
21:13:39 <markmcclain> no worries.. there are several in that area and I thought I had missed one
21:14:08 <markmcclain> #topic Bugs
21:14:16 <markmcclain> https://bugs.launchpad.net/neutron/+bug/1194026
21:14:18 <uvirtbot> Launchpad bug 1194026 in neutron "check_public_network_connectivity fails with timeout" [Critical,In progress]
21:14:34 <markmcclain> This bug is still open.  nati_uen_ want to update?
21:15:05 <nati_uen_> OK. This week I found the ping timeout is just 20 sec.
21:15:15 <nati_uen_> so I fixed tempest, but it looks like it had no effect
21:15:27 <nati_uen_> I'm working on this https://review.openstack.org/#/c/37576/
21:15:49 <nati_uen_> the latest patch shows no failures in some gate runs, so I'll keep investigating
21:15:59 <markmcclain> ok.. thanks for working through it
21:16:43 <nati_uen_> I can't find a way to reproduce it in my env at this time.. This is a very nasty bug
21:17:11 <markmcclain> Yes it definitely is… I've tried to replicate it a few ways too
21:17:30 <markmcclain> Any other bugs the team should be tracking?
21:18:42 <markmcclain> #topic Docs
21:18:54 <emagana> still working on: https://bugs.launchpad.net/openstack-manuals/+bug/1202331
21:18:55 <uvirtbot> Launchpad bug 1202331 in openstack-manuals "renaming to neutron in non-networking docs " [Medium,In progress]
21:19:29 <emagana> will be completed this week!  I was busy updating a plugin, you can guess which one!
21:19:42 <markmcclain> the linuxbridge :p
21:19:59 <emagana> markmcclain: OVS  ;-)
21:19:59 <markmcclain> seriously, thanks for updating the other manuals
21:20:38 <markmcclain> Any questions for docs?
21:20:43 <emagana> on VPNaaS, will start ASAP! BP is created and assigned to me
21:20:56 <markmcclain> cool
21:20:56 <emagana> on FWaaS, Sumit can provide update
21:21:11 <nati_uen_> emagana: please assign it for me :)
21:21:11 <pcm_> I've started on https://bugs.launchpad.net/openstack-api-site/+bug/1203865
21:21:32 <uvirtbot> Launchpad bug 1203865 in openstack-api-site "Neutron VPNaaS API Docs" [Undecided,New]
21:21:34 <emagana> nati_uen_: thanks!
21:21:55 <pcm_> Waited till the quantum->neutron change occurred.
21:22:32 <markmcclain> Good idea
21:22:35 <markmcclain> #topic API
21:22:41 <markmcclain> salv-orlando: hi
21:22:54 <salv-orlando> hello people
21:23:11 <salv-orlando> on the API side, there is no major news worth mentioning
21:23:27 <enikanorov> i'd like to ask
21:23:30 <salv-orlando> I think there is not a lot left to sort out for FW and VPN
21:23:38 <salv-orlando> perhaps just about port ranges on FW
21:23:46 <salv-orlando> but I haven't checked gerrit over the weekend
21:23:55 <enikanorov> 'api core for services' blueprint has high priority for H3
21:23:57 <salv-orlando> anyway, nothing that cannot be discussed directly on gerrit
21:24:05 <salv-orlando> enikanorov: yep
21:24:13 <enikanorov> which 'service' extension is worth moving to core?
21:24:20 <enikanorov> routers, i guess?
21:24:31 <salv-orlando> I think that's related to L2/L3 split
21:25:03 <salv-orlando> IMHO, doing the blueprint you mentioned makes sense only if we manage to make progress on the L2/L3 split
21:25:23 <salv-orlando> I don't think, at this stage, we are in a position such to consider anything else core
21:25:26 <enikanorov> I see. then I don't see why it should have high priority
21:25:42 <enikanorov> as i didn't see much progress with l2/l3 split
21:26:29 <salv-orlando> I am happy to keep the same priority level on the blueprint for splitting the layer-2 and layer-3 plugins
21:26:48 <salv-orlando> and talking about that blueprint… do we have a real use case for that for Havana?
21:27:19 <mestery> salv-orlando: What was the original use case for that BP, can you refresh my memory?
21:27:38 <enikanorov> not for Havana, I guess.
21:27:56 <salv-orlando> allowing for implementing plugins which provide L3 functionality on top of another L2 plugin
21:28:00 <nati_uen_> IMO, the l2/l3 split should be high priority. I think Henry has some reviews up
21:28:23 <enikanorov> i remember that was a 5k lines patch
21:28:28 <salv-orlando> for instance you might use ML2 for L2, and then 'use-my-router-and-your-network-will-be-faster-than-ever' for L3
21:28:35 <mestery> Bob has rebased this a few times, the patch is quite large, if people are interested, I'll see if we can get this pushed out again soon.
21:28:38 <enikanorov> after having a first set of -1 it was abandoned
21:28:51 <nati_uen_> I'm also happy to review it
21:29:08 <salv-orlando> To cut a long story short, the rules for all the other patches will apply to this one too.
21:29:10 <mestery> OK, I will talk to Bob and get him to push it out, though he may be on vacation for the rest of this week.
21:29:31 <salv-orlando> We can downgrade it to low if we deem there's not enough interest around it.
21:29:48 <mestery> salv-orlando: It appears there is plenty of interest.
21:29:49 <salv-orlando> At that stage, we might also downgrade the blueprint assigned to enikanorov
21:29:59 <markmcclain> ok
21:30:18 <markmcclain> I'll put both at medium for now
21:30:19 <enikanorov> yes, that seems reasonable. but I'd even suggest postponing it to Icehouse
21:30:47 <enikanorov> (I'm talking about https://blueprints.launchpad.net/neutron/+spec/api-core-for-services now)
21:30:52 <markmcclain> salv-orlando: thoughts on deferring?
21:31:18 <salv-orlando> I have no reason for making it a priority for Havana (the layer-2/layer-3 split).
21:31:44 <salv-orlando> And even api-core-for-services… I don't seem to have any reason for making it happen in Havana
21:32:06 <salv-orlando> Let's start moving api-core-for-services to low.
21:32:14 <salv-orlando> Then we'll sync up with Bob on the other blueprint
21:32:21 <markmcclain> ok
21:32:26 <mestery> works for me too
21:32:58 <nati_uen_> It looks like some router services are being proposed, so IMO the layer-2/layer-3 split should come earlier
21:34:22 <salv-orlando> nati_uen_: I am aware of a draft blueprint for a L3 plugin, and of another 'provider router' blueprint still under discussion
21:34:50 <salv-orlando> but I'm not aware of anything else - with priority medium or higher - that will require this change for Havana
21:35:00 <nati_uen_> salv-orlando: I got it
21:35:14 <amotoki> As long as the router is implemented as a part of the core plugin, we don't need to split L2/3.
21:35:24 <salv-orlando> amotoki: correct.
21:35:51 <mestery> The split allows for something like the Embrane work to exist without a L2 plugin, though. Just FYI.
21:36:08 <salv-orlando> mestery: that is the draft I mentioned
21:36:09 <GeoffArnold> Heads-up: we'll be proposing a blueprint to address multivendor router support in the next few weeks
21:36:24 <mestery> GeoffArnold: For Havana?
21:36:31 <salv-orlando> GeoffArnold: you should propose it kind of… now
21:36:32 <GeoffArnold> No, icehouse
21:36:35 <salv-orlando> ah ok!
21:36:35 <mestery> Got it.
21:36:43 <markmcclain> Ok.. then we're fine on timing
21:36:48 <emagana> GeoffArnold: +1
21:36:55 <GeoffArnold> Early enough to prep for a useful discussion in Hong Kong
21:37:23 <salv-orlando> sure, it might be good if you check this blueprint from Bob Melander then, and see how it fits in your blueprint
21:37:39 <GeoffArnold> We're looking seriously at multivendor configurations and also virtual appliance provisioning (they're interrelated)
21:37:51 <GeoffArnold> wilco
21:38:13 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/quantum-l3-routing-plugin L3 Routing BP
21:38:55 <markmcclain> mestery: thanks for the link
21:39:15 <GeoffArnold> Yeah, I looked at that. The challenge is how to do policy-based resource allocation across resources from multiple vendors
21:39:54 <markmcclain> will be interested to read the BP
21:40:14 <mestery> Agreed, and to see how it ties into Bob's work.
21:40:26 <markmcclain> to make sure we stay on time.. I'll follow up with Bob, Salvatore
21:40:34 <markmcclain> Any other API issues to discuss?
21:40:51 <salv-orlando> markmcclain: not from me.
21:41:05 <enikanorov> nope.
21:41:15 <markmcclain> Thanks for the update
21:41:20 <amotoki> I would like to discuss the default quota API. I will send a mail to the dev ML later.
21:42:02 <amotoki> please go ahead.
21:42:11 <markmcclain> amotoki: sounds good
21:42:18 <markmcclain> #topic VPNaaS
21:42:37 <markmcclain> nati_uen_: Looks like it's getting real close and most of the iterations have been small changes
21:43:15 <nati_uen_> markmcclain: I think so too.
21:43:49 <nati_uen_> salv-orlando: Is the pending policy OK for you? Don't allow updating a resource while it's PENDING; allow delete anytime
21:44:05 <salv-orlando> lgtm
21:44:13 <nati_uen_> salv-orlando: Thanks.
21:44:23 <nati_uen_> amotoki: do you have any more concerns?
21:44:35 <amotoki> nati_uen_: nothing except PENDING above
21:44:54 <amotoki> nati_uen_: i agree with your policy
21:45:06 <nati_uen_> amotoki: latest patch is following the policy
21:45:17 <nati_uen_> amotoki: so pending issue is solved?
21:45:24 <amotoki> nati_uen_: will check
21:45:32 <nati_uen_> amotoki: thanks
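A minimal, hypothetical Python sketch of the PENDING policy described above (reject updates while a resource is in a PENDING_* state, allow deletes at any time); the names are illustrative and not the actual VPNaaS code:

```python
# Illustrative only: not the actual Neutron VPNaaS implementation.
PENDING_STATES = ('PENDING_CREATE', 'PENDING_UPDATE', 'PENDING_DELETE')


class StateInvalid(Exception):
    pass


def ensure_update_allowed(resource):
    # updates are rejected while the resource is in any PENDING_* state
    if resource.get('status') in PENDING_STATES:
        raise StateInvalid('cannot update %s while in state %s'
                           % (resource.get('id'), resource.get('status')))


def ensure_delete_allowed(resource):
    # deletes are permitted regardless of status under this policy
    return
```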
21:45:48 <nati_uen_> markmcclain: amotoki: do you guys have any concerns about the driver patch?
21:46:24 <markmcclain> need to take a last look and test a bit more
21:46:47 <amotoki> I don't see more concerns so far. will test it after the meeting.
21:46:50 <nati_uen_> markmcclain: Thanks. https://wiki.openstack.org/wiki/Quantum/VPNaaS/HowToInstall is up-to-date, so please use this
21:46:54 <nati_uen_> amotoki: Thanks
21:47:01 <markmcclain> nati_uen_: will do
21:47:05 <markmcclain> #topic Nova
21:47:08 <nati_uen_> One piece of news: heat support is in review :)  That's all from us
21:47:20 <markmcclain> nati_uen_: great!
21:48:04 <nati_uen_> markmcclain: Thanks!
21:48:16 <markmcclain> I'm sure most saw, but nova-net deprecation has been delayed a release
21:48:30 <markmcclain> So the earliest it will disappear is J
21:48:41 <garyk> do we still want to try and have quantum set as the default for devstack?
21:48:58 <markmcclain> garyk: yes
21:48:58 <mestery> I think we should try for that, yes.
21:49:00 <garyk> sorry neutron - bad habits are hard to break
21:49:17 <markmcclain> The pushback has been about passing the tempest tests unmodified
21:50:45 <markmcclain> Anything else for Nova integration?
21:51:28 <markmcclain> #topic FWaaS
21:52:05 <markmcclain> SumitNaiksatam: FW is in the same situation as VPN.. super close to merging with only minor fixups made over the last 2-3 days
21:52:16 <SumitNaiksatam> markmcclain: hi
21:52:30 <SumitNaiksatam> yeah, the patch has been stable for a few days now
21:52:47 <SumitNaiksatam> i hope 50 is a lucky number :-)
21:52:57 <markmcclain> me too :)
21:53:12 <SumitNaiksatam> i don't seem to have any pending items
21:53:53 <garyk> sorry - internet went down and came back up again. did i miss anything?
21:53:55 <SumitNaiksatam> last i checked there were 6 cores who commented on the reviews
21:54:03 <SumitNaiksatam> is everyone happy? :-)
21:54:27 <SumitNaiksatam> talking about the API patch: https://review.openstack.org/#/c/29004/
21:54:39 <markmcclain> I think so.. it is really just testing now
21:54:53 <salv-orlando> If nobody complains about dichotomy with port ranges wrt security groups I am fine too
21:55:08 <SumitNaiksatam> salv-orlando: thanks
21:56:05 <SumitNaiksatam> the agent and the driver patches are also done
21:56:09 <markmcclain> salv-orlando: you're talking n:n in one column vs range_min and range_max as separate attributes (columns)?
21:56:16 <SumitNaiksatam> but really waiting for the API patch to get through
21:56:43 <salv-orlando> markmcclain: yes
21:57:13 <markmcclain> yeah.. I've gone back and forth on it
21:57:27 <salv-orlando> sumitnaiksatam replied that after a careful review they asserted that this is widely accepted as the standard way of configuring this throughout the industry
21:57:39 <SumitNaiksatam> markmcclain, salv-orlando: this seemed more natural coming from the firewalls/iptables world
21:58:17 <SumitNaiksatam> i am not religious about this, but from talking to a lot of people it seemed this was more usable and easier
21:58:26 <salv-orlando> markmcclain: I think both solutions are totally valid. It's the fact that they are different that makes me unhappy
21:58:59 <markmcclain> understand.. that's my concern
21:59:24 <markmcclain> we can talk offline since we're running out of time
21:59:41 <salv-orlando> If we have a large consensus that the range in a  single attribute is the way to go, I'd add it to security groups and deprecate the range (keeping it for bw compatibility)
21:59:43 <salv-orlando> ok
21:59:53 <marun> -1 on a single attribute
22:00:09 <marun> we're storing this in a db
22:00:17 <marun> storing as a blob is a dumb idea
22:00:33 <SumitNaiksatam> marun: dumb???
22:00:34 <marun> maybe it works in fw land, where indexed access is easy
22:00:39 * salv-orlando please don't tell me I've opened a can of worms
22:00:42 <marun> yes, dumb in the context of db storage
22:01:04 <marun> it's not smart to concatenate columns unless there is a performance reason to do so
22:01:09 <marun> i'm not saying this has to affect the api
22:01:16 <marun> but storage should be separate
22:01:46 <SumitNaiksatam> marun: concatenation of columns? not sure if you have looked at the patch
22:02:01 <marun> i heard 'one column vs range_min and range_max'
22:02:01 <SumitNaiksatam> i think we are digressing here talking about performance
22:02:04 <nati_uen_> yeah, we should discuss it on the review
22:02:19 <marun> fair enough
22:02:23 <markmcclain> yeah.. we can chat after in #openstack-neutron
22:02:37 <markmcclain> we're at an hour and have several folks staying up late
22:02:41 <SumitNaiksatam> markmcclain: sounds good, lets sort this out and move on
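A minimal illustrative sketch of the two representations being compared above (a single firewall-style 'min:max' attribute versus the security-group style split attributes); this is hypothetical example code, not the FWaaS patch under review:

```python
# Illustrative only: not the code from https://review.openstack.org/#/c/29004/
def parse_port_range(value):
    """Parse a firewall-rule style port attribute such as '80' or '80:90'."""
    if ':' in value:
        low, high = value.split(':', 1)
    else:
        low = high = value
    return int(low), int(high)


# security-group style: two explicit attributes, naturally stored as two columns
sg_rule = {'port_range_min': 80, 'port_range_max': 90}

# firewall-rule style: one 'min:max' attribute; marun's point is that even if
# the API keeps the single attribute, the database can still store the parsed
# min/max in separate columns rather than one concatenated value
fw_rule = {'destination_port': '80:90'}
assert parse_port_range(fw_rule['destination_port']) == (80, 90)
```

Either API shape can map onto split storage columns; the open question in the discussion is only whether the two APIs should stay different.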
22:02:51 <markmcclain> #topic LBaaS
22:02:56 <markmcclain> enikanorov: anything new?
22:02:59 <enikanorov> yep
22:03:09 <enikanorov> i'd like to have this bp in for h-3: https://blueprints.launchpad.net/neutron/+spec/lbaas-integration-with-service-types
22:03:22 <enikanorov> i've also posted a question to dev ML
22:03:31 <markmcclain> I set the milestone to H3
22:03:32 <enikanorov> regarding the possible issue in API
22:03:36 <markmcclain> is not showing that way for you?
22:03:40 <enikanorov> markmcclain: but it's not approved yet
22:03:59 <markmcclain> sorry.. too many boxes to check on different screens.. I'll fix that
22:04:05 <enikanorov> thanks!
22:04:35 <markmcclain> ok.. I'll look at the ML for the API issue
22:04:35 <enikanorov> beside that, SumitNaiksatam, nati_uen_, salv-orlando, please share your thoughts on the question on the ML
22:04:49 <enikanorov> thanks!
22:04:53 <enikanorov> that's all from my side
22:05:00 <markmcclain> Thanks enikanorov
22:05:02 <salv-orlando> enikanorov: will do
22:05:06 <markmcclain> #topic Stable Branch
22:05:13 <nati_uen_> enikanorov: sure
22:05:35 <markmcclain> garyk sent me a text that his connection dropped, but wanted everyone to know that the next Stable release will be Aug 8th
22:05:57 <markmcclain> #topic Horizon
22:06:06 <markmcclain> amotoki: Any important items to highlight?
22:06:25 <amotoki> FWaaS beta is available
22:06:47 <mestery> markmcclain: For stable, I'd like to see if we can get this bug in there: https://bugs.launchpad.net/neutron/+bug/1204125
22:06:48 <uvirtbot> Launchpad bug 1204125 in neutron "Neutron DHCP agent generates invalid hostnames for new version of dnsmasq" [Medium,Fix committed]
22:06:59 <amotoki> And a default quota read API discussion has been raised in a horizon bug. I will send a mail later.
22:07:03 <mestery> markmcclain: I marked it as backport potential.
22:07:03 <amotoki> that's all.
22:07:18 <markmcclain> mestery: yeah.. that's on the list
22:07:32 <mestery> markmcclain: Thanks (sorry for being late here, was digging the link out)
22:08:08 <markmcclain> amotoki: cool.. I'll have to try the FWaaS UI
22:08:17 <markmcclain> Any other questions on Horizon?
22:09:04 <markmcclain> #topic Open Discussion
22:09:22 <markmcclain> Anything we didn't cover that needs to be talked about?
22:09:24 <mestery> FYI: I have a new revision of the ML2 devstack patch out for review here: https://bugs.launchpad.net/devstack/+bug/1200767
22:09:25 <uvirtbot> Launchpad bug 1200767 in devstack "Add support for setting extra network options for ML2 plugin" [Undecided,In progress]
22:09:28 <nati_uen_> markmcclain: When we discuss service-agent?
22:09:36 <mestery> Anyone who wants to try it out, please do and provide feedback.
22:09:48 <markmcclain> nati_uen_: thanks for the reminder
22:11:19 <markmcclain> IRC chat about service agents Wednesday 1530 UTC in #openstack-meeting-alt
22:11:37 <nati_uen_> markmcclain: sure. Thanks :)
22:12:01 <markmcclain> we will discuss the high level changes needed to the l3 agent now that VPN and FW will be available
22:13:16 <markmcclain> mestery:  thanks for sharing the devstack link
22:13:24 <markmcclain> Anything else for this week?
22:13:50 <dkehn> https://review.openstack.org/#/c/30447/
22:13:59 <salv-orlando> yeah, I would like to rewrite Neutron in javascript
22:14:03 <salv-orlando> jk :)
22:14:14 <markmcclain> salv-orlando: I want LISP first
22:14:27 <mestery> salv-orlando: Why not Ruby?
22:14:34 <salv-orlando> I was about to say Eiffel, but then people might have called me reactionary
22:14:35 <nati_uen_> erlang erlang erlang!
22:14:50 <markmcclain> dkehn: I'll take a look and test concurrently with API changes
22:15:01 <dkehn> markmcclain, thx
22:15:35 <amotoki> dkehn: sorry for the late response. I'll look at it.
22:15:43 <dkehn> sweet
22:15:45 <HenryG> I cannot run neutron unit test suite twice in a row. Anyone else seeing this?
22:15:58 <markmcclain> HenryG: No
22:16:02 <markmcclain> How are you invoking?
22:16:07 <dkehn> amotoki, it addresses all the suggestions that you brought up
22:16:09 <HenryG> tox -e py27
22:16:28 <mestery> HenryG: I occasionally see random failures. Try a third run. :)
22:16:37 <nati_uen_> neutron.tests.unit.ml2.test_agent_scheduler.Ml2AgentSchedulerTestCase.test_network_add_to_dhcp_agent always fails in my env, but it is ok on gating
22:16:39 <dkehn> not I
22:17:01 <clarkb> HenryG: testr will actually reorder tests based on previous runtimes to try and maximize throughput
22:17:04 <amotoki> dkehn: Thanks for addressing them. I will check both API and CLI sides.
22:17:08 <enikanorov> nati_uen_: the same failures in my env
22:17:13 <clarkb> HenryG: it's possible that reordering results in a thing that fails
22:17:28 <clarkb> HenryG: nati_uen_ it is probably worth debugging the failures as chances are they are real bugs
22:17:29 <nati_uen_> enikanorov: I'm glad to hear I'm not the only one :)
22:17:29 <dkehn> amotoki, please read the review notes there was a side effect on one of them
22:17:43 <markmcclain> clarkb: agreed
22:17:53 <clarkb> HenryG: nati_uen_ `testr run --analyze-isolation` after a failed run is very useful for finding test interactions that cause failures
22:17:58 <enikanorov> and they're not failing if run alone
22:18:13 <nati_uen_> clarkb: My assumption is the function is using a wait timer; if the test exceeds 1 sec, it will fail
22:18:16 <mestery> I've got to drop off folks, have a good night!
22:18:22 <nati_uen_> clarkb: I agree, we should debug it
22:18:29 <nati_uen_> clarkb: Thanks
22:18:31 <markmcclain> mestery: bye
22:18:41 <nati_uen_> mestery: bye!
22:19:05 <clarkb> enikanorov: that usually indicates there is a second test (or more) that is interfering with your test
22:19:11 <HenryG> Thanks clarkb - I will try your suggestions when I get a chance. Will ping you if I have further questions.
22:19:16 <clarkb> enikanorov: by running concurrently or before the failing test
22:19:41 <enikanorov> clarkb: i understand. the failures are pretty stable. 100% i'd say
22:20:18 <markmcclain> It is very likely that we have a few tests that unintentionally don't clean up or that change read-only data structures
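A hypothetical minimal example of the kind of cross-test interference markmcclain describes (not taken from the Neutron tree): one test mutates module-level data, and without a cleanup a later test fails only when it runs after it.

```python
import unittest

SUPPORTED_TYPES = ['vlan', 'gre']  # stands in for shared module-level data


class TestMutatesSharedData(unittest.TestCase):
    def test_adds_type(self):
        SUPPORTED_TYPES.append('vxlan')
        # the usual fix: register a cleanup so the mutation never leaks
        self.addCleanup(SUPPORTED_TYPES.remove, 'vxlan')
        self.assertIn('vxlan', SUPPORTED_TYPES)


class TestAssumesDefaults(unittest.TestCase):
    def test_defaults_unchanged(self):
        # would fail after test_adds_type if the cleanup above were missing
        self.assertEqual(['vlan', 'gre'], SUPPORTED_TYPES)


if __name__ == '__main__':
    unittest.main()
```

clarkb's suggestion of `testr run --analyze-isolation` is the usual way to pin down which pair of tests interacts like this.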
22:20:59 <markmcclain> Everyone have a good evening/morning/afternoon and talk to everyone on the mailing list and IRC
22:21:03 <markmcclain> #endmeeting