14:00:16 <mestery> #startmeeting networking_ml2
14:00:17 <openstack> Meeting started Wed Jul  3 14:00:16 2013 UTC.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 <openstack> The meeting name has been set to 'networking_ml2'
14:00:23 <rkukura> good morning!
14:00:33 <mestery> #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:00:39 <rkukura> or afternoon!
14:00:55 <mestery> I thought we could run through action items from last week quickly.
14:01:22 <mestery> There was some confusion on the "Instance ID" from Nova. I opened a blueprint, but will close it per comments from rkukura. https://blueprints.launchpad.net/nova/+spec/vm-instance-id-neutron
14:01:38 <mestery> #topic Action Items From Last Week
14:01:52 <mestery> I also opened a bug for proper OVS agent tunnel programming: https://bugs.launchpad.net/neutron/+bug/1196963
14:01:58 <rkukura> the device_id attribute already contains the instance ID
14:02:15 <mestery> rkukura: Thanks for correcting my understanding there. :)
14:02:29 <mestery> #link https://wiki.openstack.org/wiki/Neutron/ML2 ML2 Wiki Page
14:02:30 <apech> yes, sorry for the confusion in proposing this :) nice to see this is already there
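For context on the point above: the Nova instance UUID is already carried on each compute port as device_id, so nothing new is needed from Nova. A minimal sketch, assuming the standard port fields and the python-neutronclient of the time (credentials and endpoint below are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Ports created by Nova have a device_owner starting with 'compute:' and
    # already carry the instance UUID in device_id, so no extra attribute is needed.
    for port in neutron.list_ports()['ports']:
        if port['device_owner'].startswith('compute:'):
            print("port %s belongs to instance %s" % (port['id'], port['device_id']))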
14:02:48 <mestery> I added an ML2 Wiki page, so we can now put things like devstack info there.
14:03:08 <rkukura> I'll take an action to add description text and links to slides to the wiki page
14:03:15 <mestery> rkukura: Thanks!
14:03:27 <mestery> #action rkukura to update ML2 wiki with text and slides
14:03:41 <mestery> rkukura: Thanks for updating this review with ML2 comments! https://review.openstack.org/#/c/33736/
14:04:09 <rkukura> Is arosen here?
14:04:29 <mestery> rkukura: May be too early for him. :)
14:04:46 <rkukura> I spoke with him about this, and am starting to think his approach makes sense for our BP
14:05:07 <mestery> That's great news actually! I'll review it closer as well, though I saw your comments there.
14:05:12 <rkukura> Basically, expose our segment_list as a single attribute
14:05:37 <mestery> That sounds like it will work.
14:05:43 <rkukura> I don't like the term "transport_zones" for segment_list though
14:06:13 <mestery> OK, one more action item was for rcurran to send some notes on common code for MechanismDrivers.
14:06:21 <mestery> rcurran: How goes that?
14:06:38 <rcurran> I had actually already started that email thread before last week's IRC
14:06:53 <rkukura> If anyone feels full-fledged REST resources are needed for segments, please get involved in Aaron's review
14:07:19 <mestery> #action ML2 team to review https://review.openstack.org/#/c/33736/ in the context of multi-segment ML2
14:07:22 <apech> rkukura: you mean the ability to read/write the segment list directly from standardized Neutron APIs?
14:08:02 <rkukura> apech: Yes, although this approach does allow updating the list.
14:08:43 <rkukura> arosen pointed out that only admins would ever see this segment resource
14:09:31 <rkukura> I also like the extensibility of the list-of-dicts vs. fixed fields, but am a bit concerned about losing queryability
14:10:02 <apech> Seems like these are details that are easily hidden from the user, so admin-only access may be okay
14:10:24 <mestery> rkukura: If we think this will cover ML2 multi-segment, should we look at having arosen make his work more generic to cover that BP?
14:11:12 <rkukura> mestery: The idea would be for his patch to [re]define the extension API, and our BP would cover implementing it for ml2
14:11:21 <mestery> rkukura: Got it.
14:11:43 <rkukura> I want to formalize how this co-exists with the current provider extension
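To make the approach above concrete, a network under this model would expose its segments as a single list-of-dicts attribute rather than as separate REST resources. A hypothetical sketch (the 'segments' attribute name and key names are assumptions patterned on the existing provider extension, not the settled API):

    # Illustrative only; admin-only visibility, per the discussion above.
    network = {
        'id': 'NETWORK-UUID',
        'name': 'multi-segment-net',
        'segments': [
            {'provider:network_type': 'vlan',
             'provider:physical_network': 'physnet1',
             'provider:segmentation_id': 101},
            {'provider:network_type': 'gre',
             'provider:physical_network': None,
             'provider:segmentation_id': 7001},
        ],
    }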
14:12:06 <mestery> OK, let's move on to the next agenda item now.
14:12:29 <mestery> Wanted to point out rcurran's email from a week ago; folks writing MechanismDrivers should have a look at it and respond as necessary.
14:12:48 <mestery> Let's move on to blueprint updates now.
14:12:53 <mestery> #topic Blueprint Updates
14:12:59 <apech> mestery: I thought the conclusion from last week's discussion was that rcurran was going to send out the common code he had in mind
14:13:07 <apech> that'd help. I'll certainly reread and respond too
14:13:29 <mestery> apech: I think it was email and/or notes. :)
14:13:38 <rcurran> yes, but i had already sent out that email on 21-Jun
14:13:38 <apech> ah okay. sorry, missed that. will look
14:13:47 <mestery> apech: No worries.
14:13:59 <mestery> OK, let's start with apech's MechanismDriver BP
14:14:06 <mestery> #link https://review.openstack.org/33201 Review for MechanismDriver BP
14:14:20 <garyk> markmcclain: ping
14:14:30 <mestery> garyk: Hi
14:14:39 <mestery> apech: Any updates for everyone?
14:14:53 <markmcclain> o/
14:14:58 <apech> I sent out an update last night, which then promptly failed pep8 for a last minute change. About to re-update
14:15:05 <apech> I think it's getting close - appreciate the comments
14:15:16 <mestery> garyk markmcclain: FYI, we're in the middle of the ML2 meeting on this channel. :)
14:15:19 <apech> rkukura - think you'll have time to take a deeper look soon?
14:15:30 <rkukura> yes
14:15:38 <garyk> mestery: oops. sorry. wrong channel :)
14:16:05 <markmcclain> mestery: sorry.. I thought there was something you all wanted me to look at
14:16:35 <mestery> apech: I need to look at the latest version of your patch as well based on our discussion on gerrit on a prior version.
14:17:00 <mestery> Any other questions for apech on MechanismDriver BP?
14:17:09 <apech> mestery: great, thanks
14:17:50 <mestery> Given the likely long holiday weekend here in the US this week, should we shoot for getting this BP merged early next week?
14:17:54 <rkukura> apech: It seems it's getting very close, and just minor details should need changing
14:18:42 <apech> mestery: works for me. Hopefully others can just pull in changes to unblock their own development of ml2 mechanism drivers
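For readers new to the blueprint, the rough shape of the MechanismDriver interface under review is a set of per-resource hooks called inside the DB transaction (precommit) and after it commits (postcommit). The sketch below is hypothetical; exact class and method names were still being settled in the review:

    from abc import ABCMeta, abstractmethod


    class MechanismDriver(object):
        """Hypothetical base: precommit hooks run inside the DB transaction,
        postcommit hooks run after it commits and may call out to a backend."""
        __metaclass__ = ABCMeta

        @abstractmethod
        def initialize(self):
            pass

        def create_network_precommit(self, context):
            pass

        def create_network_postcommit(self, context):
            pass


    class LoggingMechanismDriver(MechanismDriver):
        """Toy driver that only logs; a real driver would program its device here."""

        def initialize(self):
            pass

        def create_network_postcommit(self, context):
            # In this sketch, context.current holds the network dict being created.
            print("created network %s" % context.current['id'])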
14:19:14 <mestery> #action ML2 subteam to review MechanismDriver blueprint with the goal of having it merge by early next week.
14:19:25 <mestery> OK, let's move on.
14:19:33 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ML2 PortBinding
14:19:42 <mestery> rkukura: Any updates?
14:20:02 <rkukura> no progress yet, but will start this weekend (when other work will hopefully slow down)
14:20:33 <apech> rkukura: is your goal still to try to do this in H2? not sure how long you think this will take
14:21:14 <rkukura> apech: I'd like to get it in H2; it shouldn't be too much code given the MechanismDriver work already in review
14:21:24 <apech> rkukura: great, thanks!
14:21:31 <mestery> Thanks for the updates rkukura!
14:21:40 <mestery> OK, any questions for PortBinding?
14:21:45 <Sukhdev_> rkukura: any eta?
14:22:01 <rkukura> code in review by next week's meeting at latest
14:22:25 <rkukura> which is the H2 freeze, I think
14:22:25 <Sukhdev_> rkukura: thanks
14:22:36 <mestery> OK, let's move to the next agenda item.
14:22:46 <mestery> #link https://review.openstack.org/33297 ML2 GRE Code Review
14:22:52 <mestery> matrohon: Here?
14:22:58 <matrohon> mestery: yes
14:23:00 <matrohon> hi
14:23:06 <mestery> hi matrohon!
14:23:12 <mestery> How goes the bp/ml2-gre work?
14:23:53 <matrohon> it should be OK to merge as soon as I take the review comments into account
14:24:03 <matrohon> there are only nits
14:24:19 <mestery> matrohon: Great! And apologies for my git review mishap which resulted in me rebasing your commit. :)
14:24:33 <matrohon> mestery: it's ok :)
14:24:36 <mestery> The instructions on the wiki for dependent commits were not quite right, it turns out. :)
14:24:52 <matrohon> but I'd like rkukura to validate the architecture
14:25:06 <matrohon> with tunnel_type.py
14:25:20 <matrohon> and the abstract methods to handle endpoint management
14:25:21 <rkukura> OK
14:25:24 <mestery> matrohon: I agree, as the bp/ml2-vxlan is dependent on that as well.
14:25:39 <mestery> #link https://review.openstack.org/#/c/35384/2 ML2 VXLAN Code Review
14:25:57 <mestery> This was pushed out yesterday, and is dependent on matrohon's GRE work.
14:26:24 <matrohon> rkukura: you were talking about thinking of a better way to handle generic RPC calls
14:26:58 <matrohon> and to dispatch them in the driver, no?
14:27:05 <rkukura> Looks like your TunnelTypeDriver may be more-or-less what I was thinking
14:27:18 <matrohon> rkukura: ok great!
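For reference, the tunnel_type.py idea discussed above amounts to a shared base class that factors endpoint management out of the individual GRE/VXLAN type drivers. A rough sketch under assumed names (not the code under review):

    from abc import ABCMeta, abstractmethod


    class TunnelTypeDriver(object):
        """Hypothetical common base for tunnel-based type drivers: each driver keeps
        its own allocation table but exposes the same endpoint operations to RPC."""
        __metaclass__ = ABCMeta

        @abstractmethod
        def add_endpoint(self, ip):
            """Record a tunnel endpoint (agent IP) reported over RPC."""

        @abstractmethod
        def get_endpoints(self):
            """Return all known endpoints so agents can build their tunnel mesh."""


    class GreTypeDriver(TunnelTypeDriver):
        def __init__(self):
            self._endpoints = set()

        def add_endpoint(self, ip):
            self._endpoints.add(ip)
            return ip

        def get_endpoints(self):
            return [{'ip_address': ip} for ip in sorted(self._endpoints)]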
14:27:47 <mestery> OK, looks like both GRE and VXLAN BPs are moving along nicely then.
14:27:53 <matrohon> mestery: I reviewed your code about vxlan
14:27:53 <rkukura> We may also want more general ability for drivers to mix-in RPC handlers
14:28:06 <mestery> matrohon: I saw that, thanks! I will plan to address comments today, appreciate it!
14:28:33 <mestery> matrohon: I think your direction on the multicast group is a good one, and I'll address that today.
14:29:07 <matrohon> rkukura: ok, do you want us to think about that before ml2-gre and vxlan get merged?
14:29:46 <rkukura> is this the issue of storing multicast groups with the endpoints, vs configuring single group?
14:30:34 <rkukura> I could be way off base on that
14:30:45 <matrohon> rkukura : I proposed to store the multicast group in the VXLAN allocation table
14:32:37 <mestery> Let's continue the VXLAN multicast discussion in the review and on the mailing list.
14:32:46 <rkukura> What is the disposition of "I even wonder if it's really useful, in this first implementation, to store multicast group in db if it has to be the same for every VNI."?
14:33:31 <rkukura> I was interpreting this as suggestion to not store groups in DB
14:33:40 <rkukura> OK with moving to email/gerrit
14:33:49 <mestery> rkukura: Sorry, please continue.
14:34:00 <matrohon> rkukura : yes, since bp vxlan-linuxbridge uses a single multicast group for every VNI
14:34:20 <mestery> matrohon rkukura: The crux of the issue is do we want to support more than one multicast group or not?
14:34:32 <mestery> For the first cut of the code, I planned to support a single one for simplicity.
14:34:36 <mestery> Thoughts?
14:35:06 <matrohon> mestery: exactly, it's not necessary at first, but you should have this feature in the future
14:35:26 <rkukura> I'm for keeping it simple until we are sure complexity is needed
14:35:35 <mestery> matrohon: OK, I can file a blueprint to track this.
14:35:51 <mestery> #action BP ml2/vxlan to support a single multicast group in first iteration
14:35:52 <matrohon> mestery : ok, great
14:36:13 <mestery> #action mestery to file BP to add support for multiple multicast addresses to ML2 VXLAN code
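A minimal sketch of the first-cut behaviour agreed above: one configurable multicast group applied to every VNI, with a lookup hook that a later iteration could redirect to a per-VNI column in the VXLAN allocation table. The option and section names are assumptions:

    from oslo.config import cfg  # oslo.config namespace as of Havana

    vxlan_opts = [
        cfg.StrOpt('vxlan_group', default=None,
                   help="Multicast group used for broadcast emulation on all VNIs"),
    ]
    cfg.CONF.register_opts(vxlan_opts, 'ml2_type_vxlan')


    def get_multicast_group(vni):
        # First iteration: ignore the VNI and return the single configured group.
        # A later iteration could consult the allocation table per VNI instead.
        return cfg.CONF.ml2_type_vxlan.vxlan_group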
14:36:33 <mestery> OK, any more GRE or VXLAN discussion before we move on?
14:37:18 <mestery> The next item was ml2-multi-segment-api, but I believe we already discussed this earlier in the meeting.
14:37:23 <rkukura> right
14:37:25 <mestery> Do we need to discuss anything else on this now?
14:37:38 <rkukura> Just that we might track it for H3
14:37:55 <mestery> rkukura: Good point. We should target that BP at H3 then, right?
14:38:00 <rkukura> it was low priority; maybe change it to medium if we agree on the simple approach
14:38:35 <Sukhdev_> I wanted to ask a question - I filed the BP for the Arista driver; how come I do not see it in the Havana list?
14:38:53 <mestery> Sukhdev_: Did you target it for H2/H3?
14:39:02 <Sukhdev_> Do I have to take any additional step to include it in havana?
14:39:24 <Sukhdev_> I did not specify - I thought the approver does that
14:39:30 <rkukura> I'll target it
14:39:42 <mestery> rkukura: Thanks!
14:39:44 <Sukhdev_> rkukura: thanks
14:39:55 <mestery> OK, moving on to the next agenda item.
14:39:58 <mestery> #topic Bugs
14:40:04 <rkukura> does this replace the original hardwaredriver BP?
14:40:28 <mestery> #link https://review.openstack.org/#/c/33107/ OVS agent tunnel_types bug
14:40:31 <Sukhdev_> rkukura: yes
14:40:39 <apech> rkukura: yes, original hardwaredriver BP can go away
14:40:44 <rkukura> H2 or H3?
14:41:02 <Sukhdev_> H3
14:41:08 <mestery> rkukura: Yong gave me a -2, and I thought this was so close.
14:41:41 <mestery> #link https://docs.google.com/a/mestery.com/document/d/1NT3JVn2lNk_Hp7lP7spc3ysWgSyHa4V0pYELAiePD1s/edit#heading=h.4grgudkj8ei3 ML2 OVS Agent Changes Design
14:41:55 <mestery> I added a spec on what the OVS Agent will look like after the changes are done.
14:42:00 <mestery> rkukura: Your review would be appreciated!
14:42:17 <rkukura> I think he just wanted to understand the plan, and the writeup should help
14:42:37 <mestery> Yes, agreed. I am now thinking of going all the way and implementing everything in the document.
14:42:49 <mestery> e.g. deprecate enable_tunneling in the server, add tunnel_types into the 'ovs' section, etc.
14:43:30 <matrohon> mestery : makes sense
14:43:51 <rkukura> mestery: Two comments on that: 1) VLANs can already co-exist with flat, local, and gre networks. 2) should emphasize openvswitch agent supporting multiple tunnel types concurrently (with ml2) is goal
14:44:10 <mestery> rkukura: Thank you, will update with those comments.
14:44:29 <mestery> I'll plan a new version of the tunnel_types patch with the changes from the document for early next week at the latest.
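A hedged sketch of the configuration direction described in the design document: the agent takes a list of tunnel types (so ml2 can run gre and vxlan concurrently) while enable_tunneling becomes a deprecated alias. Option and section names are assumptions from the discussion, not the merged patch:

    from oslo.config import cfg

    agent_opts = [
        cfg.ListOpt('tunnel_types', default=[],
                    help="Tunnel network types supported by the agent "
                         "(e.g. gre, vxlan); supersedes enable_tunneling"),
        cfg.BoolOpt('enable_tunneling', default=False,
                    help="DEPRECATED: use tunnel_types instead"),
    ]
    cfg.CONF.register_opts(agent_opts, 'OVS')

    # Backwards-compatibility shim: treat enable_tunneling=True as tunnel_types=[gre].
    tunnel_types = cfg.CONF.OVS.tunnel_types
    if not tunnel_types and cfg.CONF.OVS.enable_tunneling:
        tunnel_types = ['gre']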
14:44:52 <rkukura> OK - let's solicit feedback on the writeup on openstack-dev
14:45:29 <rkukura> we kind of have our own sandbox with ml2, but when we change agents or legacy plugins, people pay more attention
14:45:30 <mestery> rkukura: I sent an email to that effect, I believe.
14:45:36 <matrohon> mestery : I assigned this bug to myself : https://bugs.launchpad.net/neutron/+bug/1196963
14:45:37 <uvirtbot> Launchpad bug 1196963 in neutron "Update the OVS agent code to program tunnels using ports instead of tunnel IDs" [Medium,New]
14:45:46 <matrohon> mestery : is that OK with you?
14:45:55 <mestery> matrohon: I was going to discuss that one next. :)
14:46:01 <mestery> matrohon: And yes, thank you for taking that one up!
14:46:06 <matrohon> mestery : ok sorry :)
14:46:21 <mestery> #action mestery to send email to openstack-dev for the OVS Agent Writeup
14:46:51 <mestery> matrohon: Do you think the bug you mentioned will be merged by H2?
14:47:01 <mestery> For ML2, it will be important I think.
14:47:54 <matrohon> mestery : I will try to work on it ASAP
14:48:06 <mestery> matrohon: Great, thank you!
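For clarity on bug 1196963: the intent is to move the OVS agent from keying tunnel flows on a global tunnel ID to creating one tunnel port per remote endpoint. A purely illustrative pseudo-sketch (all names hypothetical, not the agent's real code):

    def sync_tunnel_ports(bridge, local_ip, endpoints, tunnel_type='gre'):
        """bridge is assumed to expose add_tunnel_port(); endpoints is the list of
        remote agent IPs learned over RPC."""
        ports = {}
        for remote_ip in endpoints:
            if remote_ip == local_ip:
                continue
            # One OVS tunnel port per remote endpoint; flooding flows are then
            # keyed on these ports rather than on a shared tunnel ID.
            port_name = '%s-%s' % (tunnel_type, remote_ip.replace('.', '-'))
            ports[remote_ip] = bridge.add_tunnel_port(port_name, remote_ip,
                                                      local_ip, tunnel_type)
        return ports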
14:48:18 <matrohon> is there a feature freeze for H2?
14:49:12 <mestery> matrohon: Do you mean after H2?
14:50:08 <matrohon> I mean, is there a date that I have to respect, to leave time for review?
14:50:08 <rkukura> H2 is 7/18, but I think a freeze on 7/10 was mentioned in the meeting
14:50:17 <matrohon> rkukura : ok
14:51:03 <rkukura> Maybe not 7/10: "<markmcclain> Also it is now July, which means were are 10 days away from H2 feature freeze."
14:51:19 <mestery> Also, keep in mind gerrit and CI are down this weekend for a day.
14:51:27 <rkukura> could be business days
14:51:28 <mestery> And with the name change, that may cause some shifting and churn.
14:51:33 <markmcclain> freeze is 7.10
14:51:51 <mestery> markmcclain: thanks!
14:51:53 <markmcclain> branch will be cut July 16th
14:52:40 <rkukura> markmcclain: End of day 7/10?
14:53:06 <markmcclain> yes
14:53:42 <mestery> OK, we're running low on time, any other bugs people want to discuss now related to ML2?
14:54:02 <apech> mestery: I'm all good
14:54:25 <mestery> #topic Questions/Comments?
14:54:25 <rkukura> I'm good
14:54:41 <mestery> OK, thanks for everyone's great work on all the ML2 items!
14:54:53 <mestery> For those in the US, have a great holiday this week!
14:54:55 <apech> thanks! Happy 4th
14:54:58 <mestery> #endmeeting