16:01:44 <rkukura> #startmeeting networking_ml2
16:01:44 <openstack> Meeting started Wed Jan 29 16:01:44 2014 UTC and is due to finish in 60 minutes.  The chair is rkukura. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:47 <openstack> The meeting name has been set to 'networking_ml2'
16:01:48 <trinaths> Good Morning ..all
16:01:53 <banix> Hi
16:01:58 <rkukura> #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
16:03:02 <trinaths> I'm trinath .. working on the FSL SDN mechanism driver..
16:03:17 <rkukura> #topic Action Item Review
16:03:35 <rkukura> hi trinaths! welcome!
16:03:51 <irenab> hi
16:04:07 <Sukhdev> Hi
16:04:25 <sadasu> Hi! Working on the UCS manager mechanism driver supporting sr-iov
16:04:34 <rkukura> looks like the only AI was for me to send a summary of ML2 port binding to the list
16:05:09 <trinaths> this is my first time at the meeting.. how do we start?
16:05:33 <rkukura> I just posted a proposal which covers flow of info from bound MechanismDriver to nova's GenericVIFDriver
16:05:48 <rkukura> #link http://lists.openstack.org/pipermail/openstack-dev/2014-January/025812.html
16:06:05 <irenab> rkukura: would add the binding:profile to the discussion?
16:06:16 <rkukura> irenab: agreed
16:06:17 <sadasu> rkukura: just went through your summary on portbinding changes wrt ML2
16:06:44 <sadasu> thanks... I like the generic solution for passing info to the GenericVIFDriver via the bound mech drivers
16:07:10 <rkukura> any feedback on the proposal to use binding:vif_details for both VIF security and PCI details?
16:07:56 <irenab> rkukura: will follow up on the mailing list after a deeper review
16:08:20 <rkukura> This proposal is purely for data flowing out from the plugin/driver, not for input data, so it is read-only
16:08:47 <irenab> but I'm not sure it makes sense to add PCI details to every port (even as None)
16:08:59 <sadasu> rkukura: agree with the generic idea, is it possible to go into more details?
16:09:10 <rkukura> As irenab mentioned, we will be filing a BP to implement binding:profile in ML2 to handle data flowing into the plugin/driver
16:09:42 <Sukhdev> I have not read the proposal (will review it later) - so, the mech drivers will push the info back to the ML2 plugin and then this info gets pushed to nova, right?
16:10:00 <sadasu> can a port have both VIF security and PCI address info attached to it?
16:10:09 <sadasu> will this proposal handle that case too?
16:10:14 <rkukura> sadasu: It's really what's already in Nachi's patches, just renamed from binding:vif_security to binding:vif_details so it can be used for other things.
16:10:56 <irenab> rkukura: will this vif_info be available via get_device_details for agents?
16:11:01 <rkukura> sadasu: yes - the set of key/value pairs in binding:vif_details depends on the value of binding:vif_type
16:12:48 <rkukura> irenab: I think we need a separate effort to involve the bound MD in responding to the get_device_details RPC. Is that needed for PCI-passthru?
16:12:57 <sadasu> rkukura: ok..agreed
16:14:02 <sadasu> rkukura: I think I need it for my case..will have to get back to you
16:14:12 <irenab> rkukura: not sure, maybe needed. So the current patch does not extend the device_details with vif_info, right?
16:14:16 <rkukura> Let's discuss feedback on the binding:vif_details proposal on the list, and hopefully get Nachi on board with a plan to finally resolve the VIF security issue
16:14:32 <matrohon> irenab, rkukura : this looks like asomya's proposal - MDs should be able to add info to get_device_details
16:14:51 <rkukura> matrohon: Yes, that is what I was saying is a separate effort.
16:14:55 <matrohon> but maybe not the same info as what's returned to nova
16:14:58 <irenab> matrohon: can you please post the link?
16:15:08 <rkukura> We need to know whether it's a priority for Icehouse
16:15:13 <amotoki> hi, i just read rkukura's proposal of vif_details on the dev list.
16:15:22 <amotoki> the binding:* attributes are all vif details in a sense....
16:15:47 <amotoki> do we want vif_details to be a generic dictionary?
16:16:13 <rkukura> amotoki: True, but do we want a proliferation of lots of top-level attributes that aren't for end users? Or is one dictionary sufficient for the MD->VIFDriver path?
16:17:27 <rkukura> amotoki: Yes, the proposal is for binding:vif_details to be a generic dictionary whose contents are interpreted based on the value of binding:vif_type.
16:17:43 <amotoki> understood.
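To make the proposal concrete, here is a hypothetical illustration of binding:vif_details as a generic dictionary whose keys are interpreted per binding:vif_type; the specific keys and vif_type values shown are examples only, not a settled schema.

```python
# Hypothetical port attribute values under the binding:vif_details proposal
# (illustrative keys only - the real schema is what the list discussion
# is meant to settle).

# An OVS-bound port might carry VIF security info for nova's GenericVIFDriver:
ovs_port = {
    'binding:vif_type': 'ovs',
    'binding:vif_details': {'port_filter': True},
}

# A PCI-passthrough style binding could reuse the same attribute for
# different, vif_type-specific keys:
passthru_port = {
    'binding:vif_type': 'hw_veb',            # example vif_type only
    'binding:vif_details': {'vlan': '100'},
}
```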
16:18:05 <matrohon> irenab : https://docs.google.com/document/d/1ZHb2zzPmkSOpM6PR8M9sx2SJOJPHblaP5eVXHr5zOFg/edit#
16:18:10 <rkukura> Any more quick questions/comments on that proposal now, or we can take it to the list
16:18:18 <amotoki> i am not sure now.. it seems we can split binding attrs into subcategories: MD->VIF, VIF->MD, VIF<->MD.
16:18:20 <irenab> matrohon: thanks
16:18:39 <rcurran> rkukura: does this review cover the issue w/ the vlan # being accessible for delete_port_postcommit()?
16:18:56 <rkukura> rcurran: Trying to get to that
16:19:07 <rcurran> on this commit?
16:19:49 <rkukura> amotoki: This proposal covers MD->VIF. Looking at using binding:profile for inputs to the MD for binding purposes.
16:20:11 <matrohon> rkukura : what is the difference between binding:vif_details and binding:profile?
16:20:55 <rkukura> Regarding my action item, I've been trying to get this port attribute stuff resolved so I can post a clear description of the proposed changes to port binding regarding transactions and access to original vs. new binding details.
16:21:17 <rkukura> binding:vif_details is output from the MD, binding:profile is input to the MD
16:21:24 <amotoki> matrohon: right now binding:profile is reserved as an attribute which a plugin (a driver in the ML2 case) can use freely
16:21:37 <amotoki> binding:profile is a bidirectional attribute.
16:21:57 <matrohon> amotoki, rkukura : thanks
16:22:31 <rkukura> amotoki: I'm not aware of current cases where binding:profile is used for output data, and was trying to avoid the complication of merging input data with output data during updates.
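For reference, a hypothetical sketch of the direction of the two attributes as described above: binding:profile is input supplied to the plugin and bound MechanismDriver, while binding:vif_details is output reported back from it. The SR-IOV-style keys are illustrative only.

```python
# Input: data a caller such as nova might place in binding:profile when
# requesting a binding (illustrative keys, not a settled schema).
port_update_body = {
    'port': {
        'binding:profile': {
            'pci_slot': '0000:0a:00.1',
            'pci_vendor_info': '8086:10ed',
        },
    },
}

# Output: what the bound MechanismDriver reports back once the port is bound.
bound_port_view = {
    'binding:vif_type': 'hw_veb',            # example vif_type only
    'binding:vif_details': {'vlan': '100'},
}
```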
16:23:19 <rkukura> Let's take the binding:profile and binding:vif_details discussion to the list, and move on with the agenda
16:23:37 <amotoki> ok.
16:24:44 <rkukura> I plan to post the proposal that rcurran was asking about to openstack-dev in the next day or two, so I'll keep this action item open
16:25:37 <rkukura> #action rkukura to post proposal for portbinding changes to call MDs outside of transactions and with all needed info
16:26:04 <rkukura> #topic bugs
16:26:17 <rcurran> fyi rkukura - i plan on pushing up a review w/ the workaround solution of saving off the vlan in delete_port_precommit() (for use in postcommit). i'll change this once your code gets in
16:27:00 <rkukura> rcurran: Great! Sorry I've taken so long to write that up, but I think you have the general idea from these meetings
16:27:21 <rkukura> #link https://bugs.launchpad.net/neutron/+bugs?field.tag=ml2
16:27:58 <matrohon> I just reported a new potential bug and tagged it with ml2
16:28:02 <matrohon> https://bugs.launchpad.net/neutron/+bug/1274160
16:28:26 <rkukura> 3 high priority bugs, 2 in progress
16:28:45 * mestery walks in very late.
16:29:12 <rkukura> and it looks like safchain is taking the 3rd: https://bugs.launchpad.net/neutron/+bug/1237807
16:29:29 <rkukura> hi mestery - we just moved from AIs to bugs
16:29:33 <amotoki> regarding the db migration issue, there are two opinions and it seems there is no consensus.
16:29:35 <mestery> rkukura: Thanks!
16:30:24 <rkukura> amotoki: What's the disagreement?
16:31:24 <amotoki> woops... slow connection. the question is how to handle the Havana migration.
16:33:54 <rkukura> amotoki: Were you going to describe the issue?
16:34:33 <rkukura> I see there are links to discussions - how can we bring this to a conclusion?
16:36:02 <rkukura> on other bugs, let's fix what we can, review fixes, and hopefully work through these soon
16:36:43 <rkukura> Please speak up if any seem to have the wrong priority - we'll look at the high and maybe medium ones in these meetings to make sure we are progressing
16:37:19 <trinaths> sure
16:37:26 <amotoki> I will check the situation of db migration again. several fixes are related.
16:37:27 <rkukura> #topic ovs-firewall-driver
16:37:33 <rkukura> asadoughi: any update on this?
16:38:13 <asadoughi> hi. no news. no new reviews were made because of the gating issues and no new code pushed for the same reason. moving forward with it now. are our cores allowed to review again?
16:38:57 <rkukura> asadoughi: I think we've been allowed (expected) to keep reviewing, just not approve
16:39:08 <rkukura> I'll admit I'm behind on reviews
16:39:33 <asadoughi> ah, well, again, i'd like to get reviews on the code that's already out there if possible.
16:39:46 <mestery> asadoughi: Can you paste the review here please?
16:39:47 <asadoughi> https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/ovs-firewall-driver,n,z
16:39:56 <mestery> asadoughi: You are a psychic. :)
16:40:05 <rkukura> We should prioritize reviewing fixes for gate issues, but need to keep making progress
16:40:17 <asadoughi> rkukura: i agree with that sentiment
16:40:37 <mestery> rkukura: +1
16:40:54 <Sukhdev> rkukura: any ETA on the gate issues fixes?
16:41:34 <rkukura> I don't have any current info on the gate issues - does anyone else?
16:42:36 <asadoughi> i don't have a clear picture, but anyone interested can look for the "state of the gate" e-mails.
16:42:44 <amotoki> I have nothing either. the neutron channel is a good place to check the status.
16:42:56 <rkukura> right
16:42:59 <asadoughi> yes, #openstack-neutron too
16:43:13 <rkukura> Let's make sure to fix/review any ML2 issues affecting the gate ASAP
16:43:28 <amotoki> asadoughi: sorry for the delay. my understanding of source-port was completely wrong. i will resume the review.
16:43:57 <asadoughi> amotoki: ok. thanks.
16:44:06 <rkukura> asadoughi: I'll try to look these over this week as well
16:44:22 <rkukura> #topic new MechanismDrivers
16:45:16 <rkukura> mestery: What do you have in mind for covering these in this meeting? Should we go through status of each, or just see if there are any general issues/questions?
16:45:30 <trinaths> can we discuss my FSL mechanism driver?
16:45:33 <mestery> rkukura: Maybe just general issues now.
16:45:41 <mestery> Such as what trinaths wants to discuss :)
16:45:47 <mestery> trinaths: Please go ahead.
16:45:51 <rkukura> trinaths: Sure
16:46:05 <trinaths> thank you mestery... :)
16:46:33 * mestery has to step out now.
16:46:54 <trinaths> We have developed an ML2 mechanism driver to post the network/subnet/port related data to our Cloud Resource Discovery (CRD) Service..
16:46:56 <dkehn_> re the gate issue - eventually infra is doing a tox upgrade
16:47:03 <trinaths> I have submitted the code base for review..
16:47:14 <trinaths> got a few comments
16:47:23 <trinaths> I was not clear on one comment
16:47:35 <matrohon> trinaths : sounds great
16:47:59 <trinaths> on the unit test cases for the driver
16:48:31 <trinaths> the driver needs the CRD client to send data to the CRD server.
16:48:39 <trinaths> but in unit testing that's not possible..
16:49:06 <irenab> rkukura: do you want to discuss the vnic_type port attribute we talked about during the PCI passthru meeting?
16:49:19 <rcurran> i think trinaths is referring to the now required 3rd party testing for all ML2 mech drivers
16:49:27 <banix> trinaths: Need to fake it
16:49:34 <Sukhdev> trinaths: you can Mock the operation - look into the Arista ML2 driver
16:49:40 <trinaths> yes, rightly said.. I need to fake it..
16:50:10 <rkukura> trinaths: Is this a new dependency on a client python library?
16:50:28 <trinaths> can anyone check the code in the link here and guide me on how to fake it.. am I on the right path?
16:50:48 <trinaths> #link https://review.openstack.org/#/c/69838/1/neutron/tests/unit/ml2/drivers/test_fslsdn_mech.py
16:50:57 <trinaths> no rkukura.. !
16:51:16 <rkukura> Where does the crdclient module come from?
16:51:20 <trinaths> we have a CRD client.. something like the neutron client
16:51:39 <trinaths> the CRD client is something we developed ourselves
16:52:10 <Sukhdev> trinaths: please look into the unit test example in the Arista driver - we Mock the sync operation. You can do something similar
16:52:18 <trinaths> #link  https://review.openstack.org/#/c/69838/1
16:52:34 <trinaths> okay.. let me check the same.. sukhdev
16:53:05 <rkukura> Don't the other drivers all include the needed client, and just mock it in the unit tests because there is no server to talk to?
16:53:31 <Sukhdev> rkukura: yes
16:53:41 <rcurran> cisco_nexus does mock our server api
16:54:08 <HenryG> trinaths: Have you looked at the Tail-F NCS driver? I believe they are doing a similar thing to what you want to do.
16:54:13 <Sukhdev> So, there are plenty of examples
16:54:20 <rkukura> So it seems trinaths's patch needs to include the crdclient or set it up to be pip-installed or something
16:54:56 <ktbenton1> #link http://stackoverflow.com/questions/8658043/how-to-mock-an-import
16:55:24 <trinaths> yes rkukura.. crd client needs to be installed like neutron-client
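A minimal sketch of the mocking approach suggested above: mock the import so the out-of-tree client package isn't required, then mock the client itself, as the Arista driver does for its sync operation. The module path, class name, and _get_crd_client helper below are hypothetical stand-ins, not the actual API of the patch under review.

```python
import sys

import mock

from neutron.tests import base

# Mock the import itself (per the link above) so the driver module loads
# even when the CRD client package is not installed in the test environment.
sys.modules.setdefault('crdclient', mock.MagicMock())

# Hypothetical import path and class name for the FSL SDN mechanism driver.
from neutron.plugins.ml2.drivers import mechanism_fslsdn


class TestFslSdnMechanismDriver(base.BaseTestCase):

    def setUp(self):
        super(TestFslSdnMechanismDriver, self).setUp()
        # Replace the (hypothetical) CRD client factory with a mock so no
        # CRD server is needed during unit tests.
        patcher = mock.patch.object(
            mechanism_fslsdn.FslsdnMechanismDriver, '_get_crd_client')
        self.crd_client = patcher.start().return_value
        self.addCleanup(patcher.stop)
        self.driver = mechanism_fslsdn.FslsdnMechanismDriver()
        self.driver.initialize()

    def test_create_network_postcommit(self):
        context = mock.Mock()  # stand-in for an ML2 NetworkContext
        self.driver.create_network_postcommit(context)
        # Assert only that the driver forwarded the call to its client; the
        # CRD server behaviour is out of scope for unit tests.
        self.assertTrue(self.crd_client.create_network.called)
```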
16:55:33 <rkukura> only 5 minutes left - let's move this discussion to the review, or to IRC or email
16:55:54 <rkukura> Any other issues regarding new drivers?
16:56:20 <banix> Before we run out of time: Please review this and see if the approach is right: https://review.openstack.org/#/c/69792/
16:56:36 <irenab> the SR-IOV mech drivers depend on binding:profile, we need it at high priority
16:56:43 <rkukura> I think it's the Brocade driver that needs to disable bulk ops - would be good to discuss that, but not much time
16:56:57 <ktbenton1> https://review.openstack.org/#/c/68996/
16:57:07 <ktbenton1> the Big Switch driver also needs bulk disabled
16:57:21 <ktbenton1> So I proposed that approach
16:58:00 <rkukura> ktbenton1: Is this because bulk isn't properly implemented in ML2, or something else?
16:58:29 <ktbenton1> It's because the backend for the driver doesn't support bulk operations
16:58:44 <ktbenton1> so we need a way to change the native_bulk flag that ML2 advertises
16:59:18 <rkukura> Don't the bulk operations get implemented as non-bulk operations?
16:59:51 <ktbenton1> Not with native_bulk enabled
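As a rough illustration of "changing the native_bulk flag that ML2 advertises", here is a hypothetical sketch; the attribute and class names are made up for illustration, and the actual mechanism is what the review above proposes.

```python
# Illustrative only: advertise native bulk support from ML2 only if every
# loaded mechanism driver's backend can handle bulk operations.

class ExampleMechanismDriver(object):
    # A driver whose backend cannot handle bulk operations would set this
    # (hypothetical flag name).
    native_bulk_support = False


class Ml2PluginSketch(object):
    def __init__(self, mechanism_drivers):
        # The API layer consults this flag; when it is False, bulk create
        # requests are emulated by looping over single-resource calls.
        self.native_bulk_support = all(
            getattr(md, 'native_bulk_support', True)
            for md in mechanism_drivers)
```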
17:00:13 <rkukura> Let's work through this in the review, we are out of time here
17:00:17 <ktbenton1> ok
17:00:17 <banix> This is what I mentioned last week; it deals with mechanism drivers raising exceptions in postcommit ops. Need to add more unit tests. https://review.openstack.org/#/c/69792/
17:01:08 <rkukura> banix: Thanks. Looks like it's got some good review input. I'll take a look too
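For context on the patch banix mentions, a simplified sketch of the pattern at issue: the ML2 mechanism manager invokes each driver's postcommit hook, and a failure in any driver has to be logged and surfaced so the plugin can react (for create operations, possibly by cleaning up the just-created resource). This is an illustration of the general pattern, not the code under review.

```python
import logging

LOG = logging.getLogger(__name__)


class MechanismDriverError(Exception):
    """Raised when at least one mechanism driver postcommit call fails."""


def call_on_drivers(method_name, drivers, context):
    # Invoke the named hook (e.g. 'create_port_postcommit') on every driver,
    # remember failures, and raise once at the end so the plugin can decide
    # how to recover.
    failed = False
    for driver in drivers:
        try:
            getattr(driver, method_name)(context)
        except Exception:
            LOG.exception("Mechanism driver %s failed in %s",
                          driver, method_name)
            failed = True
    if failed:
        raise MechanismDriverError("%s failed" % method_name)
```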
17:01:15 <rkukura> We are out of time
17:01:25 <rkukura> #endmeeting