13:00:22 #startmeeting PCI Passthrough
13:00:23 Meeting started Thu Feb 6 13:00:22 2014 UTC and is due to finish in 60 minutes. The chair is baoli. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:26 The meeting name has been set to 'pci_passthrough'
13:00:40 hi
13:03:37 hi
13:03:55 sorry for being late
13:06:40 hello
13:07:15 hi, I got notice from rkukura that he'll be about 10 mins late
13:08:58 ok...until he joins...can we talk about the SriovMechanismDriverBase/Mixin
13:09:06 I have some delay with vnic_type due to another urgent task; I think I will resume work on it on Sunday.
13:09:10 what do you have in mind?
13:10:11 sadasu: We have to support nova passing in the port profile with all the SRIOV-related details
13:11:41 yes, how about the validate_port_binding() in the AgentMechanismDriverBase
13:11:45 sadasu: sorry, we have to support nova calling into port create/update with SRIOV details; it seems at least this should be parsed generically
13:12:47 agreed...but didn't think that was going to be a lot..
13:13:46 nova calls into validate_port_binding(). It is currently implemented only by the ovs and linuxbridge agents
13:14:15 in our SR-IOV context, does each mechanism driver take care of it?
13:14:59 hi - sorry I'm late (as usual)
13:14:59 sadasu: agree. Seems that it will be code duplication to parse, store and retrieve this information in each MD.
13:15:33 rkukura: hi
13:15:36 I'm proposing to eliminate validate_port_binding(). Do you see a need for it?
13:15:45 rkukura: hello!
13:15:50 I can only attend for 15 more mins, sorry
13:16:03 need some guidance on the validate_port_binding() method
13:16:06 I was wondering what rkukura's binding:profile and binding:vif_details support would look like. Then we can decide based on that
13:16:27 sadasu: I think that validate_port_binding can be generic for SRIOV and, if needed, call into the derived mech driver, but we actually need to write reference code to see how it goes
13:16:50 I think I'm on track to push WIP patches for both those BPs tomorrow.
13:16:54 irenab: as I said earlier, we can add the mixin class if there is enough common functionality...we can discuss on the list
13:17:41 sadasu: fine, I think we can try to figure it out in an internal mail exchange and then publish on the list, if that's OK with you
13:17:42 See http://lists.openstack.org/pipermail/openstack-dev/2014-February/026344.html for the proposed changes regarding ML2's port binding and mechanism drivers.
13:17:45 rkukura: great!
13:18:26 rkukura: thanks! And also, nice detailed write-up on what you are proposing with the original and current PortContext
13:19:06 I have some delay with vnic_type, hope to be back on this on Sunday
13:19:27 Cool, seems that we are on track on the neutron side
13:19:31 irenab: thanks
13:19:57 Can we continue from where we left off yesterday?
13:20:02 irenab: I'm guessing your policy rule investigation has already been discussed - I'll read the log after the meeting and follow up on the email thread if needed
13:20:27 rkukura: not discussed yet
13:20:41 rkukura: not discussed in the meeting yet, only on the list
13:21:15 baoli: do you want to discuss the question you sent?
13:21:23 anything other than ^^ left over from yesterday?
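
(Aside: a minimal sketch of the SriovMechanismDriverBase/Mixin idea discussed above. The class name comes from the discussion itself; the binding:profile keys, method names, and error handling below are illustrative assumptions for discussion, not a merged ML2 interface.)

    # Illustrative sketch only -- the profile keys and method names are
    # assumptions, not the eventual ML2 SR-IOV interface.
    class SriovMechanismDriverMixin(object):
        """Factors out generic parsing of SR-IOV details from
        binding:profile so each vendor MD need not duplicate it."""

        REQUIRED_PROFILE_KEYS = ('pci_vendor_info', 'pci_slot')  # assumed keys

        def _get_sriov_profile(self, port):
            profile = port.get('binding:profile') or {}
            missing = [k for k in self.REQUIRED_PROFILE_KEYS
                       if k not in profile]
            if missing:
                raise ValueError("SR-IOV port %s is missing profile keys: %s"
                                 % (port.get('id'), missing))
            return profile

        def bind_port(self, context):
            # Generic part: parse and validate, then hand off to the
            # vendor-specific driver for the actual binding decision.
            profile = self._get_sriov_profile(context.current)
            self.try_to_bind_sriov(context, profile)

        def try_to_bind_sriov(self, context, profile):
            raise NotImplementedError  # implemented by each derived MD
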
13:21:29 #topic policy rule
13:21:37 irenab, sure go ahead
13:23:08 I did net-list on behalf of a tenant and saw the shared network there
13:23:47 but net-show shows admin as the owner of the shared network
13:24:08 "admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
13:24:08 "admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s",
13:24:33 I can see what admin_or_network_owner is about
13:25:11 the user should either have an admin role, or its tenant id matches the network's tenant id
13:25:12 so I think that for vnic_type we need "admin_or_owner"
13:25:27 to cover the shared network case
13:26:41 but actually not sure why it is forbidden for a tenant user to set a mac address and IP on a shared network
13:27:08 So what exactly does tenant_id:%(tenant_id)s mean in the admin_or_owner definition?
13:27:10 irenab: "admin_or_owner" allows access for the owner of the port itself. That sounds like what we want. If the network is shared and someone else owns it, the user's own port is still owned by that user, not the owner of the network.
13:27:17 perhaps to disallow duplicate mac address or IP address configs?
13:27:47 baoli: like rkukura said, the owner of the port
13:28:39 irenab, got it, thanks.
13:28:47 rkukura: thanks for clarifying.
13:29:04 so do we agree on admin_or_owner?
13:29:05 irenab: so, admin_or_owner should be fine for us
13:29:15 sadasu: great
13:30:04 baoli: can you please give a brief update if there is anything new on the nova side?
13:30:21 any progress with your patch?
13:30:34 irenab, I haven't heard anything yet
13:30:55 then can I quickly get my question answered by rkukura?
13:31:22 sadasu: may I have a few more mins for the next step?
13:31:27 I sent a request to the mailing list and John
13:31:35 I just need to go in a few mins
13:31:54 irenab: go ahead
13:33:06 I wanted to suggest that although the chances of getting the nova part in on time are very low, we try to progress with all relevant parts and at least have reference end-to-end code working
13:33:44 irenab, I agree with you, that's the plan.
13:33:48 irenab: I have a call with markmcclain today and will ask about the blocked status on your BP, and see if I can get that cleared up
13:33:49 then we will be able to come to the Juno release with a POC in all components and make every discussion or bp very concrete
13:33:53 can we post draft code changes even without the BP being approved?
13:34:09 this is for baoli's nova code
13:34:14 I guess so, it will just be delayed in reviews
13:34:41 I believe sgordon is working to recruit nova cores to review patches
13:34:51 rkukura, correct
13:34:53 Only on the nova part, I'm not sure if we can get what we want in time
13:35:31 baoli: I think at least we will have some code to apply and run, and can make it fully functional in Juno
13:35:36 sadasu, I think you can post it as a draft
13:35:46 I need to work with Yunhong/yongli for that. Or I can work out something for the time being myself
13:35:49 sadasu, it makes it easier for me to recruit 2 cores for you if I can say "look, patches!"
13:35:56 sgordon: ok
13:36:17 sgordon, that's great
13:36:22 russellb, does that seem like the right approach to you ^
13:37:15 I think we can assume that for now and move on
13:37:33 rkukura, irenab, going back to the multiprovidernet extension for a minute
13:37:49 baoli: ok
13:37:55 I think that it shouldn't affect sriov for the time being.
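
(Aside on the policy discussion at 13:24-13:29: if admin_or_owner is adopted for vnic_type, the corresponding entries in neutron's policy.json might look like the lines below, alongside the two rules already quoted; the exact attribute key "binding:vnic_type" is an assumption pending irenab's BP.)

    "create_port:binding:vnic_type": "rule:admin_or_owner",
    "update_port:binding:vnic_type": "rule:admin_or_owner",
    "get_port:binding:vnic_type": "rule:admin_or_owner",
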
13:38:01 I asked Kyle about it
13:38:13 And he said the use case for now is the vxlan support
13:38:35 And I don't think that we will have vxlan support with sriov in the near future
13:38:36 sorry, I have to go, will look into the logs. sadasu, let's exchange emails regarding the generic neutron SRIOV code if needed
13:39:00 irenab: thanks. will do.
13:39:26 baoli: Is the plan for the nova scheduling filter to use the neutron API to get the provider details to find the physical_network for which SR-IOV connectivity is needed?
13:40:51 rkukura, no. The plan for the time being is for the MD to fill in the field in binding:profile. The field was named pci-flavor, which is not appropriate.
13:41:15 But let's say we have a field called net-group
13:41:40 and the MD can put the physical net name in it for now.
13:42:44 nova api gets this info after performing a query to neutron, and constructs a pci request in the form "net-group: physical net name"
13:43:09 baoli: We don't have a bound MD until after the nova scheduler has made its decision and binding:host_id has been set on the port by nova.
13:43:12 The pci request is later used by the scheduler to schedule the instance
13:43:59 Once we have a bound MD, it can put the physical_network name and/or net-group, or whatever, into binding:profile for the nova VIF driver to use
13:44:22 rkukura, how is the decision made on which MD should be bound?
13:44:40 but it can't do that until port binding has occurred, which cannot occur until binding:host_id has been set by nova, which cannot occur until nova has scheduled the VM
13:45:20 "port binding" refers to selecting the bound MD
13:45:46 port binding is triggered by the setting of the binding:host_id attribute by nova
13:46:24 rkukura, are you saying that a mechanism driver won't be invoked until binding:host_id is set?
13:47:00 So don't we need some way to ensure that nova schedules the VM on a compute node with an available VF for an SR-IOV device connected to the proper physical_network?
13:47:07 baoli: yes
13:47:56 baoli: sort of - there won't be a bound MD until binding:host_id is set, and it's the bound MD that supplies binding:vif_type and soon binding:vif_details
13:48:26 several lines above I said binding:profile when I meant binding:vif_details
13:49:36 rkukura, all we need is the physical net name. Let's say that by the time nova api queries neutron with a port-id, neutron should be able to return the network details, which include the physical net name, right?
13:50:13 which means, even if we can't get it from vif_details, we can get it from the neutron network details.
13:50:30 the physical net name should be part of binding:vif_details
13:50:58 baoli: nova could use the providernet and/or multiprovidernet extension to find out what physical_network(s) can be used for the port
13:51:25 rkukura, yes, if we made that assumption.
13:52:31 rkukura, another possibility is that we can use an extra argument, something like --pci-flavor (--net-group). But I'm hesitant to go there because it will provoke another set of debates on pci-flavor, naming, etc.
13:53:15 baoli: This extra argument idea is a --nic option with nova, right?
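
(Aside: a sketch of the sequence rkukura describes above — the MD is bound only after nova sets binding:host_id, and the bound MD then supplies binding:vif_type and binding:vif_details, including the physical net name. PortContext.set_binding() is from rkukura's WIP patches; the vif_type string and vif_details keys are illustrative assumptions.)

    class SriovVnicMechanismDriver(SriovMechanismDriverMixin):
        """Hypothetical derived MD showing the post-binding step."""

        def try_to_bind_sriov(self, context, profile):
            # This runs only once ML2 starts port binding, i.e. after nova
            # has scheduled the VM and set binding:host_id on the port.
            for segment in context.network.network_segments:
                if segment.get('network_type') in ('flat', 'vlan'):
                    context.set_binding(
                        segment['id'],
                        'hw_veb',  # hypothetical vif_type for an SR-IOV VF
                        {'physical_network': segment['physical_network'],
                         'vlan_id': segment.get('segmentation_id')})
                    return True
            return False
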
13:54:28 rkukura, neutron port-create --binding:net-group <> --binding:vnic_type <> and/or nova boot --nic vnic_type=<>,net-group=<>
13:56:01 I think I've been hearing two other ways to solve the scheduling issue other than an extra nova boot argument: 1) have the user specify a VM flavor or host aggregate that is known to have the needed SR-IOV connectivity, or 2) implement a nova scheduler filter that uses the providernet and/or multiprovidernet extension to see what SR-IOV connectivity is needed and filter based on that.
13:57:21 rkukura, we had a lengthy discussion on 1)
13:57:48 for 2), is that what we are trying to do for the time being?
13:57:48 rkukura: exactly...looked at 1 for a while...did not look too much into 2
13:59:51 Let's call baoli's 3): implement a nova scheduler filter that uses the net-group value passed with --nic
14:00:29 a simplified version of 2
14:00:44 I apologize that I've never managed to systematically sort through the history of these discussions and figure out which options are still on the table.
14:01:19 need the channel for the next meeting :)
14:01:25 rkukura, for the time being, the assumption is that the pci whitelist will be tagged with the correct physical net name.
14:01:26 Are these three options the ones that have been discussed, and are any off the table?
14:01:27 ok
14:02:02 rkukura, let's continue in a different channel since people are waiting for this one
14:02:06 #endmeeting
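
(Post-meeting aside: to make the proposal at 13:54:28 and the closing assumption at 14:01:25 concrete, the user-facing calls and a whitelist entry tagged with the physical net name might look like the following; every flag and key name here comes from the in-meeting proposals and is not a settled interface.)

    # proposed user-facing calls (flag names still under debate)
    neutron port-create <net> --binding:vnic_type direct --binding:net-group physnet1
    nova boot --flavor m1.small --image <image> --nic port-id=<port-uuid> vm1

    # nova.conf on the compute node: a PCI whitelist entry tagged with
    # the physical net name (key names illustrative)
    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed", "physical_network": "physnet1"}
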