13:02:28 #startmeeting PCI passthrough
13:02:29 Meeting started Thu Jan 23 13:02:28 2014 UTC and is due to finish in 60 minutes. The chair is baoli. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02:33 The meeting name has been set to 'pci_passthrough'
13:03:34 Hi
13:03:39 hi
13:04:50 Hello! Should we set the topic to SR-IOV?
13:04:56 irenab, is rkukura going to join?
13:05:10 #topic SRIOV
13:05:14 baoli: I hope so. Will ping to check
13:05:30 hi,
13:05:31 he mentioned in the ML2 meeting yesterday that he would
13:06:10 hi
13:06:18 Welcome!
13:06:33 Hi
13:06:43 Let's get started
13:06:59 rkukura: I have updated the vnic_type neutron blueprint according to our discussion
13:07:27 irenab: Great! I'll review it today.
13:07:59 Irenab, pointer to the bp?
13:08:02 just to share with all, the idea is to use the port's binding:profile for vnic_type and pci details
13:08:27 so binding:profile is another dictionary?
13:08:27 #link https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
13:08:58 baoli: It's an existing attribute that is a dictionary
13:08:59 baoli: this is already supported via the neutron CLI, so we just need to add persistence
13:09:23 understood.
13:09:30 It's part of the port binding extension, but needs to be implemented in ml2
13:09:45 rkukura, that sounds good
13:09:56 rkukura: I intend to add it as part of supporting this BP
13:10:15 right
13:10:36 what changes do we need in ML2 to support vnic_type "pcipassthrough"?
13:10:58 so, by adding this we add the common layer to propagate and persist attributes in neutron
13:11:26 sadasu: it will be managed by the Mech Driver, not by the plugin
13:12:03 the Mech Driver will need to look at binding:profile and check if the vnic_type is supported
13:12:08 ok... just making sure it does not need any special handling at the ML2 layer... it will just be passed along to the respective mech driver
13:12:12 the plugin will persist it and handle the CRUD operations, but it will be interpreted by MechanismDrivers as they attempt to bind the port
13:12:42 got it
13:12:56 sadasu: and here I guess we will need some general util to parse it with regard to the PCI device record details
13:13:01 are you suggesting that only ML2 supports the vnic_type and PCI stuff? I guess that's fine since regular plugins will be deprecated?
13:13:16 seems it will be needed by both our drivers
13:14:03 baoli: not sure why we'd limit it to ML2 only
13:14:30 if there is a plugin that wants to support this extension, it can add the support
13:14:46 irenab: "both" means ML2 mech driver and regular (non-ML2) plugin?
13:14:47 and then it will check the vnic_type at the plugin layer
13:14:53 irenab, ok, that sounds right
13:15:01 baoli: The openvswitch and linuxbridge plugins are deprecated. Some other plugins already implement binding:profile, so they should probably be updated to handle the vnic_type key if it makes sense for them.
13:15:09 sadasu: regular plugins do not need to handle the PCI device fields
13:15:21 rkukura, sounds good
13:16:04 #topic specify SRIOV
13:16:23 Can we talk about how to specify SRIOV requests from the API/CLI?
13:16:40 baoli: API/CLI of nova or neutron?
13:16:52 irenab, both. Let's start with nova
13:17:17 One other key point on binding:profile - it is being used purely as an input param to neutron/ml2, so a different attribute will be used for output from neutron to the VIF driver.
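
(Illustrative note, not part of the log: a minimal sketch of the binding:profile input discussed above, carrying the vnic_type plus the PCI details mentioned later in the meeting, i.e. vendor_id/product_id and the device BDF. The vnic_type values come from the discussion; the key names for the PCI details are assumed here, not agreed.)

    # Sketch only: binding:profile as an input dictionary on the port,
    # populated by nova (or via port-create/port-update) and interpreted
    # by the ML2 mechanism drivers when binding the port.
    port_body = {
        "port": {
            "binding:profile": {
                "vnic_type": "direct",           # or "vnic" / "macvtap"
                "pci_vendor_info": "15b3:1004",  # vendor_id:product_id (assumed key name)
                "pci_slot": "0000:05:00.2",      # device BDF (assumed key name)
            }
        }
    }
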
13:18:01 the nic request for pci should translate to a request before the scheduler, in the API, I think
13:18:14 and be stored in the instance metadata.
13:19:23 before the API/CLI, I also wanted to talk about passing the pci_flavor info given in the --nic option to neutron
13:19:24 heyongli: right, but then it comes to the compute node and allocates a specific device
13:20:30 irenab: sure, as long as it is converted to a request and stored in the metadata, it all works.
13:20:56 i had proposed an interface to do this in my patchset.
13:21:02 I think that the question is what should be part of the --nic parameters
13:21:59 baoli, sure.
13:22:24 and we need to define what is passed from nova to neutron and vice versa
13:22:27 #link https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z
13:23:29 heyongli: which one do you refer to?
13:24:20 vnic-type="vnic" | "direct" | "macvtap" pci-group=pci-group-name port-profile=port-profile-name. Except that we can change the pci group to pci flavor, I guess.
13:24:45 baoli, agree.
13:25:19 baoli: all these --nic options will be possible to define with binding:profile on port-create
13:25:46 irenab, yes, that's what we have been talking about in all of our docs
13:25:58 But let's start with nova boot
13:26:02 baoli: I'm missing the question :-)
13:27:11 irenab, what question? The input will be put into the binding:profile. In addition, neutron port-create can use the same parameters
13:28:00 sure, so what do you want to discuss?
13:28:52 well, just to make sure we are in agreement on that, and then we can submit the BP for approval
13:29:22 One point - if an existing port UUID is passed into nova via --nic, nova needs to merge its binding:profile keys with any that already exist when doing the port_update.
13:29:43 rkukura, agreed
13:29:52 agree
13:30:26 So neutron port-create will have similar parameters. And --nic port-id=sriov-port will populate the binding:profile after getting the info from neutron
13:30:28 I do not think the pci_flavor should be pushed to the port profile; it should be the pci slot details, right?
13:31:46 irenab, if a pci-flavor name can correspond to a provider net or physical net, it's certainly useful information in neutron
13:32:41 baoli: I mean it can be passed, but it's not enough; we need the pci device BDF too
13:32:52 irenab, agreed.
13:33:15 baoli: What do you mean above by "after getting the info from neutron"?
13:34:05 rkukura, if --nic port-id=sriov-port-uuid is specified, nova will query neutron for the port info. So the binding:profile will be returned as part of the query
13:35:09 irenab, I also think that vendor_id and product_id are useful information for the mech driver.
13:35:35 baoli: OK, you just mean getting the existing binding:profile, adding stuff, and then doing a port_update. You said "port-create" above, in which case the port doesn't already exist.
13:35:54 baoli: agree, this can help to filter between different vendors' Mech Drivers for SR-IOV
13:36:18 rkukura, with --nic port-id=sriov-port-uuid, the port has to be created beforehand
13:37:40 baoli: agreed - I was just confusing this with the case where nova creates the port.
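
(Illustrative note, not part of the log: a rough sketch of the merge rkukura described at 13:29:22, where nova, given an existing port UUID, fetches the port's current binding:profile, merges in its own keys, and then does the port_update. The function and its arguments are hypothetical; the client method names follow python-neutronclient, but this is only a sketch of the idea.)

    # Sketch only: merge nova's binding:profile keys into an existing
    # port's profile before updating it, so keys set earlier (e.g. via
    # neutron port-create) are preserved.
    def merge_binding_profile(neutron_client, port_id, new_keys):
        port = neutron_client.show_port(port_id)["port"]
        profile = dict(port.get("binding:profile") or {})
        profile.update(new_keys)  # nova's keys go on top of any existing ones
        neutron_client.update_port(port_id,
                                   {"port": {"binding:profile": profile}})
        return profile
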
13:37:42 so it would be like, neutron port-create --vnic-type=direct --pci-flavor=f1 net, and then nova boot --nic port-id=port-uuid. It's not the exact CLI syntax, but that's the idea
13:38:16 baoli: agree
13:38:21 cool
13:38:34 Now let's talk about the neutron net-create command
13:38:42 --pci-flavor=f1:number
13:39:06 heyongli, we don't need :number, since it's only one nic at a time
13:39:28 baoli: to keep it all the same, i suggest it should have one
13:40:06 heyongli: can it be assumed to be 1 if not specified?
13:40:11 heyongli, I suggest that if it's one, it can be omitted in the extra spec as well
13:40:26 in our case, it's always one
13:40:40 baoli: let's accept this, maybe it's more convenient for you guys.
13:41:18 disagree
13:41:35 what?
13:41:55 i mean omitting is ok for me, disagree?
13:42:24 heyongli, sorry, I misunderstood because of the sequence of messages. So that's cool
13:42:45 So we agree that if it's one, it can be omitted
13:43:01 baoli: can you present the net-create?
13:43:43 irenab, I was thinking of adding --pci-flavor and --port-profile to the command.
13:44:29 baoli: so you need an extension for this?
13:45:01 The idea behind adding pci-flavor is that a pci flavor can be associated with a physical net by the admin, and a neutron network is associated with a physical net.
13:45:19 irenab, possibly, if we agree to add them
13:45:36 baoli: another question, how will the pci flavor be taken into account by the scheduler?
13:46:16 irenab, when you specify --nic net-id=net-uuid, nova queries neutron for the network information, so that's when that information will be passed back to nova
13:46:38 the rest will be the same
13:46:48 baoli: not sure, but is it before the scheduler decision?
13:46:58 irenab, yes.
13:47:57 so from the point it gets it from the network details, it will be the flow heyongli already supports/plans to support?
13:48:12 So the sequence of API requests would be: neutron net-create --pci-flavor f1 net1, nova boot --nic net-id=net1-uuid.
13:48:23 irenab, it will be the same flow as the patch I have posted
13:48:27 baoli: So you just do not need to specify it explicitly on nova boot?
13:49:04 irenab, yes.
13:49:49 I think the correct construct to associate the pci_flavor with is the provider_network and not the virtual network, so it probably can come from a config file
13:49:50 This gives the admin a simplified view/tool of the sriov network
13:50:28 irenab, that's a good idea too. We can think about it more
13:50:43 and if needed it can be overridden at the virtual network/port level
13:50:46 Assuming that we use sriov for provider nets only
13:51:30 baoli: I like your idea to simplify the management view
13:51:39 rkukura, any thoughts on this?
13:51:43 irenab: Do you mean physical_network rather than "provider network"?
13:52:09 rkukura: same
13:52:15 Once they are created, provider networks are no different from tenant networks. In fact, there is no way to know which way a network was created.
13:52:47 I mean the physical network you specify via the provider extension
13:52:55 ok, that's what I thought
13:53:19 at least for flat and vlan networks, this makes sense
13:53:59 Are any other network_types, which might not involve a physical_network, in scope for this discussion?
13:54:24 rkukura: at least for now, not for the Mellanox case
13:55:37 So where in the process does the physical_network come into play? Does the user need to see this?
13:55:58 rkukura, by user, you mean?
13:56:08 Only admin users would ever know anything about any physical_network
13:57:00 rkukura, when we create a neutron network, the physical net needs to be specified via the provider extension, right?
13:57:02 the matching of pci_flavor to physical_net should be done by the admin only
13:57:19 agreed
13:57:27 baoli: there can be a default, then no need to specify
13:57:44 ok, so a non-admin user needs to pick the right flavor for the network he is using?
13:58:02 rkukura: for now, it is.
13:58:03 That's fine for now.
13:58:10 rkukura: not if it was associated previously by the admin
13:58:10 irenab, so you mean to say that it should be configured
13:58:31 baoli: what should be configured?
13:58:48 rkukura, we had a plan B discussed in the meeting, i recall.
13:58:59 irenab, I mean to say that the pci flavor and physical net association can be configured
13:59:06 So when it comes to the ml2 port binding by the PCI-passthru MechanismDriver, it will make sure the network has a segment whose physical_network matches that of the PCI device?
13:59:31 it's an option, like you have vlan_ranges configured via config
13:59:54 rkukura: it should be so
14:00:05 Ok, time is running out. Let's continue on Monday, and at that time let's also talk about work division.
14:00:09 #endmeeting
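
(Illustrative note, not part of the log: a conceptual sketch of the binding check rkukura asked about at 13:59:06, where an SR-IOV mechanism driver binds a port only if the network has a segment whose physical_network matches the one the PCI device/flavor is attached to. The context accessors loosely follow ML2's PortContext, and the device_physnet lookup is assumed; this is not actual driver code.)

    # Sketch only: bind the port only when a network segment's
    # physical_network matches the physical network of the PCI device.
    def try_bind_sriov_port(context, device_physnet,
                            supported_vnic_types=("direct", "macvtap")):
        profile = context.current.get("binding:profile") or {}
        if profile.get("vnic_type") not in supported_vnic_types:
            return False  # this driver does not handle the requested vnic_type
        for segment in context.network.network_segments:
            if segment.get("physical_network") == device_physnet:
                # a real driver would call context.set_binding(...) here
                return True
        return False
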