13:02:28 <baoli> #startmeeting PCI passthrough
13:02:29 <openstack> Meeting started Thu Jan 23 13:02:28 2014 UTC and is due to finish in 60 minutes.  The chair is baoli. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02:33 <openstack> The meeting name has been set to 'pci_passthrough'
13:03:34 <baoli> Hi
13:03:39 <irenab> hi
13:04:50 <sadasu> Hello! should we set the topic to SR-IOV?
13:04:56 <baoli> irenab, rkukura is going to join?
13:05:10 <baoli> #topic SRIOV
13:05:14 <irenab> baoli: I hope so. Will ping to check
13:05:30 <heyongli> hi,
13:05:31 <sadasu> he mentioned in the ML2 meeting yesterday that he would
13:06:10 <rkukura> hi
13:06:18 <sadasu> Welcome!
13:06:33 <baoli> Hi
13:06:43 <baoli> Let's get started
13:06:59 <irenab> rkukura: I have updated the vnic_type neutron blueprint according to our discussion
13:07:27 <rkukura> irenab: Great! I'll review it today.
13:07:59 <baoli> Irenab, pointer to the bp?
13:08:02 <irenab> just to share with all, the idea is to use port binding:profile for vnic_type and pci details
13:08:27 <baoli> so binding:profile is another dictionary?
13:08:27 <irenab> #link https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
13:08:58 <rkukura> baoli: Its an existing attribute that is a dictionary
13:08:59 <irenab> baoli: this is already supported via neutron CLI, so just to add persistency
13:09:23 <baoli> understood.
13:09:30 <rkukura> Its part of the portbinding extension, but needs to be implemented in ml2
13:09:45 <baoli> rkukura, that sounds good
13:09:56 <irenab> rkukura: I think to add it as part of supporting this BP
13:10:15 <rkukura> right
13:10:36 <sadasu> what changes do we need in ML2 to support vnic_type "pcipassthrough" ?
13:10:58 <irenab> so, by adding this we add the common layer to propagate and persist attributes to neutron
13:11:26 <irenab> sadasu: it will be managed by Mech Driver not by plugin
13:12:03 <irenab> Mech Driver will need to look at binding:profile, check if vnic_type is supported
13:12:08 <sadasu> ok...just making sure it does not need any special handling at ML2 layer...it will just be passed along to the respective mech driver
13:12:12 <rkukura> the plugin will persist it and handle the CRUD operations, but it will be interpreted by MechanismDrivers as they attempt to bind the port
13:12:42 <sadasu> got it
13:12:56 <irenab> sadasu: and here I guess we will need some general util to parse it with regards to PCI device record details
13:13:01 <baoli> are you suggesting that only ML2 supports the vnic_type and PCI stuff? I guess that's fine since regular plugins will be deprecated?
13:13:16 <irenab> seems it will be needed by both our drivers
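To make the shape of the data concrete, here is a rough sketch of the "general util" irenab mentions for parsing PCI device details out of a port's binding:profile. The key names (pci_slot, pci_vendor_id, etc.) are illustrative assumptions, not the final Neutron schema.

```python
# Illustrative sketch only: key names are assumptions, not the agreed schema.

def parse_pci_profile(profile):
    """Extract vnic_type and PCI device details from a binding:profile dict.

    Raises ValueError if required keys are missing, so each Mech Driver
    gets a consistent, validated view of the profile.
    """
    required = ("vnic_type", "pci_slot")
    missing = [k for k in required if k not in profile]
    if missing:
        raise ValueError("binding:profile missing keys: %s" % ", ".join(missing))
    return {
        "vnic_type": profile["vnic_type"],          # e.g. "direct" or "macvtap"
        "pci_slot": profile["pci_slot"],            # PCI BDF, e.g. "0000:03:10.1"
        "vendor_id": profile.get("pci_vendor_id"),  # optional vendor/product info,
        "product_id": profile.get("pci_product_id"),  # useful for driver filtering
    }

example_profile = {
    "vnic_type": "direct",
    "pci_slot": "0000:03:10.1",
    "pci_vendor_id": "15b3",
    "pci_product_id": "1004",
}
```

A shared helper like this would let both vendors' SR-IOV mechanism drivers validate the profile the same way before attempting to bind.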
13:14:03 <irenab> baoli: not sure why to limit it to ML2 only
13:14:30 <irenab> if there is plugin that wants to support this extension, can add the support
13:14:46 <sadasu> irenab: both means ML2 mech driver and regular (non-ML2) plugin?
13:14:47 <irenab> and then will check the vnic_type on plugin layer
13:14:53 <baoli> irenab, ok, that sounds right
13:15:01 <rkukura> baoli: The openvswitch and linuxbridge plugins are deprecated. Some other plugins already implement binding:profile, so they should probably be updated to handle the vnic_type key if it makes sense for them.
13:15:09 <irenab> sadasu: regular plugins do not need to handle PCI device fields
13:15:21 <baoli> rkukura, sounds good
13:16:04 <baoli> #topic specify SRIOV
13:16:23 <baoli> Can we talk about how to specify SRIOV requests from API/CLI?
13:16:40 <irenab> baoli: API/CLI of nova or neutron?
13:16:52 <baoli> irenab, both. Let's start from nova
13:17:17 <rkukura> One other key point on binding:profile - this is being used purely as an input param to neutron/ml2, so a different attribute will be used for output from neutron to the VIF driver.
13:18:01 <heyongli> the NIC request for PCI should be translated to a PCI request before the scheduler, in the API, I think
13:18:14 <heyongli> and store it in to instance meta data.
13:19:23 <sadasu> before the API/CLI, I also wanted to talk about passing the pci_flavor info given in the --nic option to neutron
13:19:24 <irenab> heyongli: right, but then it comes to compute node and allocates specific device
13:20:30 <heyongli> irenab: sure, only once we convert it to a request and store it in the instance metadata, it all works.
13:20:56 <heyongli> i had proposed an interface to do this in my patch set.
13:21:02 <baoli> I think that the question is what should be part of the --nic parameters
13:21:59 <heyongli> baoli, sure.
13:22:24 <irenab> and we need to define what is passed  from nova to neutron and vise versa
13:22:27 <heyongli> #link https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z
13:23:29 <irenab> heyongli: which one do you refer to?
13:24:20 <baoli> vnic-type="vnic" | "direct" | "macvtap" pci-group=pci-group-name port-profile=port-profile-name. Except that we can change the pci group to pci flavor, I guess.
13:24:45 <heyongli> baoli, agree.
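The --nic option format baoli proposes could be parsed along these lines. This is a hypothetical sketch assuming comma-separated key=value sub-options; the real nova CLI parsing differs.

```python
# Hypothetical parser for the proposed --nic sub-option format
# (vnic-type=... pci-group=... port-profile=...). Assumes comma
# separation; shown only to illustrate the key/value shape.

def parse_nic_opt(opt):
    """Split 'k1=v1,k2=v2,...' into a dict of --nic sub-options."""
    return dict(kv.split("=", 1) for kv in opt.split(","))
```

For example, `parse_nic_opt("vnic-type=direct,pci-group=g1")` would yield a dict nova could carry forward into the port's binding:profile.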
13:25:19 <irenab> baoli: all these --nic options will be possible to defined with binding:profile on port-create
13:25:46 <baoli> irenab, yes, that's what we have been talking about in all of our docs
13:25:58 <baoli> But let's start with nova boot
13:26:02 <irenab> baoli: I'm missing the question :-)
13:27:11 <baoli> irenab, what question? The input will be put into the binding:profile. In addition, neutron port-create can use the same parameters
13:28:00 <irenab> sure, so what do you want to discuss?
13:28:52 <baoli> well, just to make sure we are on agreement with that, and then we can submit the BP for approval
13:29:22 <rkukura> One point - if existing port UUID is passed into nova via --nic, nova needs to merge its binding:profile keys with any that already exist when doing the port_update.
13:29:43 <baoli> rkukura, agreed
13:29:52 <irenab> agree
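The merge behavior rkukura describes can be sketched as follows. This is an illustrative helper, not actual nova code: when an existing port UUID is passed via --nic, nova must layer its binding:profile keys on top of whatever the user already set on the port, rather than overwrite the whole dict.

```python
# Illustrative sketch of the merge rkukura describes: nova should not
# clobber binding:profile keys already present on a pre-created port.

def merge_binding_profile(existing, nova_updates):
    """Return a new profile: the port's existing binding:profile plus
    nova's keys (nova's values win on conflict). Leaves inputs untouched."""
    merged = dict(existing)
    merged.update(nova_updates)
    return merged

# Example: the user created the port with vnic_type and a port-profile,
# and nova later adds the allocated PCI device details on port_update.
port_profile = {"vnic_type": "direct", "port-profile": "pp1"}
nova_keys = {"pci_slot": "0000:03:10.1", "pci_vendor_id": "15b3"}
```

The important property is that the user-supplied keys survive nova's port_update.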
13:30:26 <baoli> So neutron port-create will have similar parameters. And --nic port-id=sriov-port will populate the binding:profile after getting the info from neutron
13:30:28 <irenab> I do not think the pci_flavor should be pushed to port profile, it should be pci slot details, right?
13:31:46 <baoli> irenab, if a pci-flavor name can correspond to a provider net or physical net, certainly it's useful information in the neutron
13:32:41 <irenab> baoli: I mean it can pass, but its not enough, we need the pci device BDF too
13:32:52 <baoli> irenab, agreed.
13:33:15 <rkukura> baoli: What do you mean above by "after getting the info from neutron"?
13:34:05 <baoli> rkukura, if --nic port-id=sriov-port-uuid is specified, nova will query neutron for port info. So the binding:profile will be returned as part of the query
13:35:09 <baoli> irenab, I also think that vendor_id and product_id are useful information for the mech driver.
13:35:35 <rkukura> baoli: OK, you just mean getting the existing binding:profile, adding stuff, and then doing a port_update. You said "port-create" above, in which case the port doesn't already exist.
13:35:54 <irenab> baoli: agree, this can help to filter between different vendor Mech Drivers for SR-IOV
13:36:18 <baoli> rkukura, with --nic port-id=sriov-port-uuid, the port has to be created before hand
13:37:40 <rkukura> baoli: agreed - I was just confusing this with the case where nova creates the port.
13:37:42 <baoli> so it would be like: neutron port-create --vnic-type=direct --pci-flavor=f1 net, and then nova boot --nic port-id=port-uuid. It's not the exact CLI syntax, but that's the idea
13:38:16 <irenab> baoli: agree
13:38:21 <baoli> cool
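Written out as the two API payloads, the agreed flow might look like the sketch below. All field and key names are illustrative assumptions, not the final Neutron/Nova API.

```python
# Sketch of the two-step flow as request payloads; names are illustrative.

# Step 1: neutron port-create --vnic-type=direct --pci-flavor=f1 net1
port_create_body = {
    "port": {
        "network_id": "net1-uuid",
        "binding:profile": {
            "vnic_type": "direct",
            "pci_flavor": "f1",
        },
    },
}

# Step 2: nova boot --nic port-id=<uuid of the port created above>
# Nova then queries the port, reads back binding:profile, and merges in
# the allocated PCI device details on port_update.
boot_nic = {"port-id": "port1-uuid"}
```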
13:38:34 <baoli> Now let's talk about the neutron net-create command
13:38:42 <heyongli> --pci-flavor=f1:number
13:39:06 <baoli> heyongli, we don't need :number, since it's only one nic a time
13:39:28 <heyongli> baoli: to keep it all the same , i suggest it should have one
13:40:06 <irenab> heyongli: can it be assumed 1 if not specified?
13:40:11 <baoli> heyongli, I suggest that if it's one, it can be omitted in the extra spec as well
13:40:26 <baoli> in our case, it's always one
13:40:40 <heyongli> baoli: let's accept this, maybe it's convenient for you guys.
13:41:18 <baoli> disagree
13:41:35 <heyongli> what ?
13:41:55 <heyongli> i mean omit is ok for me, disagree?
13:42:24 <baoli> heyongli, sorry, I misunderstood because of the sequence of messages. So that's cool
13:42:45 <baoli> So we agree that if it's one, it can be omitted
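The agreement just reached (count defaults to 1 when omitted) could be captured by a small parser like this. The `flavor:count` syntax follows heyongli's suggestion above; the function name is hypothetical.

```python
# Sketch of parsing a '--pci-flavor f1[:count]' spec, per the agreement
# that a count of 1 may be omitted. Name and exact syntax are illustrative.

def parse_pci_flavor_spec(spec):
    """Return (flavor_name, count); count defaults to 1 when omitted."""
    name, sep, count = spec.partition(":")
    return name, int(count) if sep else 1
```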
13:43:01 <irenab> baoli: can you present the net-create
13:43:43 <baoli> irenab, I was thinking to add --pci-flavor and --port-profile to the command.
13:44:29 <irenab> baoli: so you need extension for this?
13:45:01 <baoli> The idea behind adding pci-flavor is that a pci flavor can be associated with a physical net by admin, and a neutron network is associated with a physical net.
13:45:19 <baoli> irenab, possibly if we agree to add them
13:45:36 <irenab> baoli: another question, how the pci flavor will be taken into account by scheduler?
13:46:16 <baoli> irenab, when you specify --nic net-id=net-uuid, nova query neutron for the network information, so that's when that information will be passed back to nova
13:46:38 <baoli> the rest will be the same
13:46:48 <irenab> baoli: not sure, but is it before the scheduler decision?
13:46:58 <baoli> irenab, yes.
13:47:57 <irenab> so from the point it gets it from network details, it will be the flow heyongi already supports/plan to support?
13:48:12 <baoli> So the sequence of api requests would be: neutron net-create --pci-flavor f1 net1, nova boot --nic net-id=net1-uuid.
13:48:23 <baoli> irenab, it will be the same flow as the patch I have posted
13:48:27 <irenab> baoli: So you just do not need to specify it explicitly on nova boot?
13:49:04 <baoli> irenab, yes.
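The flow just confirmed (nova queries the network before scheduling and derives the PCI request from its flavor) might be sketched like this. All names and shapes are illustrative assumptions, not nova's actual internals.

```python
# Sketch of the pre-scheduling step: when --nic net-id=... is used, nova
# fetches the network from neutron, finds its pci_flavor, and turns it
# into a PCI request stored with the instance before scheduling.
# Names are illustrative only.

def build_pci_request(network):
    """Derive a PCI request dict from a network's pci_flavor, if any.

    Count is 1, since each --nic maps to a single NIC (per the earlier
    agreement that a count of 1 may be omitted).
    """
    flavor = network.get("pci_flavor")
    if flavor is None:
        return None  # ordinary network: no PCI device needed
    return {"pci_flavor": flavor, "count": 1}
```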
13:49:49 <irenab> I think the correct construct to associate the pci_flavor with is the provider_network and not the virtual network, so probably it can come from a config file
13:49:50 <baoli> This gives the admin a simplified view/tool of the sriov network
13:50:28 <baoli> irenab, that's a good idea too. we can think about it more
13:50:43 <irenab> and if needed can be overriden on virtual network/port level
13:50:46 <baoli> Assuming that we use sriov for provider net only
13:51:30 <irenab> baoli: I like your idea to simplify the management view
13:51:39 <baoli> rkukura, any thoughts on this?
13:51:43 <rkukura> irenab: Do you mean physical_network rather than "provider network"?
13:52:09 <irenab> rkukura: same
13:52:15 <rkukura> Once they are created, provider networks are no different than tenant networks. In fact, there is no way to know which way it was created.
13:52:47 <irenab> I mean physical network you specify via provider extension
13:52:55 <rkukura> ok, that's what I thought
13:53:19 <rkukura> at least for flat and vlan networks, this makes sense
13:53:59 <rkukura> Are any other network_types, which might not involve a physical_network, in scope for this discussion?
13:54:24 <irenab> rkukura: at least for now, not for Mellanox case
13:55:37 <rkukura> So where in the process does the physical_network come into play? Does the user need to see this?
13:55:58 <baoli> rkukura, by user, you mean?
13:56:08 <rkukura> Only admin users would ever know anything about any physical_network
13:57:00 <baoli> rkukura, when we create a neutron network, the physical network needs to be specified via the provider extension, right?
13:57:02 <irenab> the matching pci_flavor to physical_net should be done by admin only
13:57:19 <baoli> agreed
13:57:27 <irenab> baoli: there can be default, then no need to specify
13:57:44 <rkukura> ok, so a non-admin user needs to pick the right flavor for the network he is using?
13:58:02 <heyongli> rkukura: for current, it is.
13:58:03 <rkukura> That's fine for now.
13:58:10 <irenab> rkukura: not if it was associated previously by admin
13:58:10 <baoli> irenab, so you mean to say that it should be configured
13:58:31 <irenab> baoli: what should be configured?
13:58:48 <heyongli> rkukura, we had plan B, discuss in the meeting, i recall.
13:58:59 <baoli> irenab, I mean to say that the pci flavor and physical net association can be configured
13:59:06 <rkukura> So when it comes to the ml2 port binding by the PCI-passthru MechanismDriver, it will make sure the network has a segment whose physical_network matches that of the PCI device?
13:59:31 <irenab> its an option, like you have vlan_ranges configured via config
13:59:54 <irenab> rkukura: it should be so
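The binding check rkukura describes could be sketched as below. The function name and segment shapes are illustrative, not the real ML2 MechanismDriver API; it assumes only flat and vlan network types are in scope, as agreed above.

```python
# Sketch of the check the PCI-passthru MechanismDriver would make at port
# binding time: bind only if some segment of the network has a
# physical_network matching the one the PCI device is wired to.
# Function name and dict shapes are illustrative, not ML2's actual API.

def can_bind(segments, device_physnet):
    """True if any flat/vlan segment's physical_network matches the
    physical network the candidate PCI device is attached to."""
    return any(
        seg.get("network_type") in ("flat", "vlan")
        and seg.get("physical_network") == device_physnet
        for seg in segments
    )

segments = [
    {"network_type": "vlan", "physical_network": "physnet1",
     "segmentation_id": 100},
]
```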
14:00:05 <baoli> Ok, time is running out. Let's continue on Monday; at the same time let's also talk about work division.
14:00:09 <baoli> #endmeeting