13:00:19 <baoli_> #startmeeting PCI passthrough
13:00:21 <openstack> Meeting started Tue Jul 22 13:00:19 2014 UTC and is due to finish in 60 minutes.  The chair is baoli_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:25 <openstack> The meeting name has been set to 'pci_passthrough'
13:00:53 <baoli_> hi
13:00:57 <rpothier> hi
13:01:47 <irenab> hi
13:03:25 <irenab> shall we start with a quick update?
13:03:56 <baoli_> irenab, I was waiting for more folks to join.
13:04:26 <irenab> baoli: thank you for the review you did on the Mech Driver
13:04:48 <baoli_> irenab, sure thing.
13:05:14 <baoli_> Ok, let's get started.
13:05:28 <irenab> I can only attend for half an hour today. Sorry in advance
13:05:37 <baoli_> Quick update from my side: all the code is in. Review is needed.
13:05:38 <sadasu> Hi
13:05:49 <baoli_> sadasu, hi
13:06:08 <sadasu> baoli_: cool!
13:06:35 <irenab> baoli: Itzik and I are working continuously with all the patches under review to verify end to end.
13:07:05 <baoli_> irenab, all you need to do is to get the code from the last patch.
13:07:22 <baoli_> Due to the dependencies, it will pull in all the dependent patches.
13:07:41 <irenab> baoli: exactly what we do, will provide code review soon
13:07:49 <baoli_> you can use 'git review -N 107466'
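[note: for anyone fetching the series locally, a minimal sketch assuming the git-review plugin is installed and using its -d (download) flag; the nova tree is an assumption here]
    cd nova                  # adjust to whichever project the series targets
    git review -d 107466     # checks out change 107466 along with its parent (dependent) patches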
13:08:05 <irenab> baoli: thanks
13:08:07 <baoli_> irenab, cool.
13:08:33 <baoli_> The patch keeps changing due to conflicts with code that's coming in.
13:08:44 <sadasu> baoli_: so are you recommending that we not use the diffs that rpothier sent out a week+ ago?
13:09:20 <baoli_> sadasu, if you want to use the up-to-date code, you can get it from the review now.
13:09:29 <sadasu> ok
13:10:16 <baoli_> it's a heck of a job keeping the code up to date. Now some unit tests failed in the last patch that I'll have to investigate.
13:10:26 <irenab> baoli: we sometimes see issues, but not sure if they are directly related to the submitted code. Will follow up via IRC or mail
13:10:56 <baoli_> irenab, do let me know.
13:11:09 <heyongli> hi, sorry late
13:11:15 <baoli_> heyongli, hi
13:11:26 <irenab> baoli: sure, will arrange with more details and consult with you
13:11:39 <irenab> hi
13:12:04 <baoli_> I'm hoping that people, especially cores, will start reviewing the code.
13:12:16 <irenab> do we have some current issues to discuss or just need to review and verify?
13:12:47 <heyongli> before cores get involved, the test cases had better pass.
13:12:49 <baoli_> irenab, we'll need to review and verify
13:13:06 <baoli_> heyongli, it was passing earlier.
13:13:23 <irenab> I have an advanced use case to discuss if we do not have other items
13:13:44 <baoli_> irenab, I'd like to talk about your review for a bit
13:13:50 <heyongli> baoli, great
13:13:52 <irenab> baoli: sure
13:14:26 <baoli_> Two issues: first, inheritance, and second, device mapping
13:14:45 <irenab> baoli: inheritance is already changed
13:15:00 <irenab> just have some tox issues, probably will upload later today
13:15:10 <heyongli> baoli, Jenkins still -1
13:15:18 <sadasu> irenab: inheritance changed to what?
13:15:30 <irenab> api.MechDriver
13:15:52 <baoli_> irenab, cool.
13:16:23 <baoli_> irenab, a while back, I sent out some code that inherits from mechdriver and uses decorators
13:16:36 <irenab> baoli: the idea to make the agent not mandatory was due to heyongli's input that the Intel NIC does not support link state change
13:16:47 <baoli_> heyongli, yes, the patch uploaded yesterday failed
13:18:08 <heyongli> one question: does SR-IOV neutron testing need specific external switch support?
13:18:09 <irenab> baoli: I still think the agent is required, so I'm changing the code accordingly. It is also simplified
13:18:51 <baoli_> irenab, the agent is optional, as we agreed.
13:19:30 <irenab> heyongli: not for the MD for SR-IOV switch NICs
13:19:33 <sadasu> heyongli: support for the Intel card should not need a specific external switch
13:19:50 <irenab> baoli: it depends on the deployer's choice whether to require the agent
13:20:05 <heyongli> thanks irenab, sadasu
13:20:09 <irenab> if dynamic changes are expected, the agent will be required
13:20:38 <irenab> for the HW_VEB case, VFs are managed locally
13:20:44 <sadasu> irenab: could you clarify 'dynamic changes'?
13:21:14 <irenab> the admin state can be managed via the 'ip link set ... state' command
13:21:39 <irenab> in response to a port-update --admin_state_up call on the neutron port
13:22:10 <irenab> there is also an option for QoS, and others, but it is not yet supported
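[note: the host-side command being described, as a minimal sketch; PF name eth0 and VF index 0 are assumptions, and it requires an iproute2 build with SR-IOV support]
    ip link set eth0 vf 0 state disable   # or 'enable' / 'auto'; sets the VF link state from the host
    ip link show eth0                     # shows per-VF MAC, VLAN and link state for verification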
13:22:16 <baoli_> irenab, that's right. Your existing patch has a config item about whether or not an agent is required for a particular device
13:22:43 <irenab> baoli: it is currently for the whole deployment and not per device.
13:23:01 <irenab> baoli: maybe later it can be enhanced
13:23:07 <irenab> to what I think you mean
13:23:24 <baoli_> But it has nothing to do with the parent class your new class should inherit from
13:23:51 <sadasu> yes...I wanted to discuss the parent class too
13:23:56 <irenab> baoli: agree and already changed the parent class
13:24:10 <baoli_> irenab, cool
13:24:22 <irenab> sadasu: go ahead
13:24:22 <baoli_> irenab, now for the device mapping config on the agent side
13:24:41 <sadasu> what other changes had to be made when you changed the parent class?
13:24:42 <baoli_> do you think that multiple agents would be running per host?
13:25:25 <irenab> baoli: I do not want to make assumptions here
13:25:26 <sadasu> baoli_: we should not disallow multiple agents afaik
13:26:07 <irenab> I prefer to follow the existing examples, and there is a sort of assumption about this for the other agents
13:26:27 <irenab> but there is no need for more than one agent of a certain type on the same compute node
13:27:02 <baoli_> Just want to confirm that multiple agents of the same type on a single host are allowed in neutron
13:27:32 <irenab> baoli: at least the existing code assumes that this can be the situation
13:28:12 <baoli_> irenab, I'm just thinking that if that's not something we need to support now, then device mapping is not needed.
13:28:36 <irenab> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_agent.py#64
13:29:00 <irenab> device mapping also determines which VFs to watch in the periodic loop
13:29:34 <irenab> please take a look at part 2 of the patch
13:29:49 <baoli_> irenab, ok, I haven't seen your second part yet. If it's something that has to be there, that's fine
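[note: a sketch of the two config knobs under discussion; section and option names are illustrative, based on the patches under review at the time, and may differ from what eventually merged]
    [ml2_sriov]
    # whether the mechanism driver refuses to bind a port when no SR-IOV agent is alive
    agent_required = True

    [sriov_nic]
    # physical network to PF mapping; also tells the agent which VFs to watch in its periodic loop
    physical_device_mappings = physnet1:eth1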
13:31:35 <irenab> I wanted to ask whether having OVS on the PF and VFs on the same compute node is a use case you want to support?
13:32:23 <baoli_> irenab, that's a good question.
13:32:41 <sadasu> interesting
13:33:18 <irenab> at least for Intel and Mellanox NICs there is a problem with communication between an OVS VM and a VF VM on the same compute node and the same VLAN
13:33:23 <sadasu> not sure if someone would want to pay a premium for SR-IOV ports and not use them in that mode
13:34:02 <sadasu> but having said that, if someone wanted to configure it that way, it should not be disallowed
13:34:16 <irenab> it may be the case that a guest without vendor drivers needs to communicate with a guest that can consume an SR-IOV VF
13:35:13 <irenab> using ML2 it's possible, but for the case of a NIC with embedded switching, it requires adding the OVS VMs' MACs to the PF
13:35:42 <baoli_> irenab, I think it should be OK for OVS with the PF, but I'm not sure about OVS with a VF.
13:36:53 <irenab> what I wanted to raise is that if we are going to support this case, there should be some way to add the MACs of VMs on OVS to the PF
13:37:30 <baoli_> what do you mean by adding MACs of VMs on OVS to the PF?
13:38:34 <irenab> if there is a VM connected via OVS, where OVS is connected to the PF interface, and it wants to talk to a VM connected via an SR-IOV VF of the same PF
13:39:10 <baoli_> irenab, I see.
13:39:10 <sadasu> is that really a valid case?
13:39:35 <irenab> sadasu: I think the case is valid, but not sure how common
13:40:05 <baoli_> irenab, does mlnx treat the PF as a normal Ethernet interface from the host's point of view?
13:40:17 <irenab> if there is a need to support it, it seems we need some sort of mechanism to propagate the MACs and program them on the PF
13:40:27 <irenab> baoli: yes
13:40:34 <sadasu> I know that for SR-IOV ports, traffic between two VFs is treated specially by the switch
13:40:52 <baoli_> irenab, in Cisco VM-FEX, the PF already has a MAC assigned
13:41:18 <sadasu> but I'm not sure if traffic between the PF and a VF is treated that way... it may differ in each vendor's implementation
13:41:20 <irenab> baoli: but can OVS be connected to this PF?
13:41:37 <baoli_> irenab, I haven't tried it. But I think it should work
13:41:42 <sadasu> irenab: did not try that
13:42:06 <baoli_> irenab, that's the case I referred to as OVS + PF
13:42:19 <irenab> if this use case is required for both Intel and Mellanox, it will require some additions
13:42:39 <baoli_> irenab, you mean additions in nova?
13:43:12 <irenab> baoli: not sure; more neutron, or maybe libvirt...
13:43:20 <baoli_> irenab, I would consider that as part of host provisioning
13:43:27 <yongli> I'd like to try to confirm how the Intel NIC handles this
13:43:43 <baoli_> yongli, are you heyongli?
13:43:50 <yongli> yeah
13:44:01 <baoli_> heyongli, I thought you had left
13:44:06 <irenab> I can share some slides with you; it's about Intel
13:44:09 <yongli> no
13:44:24 <irenab> will send them in a mail later today
13:44:27 <yongli> send them to me, thanks
13:44:42 <baoli_> irenab, thanks.
13:44:49 <irenab> but we probably need to first complete the basic support for SR-IOV
13:45:05 <baoli_> irenab, absolutely, let's review and get them in
13:45:21 <baoli_> heyongli, one issue with the system metadata
13:45:23 <yongli> anyway, a corner case won't stop us
13:45:40 <yongli> baoli, what is it?
13:46:09 <baoli_> PCI requests are saved in the system metadata. Metadata is a key-value pair with the value being 255 characters max
13:46:38 <yongli> do we exceed that?
13:46:51 <baoli_> if PCI requests come from both SR-IOV and generic, or multiple SR-IOV, it would exceed that easily
13:47:46 <baoli_> however, increasing that would impact all the features using the metadata table
13:47:49 <yongli> it's common, not only PCI; I suspect other things will fill that up as well
13:48:28 <baoli_> I think that this is something that we need to keep in mind, and we need to find a solution later. I will log a bug
13:48:48 <yongli> no good idea now; logging a bug is good
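[note: a rough illustration of how quickly the 255-character cap is reached; the JSON shape is an assumption, loosely modeled on nova's serialized PCI request list]
    python -c 'import json; r = [{"count": 1, "spec": [{"vendor_id": "8086", "product_id": "10ca", "physical_network": "physnet1"}], "alias_name": None, "is_new": False, "request_id": None}]; print(len(json.dumps(r)))'
    # one such request already serializes to roughly 160 characters, so two requests
    # (e.g. one SR-IOV plus one generic) overflow a 255-character system_metadata value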
13:49:46 <baoli_> yongli, another question is about the VF state change from the host.
13:50:18 <yongli> what state?
13:50:24 <baoli_> how do you change a VF's state on the host before or after it has been attached to a VM?
13:50:42 <baoli_> yongli, this is something you guys talked about three weeks ago.
13:51:22 <yongli> sorry, but I have no idea what that is right now
13:51:50 <yongli> do you mean link up/down?
13:52:17 <baoli_> Intel NIC: no interface to control per-VF up/down from the host
13:52:53 <yongli> right now there is no interface to do such a thing
13:53:21 <yongli> what is the concern?
13:53:36 <baoli_> yongli, no concern. Just want to know about it.
13:54:05 <yongli> ok, would it be better to have that?
13:54:15 <baoli_> so such an interface doesn't actually exist now.
13:54:24 <yongli> yeah
13:54:39 <baoli_> yongli, well, neutron has a command to control a port's admin state.
13:55:03 <yongli> I can help to broadcast the need from OpenStack if it is good to have
13:55:07 <baoli_> but for sr-iov, we won't be able to do so
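[note: the neutron call in question, as a minimal sketch using the neutron CLI of the era; $PORT_ID stands in for the SR-IOV port's UUID]
    neutron port-update $PORT_ID --admin_state_up False
    # requests the port down; without host-side per-VF link control on the Intel NIC,
    # the change cannot actually be enforced on the wire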
13:56:18 <baoli_> Anything else for today?
13:56:20 <sadasu> baoli_: I agree...this may be an issue later
13:56:21 <yongli> just to say, I will be on vacation for the rest of this week
13:56:53 <sadasu> baoli_: mostly from the neutron side
13:57:10 <baoli_> yongli, have a good time.
13:57:23 <baoli_> sadasu, agree.
13:57:30 <yongli> thanks
13:57:51 <sadasu> yongli: great... have fun
13:58:13 <sadasu> baoli_: not sure if you noticed but my BP got approved at the very last minute
13:58:19 <yongli> going to a place with no pollution
13:58:27 <irenab> yongli: enjoy the vacation
13:58:37 <baoli_> sadasu, saw it. Great and good job!
13:58:43 <yongli> sadasu
13:58:50 <yongli> good news
13:59:22 <baoli_> yongli, how often do you see blue skies nowadays in Beijing?
13:59:30 <sadasu> yes... there were a lot of objections to the fact that not all neutron features could be supported on SR-IOV ports
13:59:56 <yongli> baoli, this year it's most often
14:00:05 <yongli> not so bad actually
14:00:12 <baoli_> yongli, improving, that's good
14:00:22 <baoli_> Thanks everyone. See you next week
14:00:26 <baoli_> #endmeeting