13:00:19 #startmeeting PCI passthrough
13:00:21 Meeting started Tue Jul 22 13:00:19 2014 UTC and is due to finish in 60 minutes. The chair is baoli_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:25 The meeting name has been set to 'pci_passthrough'
13:00:53 hi
13:00:57 hi
13:01:47 hi
13:03:25 shall we start with a quick update?
13:03:56 irenab, I was waiting for more folks to join.
13:04:26 baoli: thank you for the review you did on the Mech Driver
13:04:48 irenab, sure thing.
13:05:14 Ok, let's get started.
13:05:28 I can attend only for half an hour today. Sorry in advance.
13:05:37 Quick update from my side: all the code is in. Review is needed.
13:05:38 Hi
13:05:49 sadasu, hi
13:06:08 baoli_: cool!
13:06:35 baoli: Itzik and I are working continuously with all the patches under review to verify end to end.
13:07:05 irenab, all you need to do is get the code from the last patch.
13:07:22 Due to the dependencies, it will pull in all the dependent patches.
13:07:41 baoli: exactly what we do; we will provide code reviews soon
13:07:49 you can use 'git review -N 107466'
13:08:05 baoli: thanks
13:08:07 irenab, cool.
13:08:33 The patch keeps changing due to conflicts with code that's coming in.
13:08:44 baoli_: so are you recommending that we not use the diffs that rpothier sent out a week+ ago?
13:09:20 sadasu, if you want to use the up-to-date code, you can get it from the review now.
13:09:29 ok
13:10:16 it's a heck of a job keeping the code up to date. Now some unit tests failed in the last patch, which I'll have to investigate.
13:10:26 baoli: we sometimes see issues, but we're not sure if they are directly related to the submitted code. Will follow up via IRC or mail.
13:10:56 irenab, do let me know.
13:11:09 hi, sorry I'm late
13:11:15 heyongli, hi
13:11:26 baoli: sure, will gather more details and consult with you
13:11:39 hi
13:12:04 I'm hoping that people, especially cores, will start reviewing the code.
13:12:16 do we have any current issues to discuss, or do we just need to review and verify?
13:12:47 before the cores get involved, the test cases had better pass.
13:12:49 irenab, we'll need to review and verify
13:13:06 heyongli, it was passing earlier.
13:13:23 I have an advanced use case to discuss if we don't have other items
13:13:44 irenab, I'd like to talk about your review for a bit
13:13:50 baoli, great
13:13:52 baoli: sure
13:14:26 Two issues: first, inheritance, and second, device mapping
13:14:45 baoli: inheritance is already changed
13:15:00 just have some tox issues; I will probably upload later today
13:15:10 baoli, Jenkins is still -1
13:15:18 irenab: inheritance changed to what?
13:15:30 api.MechDriver
13:15:52 irenab, cool.
13:16:23 irenab, a while back, I sent out some code that inherits from MechDriver and uses decorators
13:16:36 baoli: the idea to make the agent not mandatory came from heyongli's input that the Intel NIC does not support link state changes
13:16:47 heyongli, yes, the patch uploaded yesterday failed
13:18:08 one question: does SR-IOV neutron testing need specific external switch support?
13:18:09 baoli: I still think the agent is required, so I'm changing the code accordingly. It is also simplified
13:18:51 irenab, the agent is optional, as we have agreed.
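For reference on the inheritance point just discussed: below is a minimal sketch of a mechanism driver that inherits directly from the ML2 driver API base class (api.MechanismDriver in the Neutron tree of this era). The class name, the agent_required attribute, and the SR-IOV specifics are illustrative, not the code under review.

    # Minimal sketch, assuming the ML2 driver API of this era
    # (neutron.plugins.ml2.driver_api). Everything SR-IOV-specific
    # below is illustrative, not the reviewed patch.
    from neutron.plugins.ml2 import driver_api as api


    class SriovMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # The per-deployment choice discussed above: whether an L2
            # agent is required. Illustrative flag, not a real option.
            self.agent_required = False

        def bind_port(self, context):
            # An SR-IOV driver would check the port's requested vnic_type
            # (e.g. 'direct') before attempting to bind it.
            pass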
13:19:30 heyongli: not for the MD for SR-IOV switch NICs
13:19:33 heyongli: support for the Intel card should not need a specific external switch
13:19:50 baoli: it depends on the deployer's choice whether to require the agent
13:20:05 thanks irenab, sadasu
13:20:09 if dynamic changes are expected, the agent will be required
13:20:38 for the HW_VEB case, VFs are managed locally
13:20:44 irenab: could you clarify "dynamic changes"?
13:21:14 admin state can be managed via the 'ip link set state' command
13:21:39 due to the port-update --admin_state_up call for a neutron port
13:22:10 there are also options for QoS and others, but they are not yet supported
13:22:16 irenab, that's right. Your existing patch has a config item about whether or not the agent is required for a particular device
13:22:43 baoli: it is currently for the whole deployment and not per device.
13:23:01 baoli: maybe it can be enhanced later
13:23:07 to what I think you mean
13:23:24 But it has nothing to do with the parent class your new class should inherit from
13:23:51 yes... I wanted to discuss the parent class too
13:23:56 baoli: agreed, and I have already changed the parent class
13:24:10 irenab, cool
13:24:22 sadasu: go ahead
13:24:22 irenab, the device mapping config from the agent side
13:24:41 what other changes had to be made when you changed the parent class?
13:24:42 do you think that multiple agents would be running per host?
13:25:25 baoli: I do not want to make assumptions here
13:25:26 baoli_: we should not disallow multiple agents afaik
13:26:07 I prefer to follow the existing examples, and there is a sort of assumption like that for other agents
13:26:27 but there is no need for more than one agent of a certain type on the same compute node
13:27:02 Just want to confirm that multiple agents of the same type on a single host are allowed in neutron
13:27:32 baoli: at least the existing code assumes that this can be the situation
13:28:12 irenab, I'm just thinking that if that's not something we need to support now, then device mapping is not needed.
13:28:36 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_agent.py#64
13:29:00 device mapping also determines which VFs to watch in the periodic loop
13:29:34 please take a look at part 2 of the patch
13:29:49 irenab, ok, I haven't seen your second part yet. If it's something that has to be there, that's fine
13:31:35 I wanted to ask if having OVS on the PF and VFs on the same compute node is a use case you want to support
13:32:23 irenab, that's a good question.
13:32:41 interesting
13:33:18 at least for Intel and Mellanox NICs, there is a problem with communication between an OVS VM and a VF VM on the same compute node and same VLAN
13:33:23 not sure if someone would want to pay a premium for SR-IOV ports and not use them in that mode
13:34:02 but that said, if someone wanted to configure it that way, it should not be disallowed
13:34:16 it may be the case that a guest without vendor drivers needs to communicate with a guest that can consume an SR-IOV VF
13:35:13 using ML2 it's possible, but for the case with an embedded-switch NIC, it requires adding the OVS VMs' MACs to the PF
13:35:42 irenab, I think it should be ok for OVS with the PF, but I'm not sure about OVS with a VF.
13:36:53 what I wanted to raise is that if we are going to support this case, there should be some way to add the MACs of VMs on OVS to the PF
13:37:30 what do you mean by "MACs of VMs on OVS to the PF"?
13:38:34 if there is a VM connected via OVS, which is connected to the PF interface, that wants to talk to a VM connected via an SR-IOV VF of the same PF
13:39:10 irenab, I see.
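On the 'ip link set state' mechanism mentioned at 13:21 above, here is a minimal sketch of how a neutron port-update --admin_state_up change could be mapped to the per-VF link-state control on the PF. The helper, the PF name, and the VF index are all illustrative, and, as noted in the discussion, some NICs (Intel at the time) do not expose this control at all.

    # Sketch only: drive the iproute2 per-VF link-state control from a
    # port's admin_state_up value. Device name and VF index are made up.
    import subprocess


    def set_vf_link_state(pf_dev, vf_index, admin_up):
        state = 'enable' if admin_up else 'disable'
        subprocess.check_call(['ip', 'link', 'set', pf_dev,
                               'vf', str(vf_index), 'state', state])


    # e.g. after 'neutron port-update <port> --admin_state_up=False' for
    # the port bound to VF 3 on eth2:
    # set_vf_link_state('eth2', 3, admin_up=False)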
13:39:10 is that really a valid case?
13:39:35 sadasu: I think the case is valid, but I'm not sure how common
13:40:05 irenab, does mlnx treat the PF interface as a normal ethernet interface from the host's point of view?
13:40:17 if there is a need to support it, it seems some sort of mechanism is needed to propagate MACs and program them on the PF
13:40:27 baoli: yes
13:40:34 I know that for SR-IOV ports, traffic between 2 VFs is treated specially by the switch
13:40:52 irenab, in Cisco VM-FEX, the PF already has an assigned MAC
13:41:18 but I'm not sure if traffic between the PF and a VF is treated that way... and it may differ in each vendor's implementation
13:41:20 baoli: but can OVS be connected to this PF?
13:41:37 irenab, I haven't tried it. But I think it should work
13:41:42 irenab: did not try that
13:42:06 irenab, that's the case I referred to as OVS + PF
13:42:19 if this use case is required for both Intel and Mellanox, it will require some additions
13:42:39 irenab, you mean additions in nova?
13:43:12 baoli: not sure; more likely neutron, or maybe libvirt...
13:43:20 irenab, I would consider that part of host provisioning
13:43:27 I'd like to confirm how the Intel NIC handles this
13:43:43 yongli, are you heyongli?
13:43:50 yeah
13:44:01 heyongli, I thought you had left
13:44:06 I can share some slides with you; they are about Intel
13:44:09 no
13:44:24 will send them by mail later today
13:44:27 send them to me, thanks
13:44:42 irenab, thanks.
13:44:49 but we probably need to complete the basic support for SR-IOV first
13:45:05 irenab, absolutely, let's review and get them in
13:45:21 heyongli, one issue with the system metadata
13:45:23 anyway, corner cases won't stop us
13:45:40 baoli, what is it?
13:46:09 PCI requests are saved in the metadata. Metadata is a key-value pair, with the value being 255 characters max
13:46:38 do we exceed that?
13:46:51 if PCI requests come from both SR-IOV and generic passthrough, or from multiple SR-IOV requests, it would exceed that easily
13:47:46 however, increasing that would impact all the features using the metadata table
13:47:49 it's a common problem, not only PCI; I suspect other things will fill that up too
13:48:28 I think this is something we need to keep in mind, and we need to find a solution later. I will log a bug
13:48:48 no good idea for now; a bug is good
13:49:46 yongli, another question is about the VF state change from the host.
13:50:18 what state?
13:50:24 how do you change the VF state on the host before or after it has been attached to a VM?
13:50:42 yongli, this is something you guys talked about three weeks ago.
13:51:22 sorry, but I have no idea what it is now
13:51:50 do you mean link up/down?
13:52:17 the Intel NIC has no interface to control per-VF up/down from the host
13:52:53 right now there is no interface to do such a thing
13:53:21 what is the concern?
13:53:36 yongli, no concern. Just wanted to know about it.
13:54:05 ok, would it be better for us to have that?
13:54:15 so such an interface doesn't actually exist now.
13:54:24 yeah
13:54:39 yongli, well, neutron has a command to control a port's admin state.
13:55:03 I can help broadcast the need from OpenStack if it is good to have
13:55:07 but for SR-IOV, we won't be able to do so
13:56:18 Anything else for today?
13:56:20 baoli_: I agree... this may be an issue later
13:56:21 just to say, I will be on vacation for the rest of this week
13:56:53 baoli_: mostly from the neutron side
13:57:10 yongli, have a good time.
13:57:23 sadasu, agreed.
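To make the system-metadata size concern above concrete: instance system metadata values live in a 255-character string column, and the instance's PCI requests are JSON-serialized into a single such value. A rough sketch follows; the request contents are made up.

    # Rough sketch of why multiple PCI requests can overflow the
    # 255-character system_metadata value column; contents are made up.
    import json

    pci_requests = [
        {'count': 1, 'alias_name': None,
         'spec': [{'vendor_id': '8086', 'product_id': '10ed',
                   'physical_network': 'physnet1'}]},
        {'count': 2, 'alias_name': 'bigGPU',
         'spec': [{'vendor_id': '15b3', 'product_id': '1004'}]},
    ]
    print(len(json.dumps(pci_requests)))  # already over 200 characters;
                                          # a third request would pass 255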
13:57:30 thanks
13:57:51 yongli: gr8... have fun
13:58:13 baoli_: not sure if you noticed, but my BP got approved at the very last minute
13:58:19 going to a place with no pollution
13:58:27 yongli: enjoy the vacation
13:58:37 sadasu, saw it. Great, and good job!
13:58:43 sadasu
13:58:50 good news
13:59:22 yongli, how often do you see blue skies nowadays in Beijing?
13:59:30 yes... there were a lot of objections to the fact that not all neutron features could be supported on SR-IOV ports
13:59:56 baoli, this year is the most often
14:00:05 not so bad actually
14:00:12 yongli, improving, that's good
14:00:22 Thanks everyone. Next week!
14:00:26 #endmeeting