13:00:45 #startmeeting sriov
13:00:50 Meeting started Tue Sep 6 13:00:45 2016 UTC and is due to finish in 60 minutes. The chair is moshele. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:53 The meeting name has been set to 'sriov'
13:00:58 hi everyone
13:01:03 hi
13:01:06 hi
13:01:11 hi
13:01:20 hi
13:02:30 howdy
13:02:31 #topic Improving SR-IOV/PCI Passthrough CI
13:03:00 status about CI
13:04:07 there is an issue with qemu: a "Virtqueue size exceeded" error when resuming a VM
13:04:21 see https://www.redhat.com/archives/libvir-list/2016-August/msg00406.html https://bugzilla.redhat.com/show_bug.cgi?id=1371943
13:04:21 bugzilla.redhat.com bug 1371943 in qemu-kvm-rhev "RHSA-2016-1756 breaks migration of instances" [High,New] - Assigned to virt-maint
13:05:32 so it seems the investigation to fix it is done; we saw it in our CI in suspend/resume, so we downgraded the qemu version
13:06:04 there is also an issue with libvirt and macvtap, see https://www.redhat.com/archives/libvir-list/2016-September/msg00076.html
13:06:45 we plan to push a WIP patch to work around the libvirt issue: https://review.openstack.org/#/c/364121/
13:07:52 just wanted to share some of the issues we saw in the Mellanox CI
13:08:30 does anyone have anything to update related to CI or these issues?
13:09:07 nothing on my end, sadly
13:09:25 let's move on
13:09:43 #topic Patches for Subteam Review
13:10:07 https://etherpad.openstack.org/p/sriov_meeting_agenda Line 32
13:11:02 I forgot to ping dansmith about https://review.openstack.org/#/c/349060 and https://review.openstack.org/#/c/347558/
13:11:10 I will do it after the meeting
13:12:03 these are the final patches for making PCI migration-revert work; hopefully they will be merged in this cycle
13:12:50 any new patches for SR-IOV that I am not aware of?
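[Editor's note: as background for the PCI passthrough CI discussion above, a compute node only exposes SR-IOV devices that are whitelisted in nova.conf. A minimal sketch with the Newton-era option name; the vendor/product IDs and physical network name are placeholders, not the actual Mellanox CI values:]

```ini
# nova.conf on the compute node (Newton-era option name)
[DEFAULT]
# Expose VFs matching this vendor:product pair, tagged with a physical network.
# 15b3:1004 is a placeholder Mellanox VF ID; adjust for your NIC.
pci_passthrough_whitelist = {"vendor_id": "15b3", "product_id": "1004", "physical_network": "physnet2"}
```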
13:13:39 also the neutron side of BWG was merged with support for SR-IOV
13:14:17 we plan to add this to our CI (testing rate limiting and BWG)
13:14:38 I didn't see any recently
13:14:39 #topic Specs for Review
13:15:26 I'm still looking for ideas on 'PCI passthrough device role tagging' - https://review.openstack.org/#/c/307028/
13:15:42 including whether it's a useful idea or not :)
13:16:06 Also, is 'Add spec to enhance PCI passthrough whitelist to support regex' going ahead? I thought there was a lot of pushback?
13:16:33 sfinucan: I need to read the spec - PCI passthrough device role tagging :)
13:16:48 moshele: Ah, no problem :)
13:16:58 your input would probably be valuable
13:17:12 as to the value proposition of the idea
13:17:14 what pushback? I just didn't get reviews from nova cores yet
13:17:36 oh, maybe I'm mixing it up with something else...
13:17:58 yeah, the -2 is only for FF. My mistake
13:19:11 sfinucan, there is a new revision of the regex spec: https://review.openstack.org/#/c/350211/1
13:19:13 regarding NUMA we have this: 'Enable to share PCI devices between NUMA nodes' - https://review.openstack.org/#/c/361140/
13:19:27 I didn't review it yet
13:19:56 and I would also like to push this: 'User-controlled SR-IOV ports allocation' - https://review.openstack.org/#/c/182242/
13:20:40 I've reviewed the former of those. I'll review the latter today
13:21:26 sfinucan: so currently I am not addressing NUMA in 'User-controlled SR-IOV ports allocation', but I wonder if I should
13:22:29 moshele: Totally up to you. From my perspective it seems a little too "uncloudy" for my liking
13:22:40 but that's just me. Maybe you'd see benefits from your end
13:22:50 always good to get contrasting opinions :)
13:23:27 is there any plan to support using DPDK from inside a guest OS
13:23:35 on VFs
13:24:14 I mean, for a start I just want to address selecting the correct PCI devices to achieve HA, and we can extend it later
13:24:51 m1dev: Probably a question best directed to the dev@dpdk.org mailing list
13:25:13 m1dev: nova wouldn't be doing anything like that inside guests
13:25:56 moshele: Yeah, I get that. Need to think about it a little more, tbh
13:26:10 dpdk supports it, but when you have a hostdev device, it needs to be configured outside the sriov agent, since we cannot use iproute to configure the VF
13:26:57 m1dev: Ping me after this meeting? I would need to know a little more about your desired configuration
13:27:08 sure
13:27:32 anything else regarding specs?
13:27:46 do you have anything else to talk about?
13:27:46 hi all, I do have a spec I want to put up for review, but I am looking for collaboration on it. Should I wait for open discussion to talk about this?
13:28:07 we can talk about it now
13:28:50 hi, sorry I'm late
13:28:52 moshele: (nothing else on my end this week)
13:28:59 hey lbeliveau
13:29:08 hi, lbeliveau
13:29:14 here is a paste for the spec I was talking about
13:29:18 http://paste.openstack.org/show/567157/
13:30:04 trevormc: I'll have a closer look ...
13:30:52 lbeliveau: thanks
13:30:53 what are the other VF configuration utilities?
13:31:09 are those vendor tools?
13:31:36 the VF configuration utilities can be seen here: https://github.com/att/vfd
13:32:42 so you want to give the agent a pluggable driver to configure VFs?
13:32:54 and the default will be iproute
13:33:03 yes
13:33:14 I see
13:33:46 the difference is these VFs are bound to DPDK modules and are in userspace
13:34:27 do you have POC code of the agent refactoring?
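[Editor's note: the iproute-based VF configuration discussed here, which the SR-IOV agent uses by default and which is unavailable for DPDK-bound VFs, boils down to `ip link set <PF> vf <N> ...` commands. A minimal sketch of building those command lines; the function name, parameters, and device name are illustrative, not the real agent code:]

```python
def vf_link_cmds(pf_dev, vf_index, mac=None, vlan=None, max_tx_rate=None):
    """Build the `ip link set` argument lists used to configure one VF.

    Illustrative only -- the real neutron SR-IOV agent wraps this in its
    own ip-link helpers; names and parameters here are hypothetical.
    """
    base = ["ip", "link", "set", "dev", pf_dev, "vf", str(vf_index)]
    cmds = []
    if mac is not None:
        cmds.append(base + ["mac", mac])
    if vlan is not None:
        cmds.append(base + ["vlan", str(vlan)])
    if max_tx_rate is not None:
        # iproute's legacy "rate" argument is the TX cap in Mbit/s
        cmds.append(base + ["rate", str(max_tx_rate)])
    return cmds

# Example: configure VF 0 on PF "enp3s0f0" (device name is a placeholder)
cmds = vf_link_cmds("enp3s0f0", 0, mac="fa:16:3e:00:00:01",
                    vlan=100, max_tx_rate=1000)
```

This is exactly the step that breaks for DPDK-bound VFs: once the VF is bound to a userspace driver, the kernel netdev path these commands rely on is no longer usable, hence the proposal for a pluggable driver such as vfd.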
13:34:53 yes, we hacked vif.py to invoke vfd
13:35:00 vif.py in nova
13:35:04 not via the agent yet
13:36:11 I would suggest doing a POC of the code in the agent, and you can open an RFE in neutron
13:36:25 ok
13:36:55 +1
13:37:09 also, you know that the agent only knows the PCI device of the VF and the PF; is that enough to use your utils?
13:37:52 no, we used the binding profile field in the neutron port to pass the VF configuration for now
13:38:25 so dpdk allows you to customize a lot of params on a VF via the agent
13:38:53 ok, start the POC and we will take a look
13:39:06 anything else?
13:39:17 also, I suggest submitting an official blueprint for Ocata
13:39:20 so these values can be passed in the binding profile; I think Mellanox already supplies a JSON to their switch
13:39:54 sure, trevor will submit it
13:40:29 no, Mellanox is using iproute to configure the VFs
13:40:56 yes, I understand, but I am talking about the data that will be used
13:41:12 so depending on the agent, that field will configure the VF
13:41:59 not sure I understand. It will be easier with a spec/blueprint and some code :)
13:42:04 sure
13:42:11 anything else?
13:42:35 nothing on my end
13:42:37 #endmeeting
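[Editor's note: for reference, the binding:profile discussed at 13:37:52 is the per-port dict nova and neutron already use to exchange SR-IOV details such as the VF's PCI address. A sketch of what such a profile might look like; `pci_slot` and `pci_vendor_info` are the keys nova populates today, while the nested `vf_config` block and its keys are hypothetical, standing in for the driver-specific data the proposal would carry:]

```python
import json

# Hypothetical binding:profile for an SR-IOV port (values are placeholders)
profile = {
    "pci_slot": "0000:03:10.2",       # the VF's PCI address
    "pci_vendor_info": "15b3:1004",   # vendor:product pair of the VF
    "vf_config": {                    # hypothetical driver-specific section
        "queues": 4,
        "insert_stag": True,
    },
}

# The profile travels over the API as JSON; an agent-side driver would
# pull its own keys back out on the compute node.
wire = json.loads(json.dumps(profile))
pci_slot = wire["pci_slot"]
```

A pluggable VF driver would only need to agree with the caller on the `vf_config` schema; the existing keys stay untouched, which is why the approach works without changing the port API itself.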