13:00:45 <moshele> #startmeeting sriov
13:00:50 <openstack> Meeting started Tue Sep  6 13:00:45 2016 UTC and is due to finish in 60 minutes.  The chair is moshele. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:53 <openstack> The meeting name has been set to 'sriov'
13:00:58 <moshele> hi everyone
13:01:03 <wznoinsk> hi
13:01:06 <vladikr> hi
13:01:11 <lennyb> hi
13:01:20 <edand> hi
13:02:30 <sfinucan> howdy
13:02:31 <moshele> #topic Improving SR-IOV/PCI Passthrough CI
13:03:00 <moshele> status update on the CI
13:04:07 <moshele> there is an issue with qemu: a "Virtqueue size exceeded" error when resuming a VM
13:04:21 <moshele> see https://www.redhat.com/archives/libvir-list/2016-August/msg00406.html https://bugzilla.redhat.com/show_bug.cgi?id=1371943
13:04:21 <openstack> bugzilla.redhat.com bug 1371943 in qemu-kvm-rhev "RHSA-2016-1756 breaks migration of instances" [High,New] - Assigned to virt-maint
13:05:32 <moshele> so it seems the investigation to fix it is in progress; we saw it in our CI during suspend/resume, so we downgraded the qemu version
13:06:04 <moshele> there is also an issue with libvirt and macvtap, see https://www.redhat.com/archives/libvir-list/2016-September/msg00076.html
13:06:45 <moshele> we plan to push a WIP patch to work around the libvirt issue https://review.openstack.org/#/c/364121/
13:07:52 <moshele> just wanted to share some of the issues we saw in the Mellanox CI
13:08:30 <moshele> does anyone have anything to update related to the CI or these issues?
13:09:07 <sfinucan> nothing on my end, sadly
13:09:25 <moshele> let's move on
13:09:43 <moshele> #topic Patches for Subteam Review
13:10:07 <moshele> https://etherpad.openstack.org/p/sriov_meeting_agenda Line 32
13:11:02 <moshele> I forgot to ping dansmith about  https://review.openstack.org/#/c/349060 and https://review.openstack.org/#/c/347558/
13:11:10 <moshele> I will do it after the meeting
13:12:03 <moshele> these are the final patches for making pci migration-revert work; hopefully they will be merged this cycle
13:12:50 <moshele> any new patches for SR-IOV that I am not aware of?
13:13:39 <moshele> also the neutron side of BWG was merged with support for SR-IOV
13:14:17 <moshele> we plan to add this to our CI (testing rate limit and BWG)
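For context on the rate-limit testing mentioned above: the SR-IOV agent applies VF rate limits through the PF using iproute. A minimal sketch of the call involved, in Python for illustration; the device name and rate are examples, and newer iproute/kernel versions spell the option max_tx_rate rather than rate:

    import subprocess

    def set_vf_rate(pf_dev, vf_index, rate_mbps):
        """Limit a VF's TX rate by programming it through its PF.

        Shell equivalent: ip link set <pf> vf <index> rate <mbps>
        """
        subprocess.check_call([
            "ip", "link", "set", pf_dev,
            "vf", str(vf_index), "rate", str(rate_mbps),
        ])

    # e.g. cap VF 3 on enp3s0f0 at 100 Mbps
    set_vf_rate("enp3s0f0", 3, 100)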
13:14:38 <sfinucan> I didn't see any recently
13:14:39 <moshele> #topic Specs for Review
13:15:26 <sfinucan> I'm still looking for ideas on 'PCI passthrough device role tagging - https://review.openstack.org/#/c/307028/'
13:15:42 <sfinucan> including whether it's a useful idea or no :)
13:16:06 <sfinucan> Also, is 'Add spec to enhance PCI passthrough whitelist to support regex' going ahead? I thought there was a lot of pushback?
13:16:33 <moshele> sfinucan: I need to read the spec - PCI passthrough device role tagging  :)
13:16:48 <sfinucan> moshele: Ah, no problem :)
13:16:58 <sfinucan> your input would probably be valuable
13:17:12 <sfinucan> as to the value proposition of the idea
13:17:14 <moshele> what pushback? I just didn't get reviews from nova cores
13:17:36 <sfinucan> oh, maybe I'm mixing it up with something else...
13:17:58 <sfinucan> yeah, the -2 is only for FF. My mistake
13:19:11 <vladikr> sfinucan, there is a new spec for the regex proposal: https://review.openstack.org/#/c/350211/1
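For reference, the current pci_passthrough_whitelist in nova.conf already accepts glob-style wildcards in the address field; the spec above proposes extending matching to full regular expressions. A sketch of today's syntax (addresses, IDs, and physnet name are examples):

    pci_passthrough_whitelist = { "address": "*:0a:00.*", "physical_network": "physnet2" }
    pci_passthrough_whitelist = { "vendor_id": "8086", "product_id": "10ed" }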
13:19:13 <moshele> regarding NUMA, we have this: Enable to share PCI devices between numa nodes - https://review.openstack.org/#/c/361140/
13:19:27 <moshele> I didn't review it yet
13:19:56 <moshele> and I would also like to push this: User-controlled SR-IOV ports allocation - https://review.openstack.org/#/c/182242/
13:20:40 <sfinucan> I've reviewed the former of those. I'll review the latter today
13:21:26 <moshele> sfinucan: so currently I am not addressing NUMA in User-controlled SR-IOV ports allocation, but I wonder if I should
13:22:29 <sfinucan> moshele: Totally up to you. From my perspective it seems a little too "uncloudy" for my liking
13:22:40 <sfinucan> but that's just me. Maybe you'd see benefits from your end
13:22:50 <sfinucan> always good to get contrasting opinions :)
13:23:27 <m1dev> Is there any plan to support using DPDK from inside a Guest OS
13:23:35 <m1dev> on VFs
13:24:14 <moshele> I mean, for a start I just want to address selecting the correct pci devices to achieve HA, and we can extend it later
13:24:51 <sfinucan> m1dev: Probably a question best directed to the dev@dpdk.org mailing list
13:25:13 <sfinucan> m1dev: nova wouldn't be doing anything like that inside guests
13:25:56 <sfinucan> moshele: Yeah, I get that. Need to think about it a little more, tbh
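To illustrate the HA point: with two SR-IOV ports intended for bonding inside the guest, today's allocator may pick both VFs from the same PF, so a single NIC failure kills both ports; the spec is about letting the user steer that choice. A purely hypothetical sketch of the intent (the anti-affinity hint is invented for illustration, not taken from the spec):

    # Two direct (SR-IOV) ports meant to be bonded in the guest for HA.
    # Hypothetical: some hint that their backing VFs must come from
    # different PFs, so a single NIC/PF failure cannot take out both.
    ports = [
        {"name": "bond-port-1", "vnic_type": "direct"},
        {"name": "bond-port-2", "vnic_type": "direct"},
    ]
    pci_anti_affinity = ("bond-port-1", "bond-port-2")  # invented hint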
13:26:10 <m1dev> dpdk supports it, but then you have a hostdev device that needs to be configured outside the sriov agent, since we cannot use iproute to configure the vf
13:26:57 <sfinucan> m1dev: Ping me after this meeting? Would need to know a little more about your desired configuration
13:27:08 <m1dev> sure
13:27:32 <moshele> anything else regarding specs?
13:27:46 <moshele> do you have anything else to talk about?
13:27:46 <trevormc> hi all, I do have a spec I want to put up for review but I am looking for collaboration on it. should I wait for open discussion to talk about this?
13:28:07 <moshele> we can talk about it now
13:28:50 <lbeliveau> hi, sorry I'm late
13:28:52 <sfinucan> moshele: (nothing else on my end this week)
13:28:59 <sfinucan> hey lbeliveau
13:29:08 <moshele> hi, lbeliveau:
13:29:14 <trevormc> Here is a paste for the spec I was talking about
13:29:18 <trevormc> http://paste.openstack.org/show/567157/
13:30:04 <lbeliveau> trevormc: I'll have a closer look ...
13:30:52 <trevormc> lbeliveau: thanks
13:30:53 <moshele> what are the other vf configuration utilities?
13:31:09 <moshele> are those vendor tools?
13:31:36 <trevormc> vf configuration utilities can be seen here https://github.com/att/vfd
13:32:42 <moshele> so you want to make the agent use a pluggable driver to configure VFs?
13:32:54 <moshele> and the default will be iproute
13:33:03 <m1dev> yes
13:33:14 <moshele> I see
13:33:46 <m1dev> the difference is these vfs are bound to dpdk modules and are in userspace
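A rough sketch of the pluggable-driver idea under discussion, assuming iproute stays the default and a VFD-style tool becomes an alternative backend; all class and method names here are invented for illustration:

    import abc
    import subprocess

    class VFConfigDriver(abc.ABC):
        """Hypothetical per-deployment driver loaded by the SR-IOV agent."""

        @abc.abstractmethod
        def set_vf_state(self, pf_dev, vf_index, enable):
            """Enable or disable a VF's link."""

    class IpRouteDriver(VFConfigDriver):
        """Default path: kernel-owned VFs configured through the PF."""

        def set_vf_state(self, pf_dev, vf_index, enable):
            state = "enable" if enable else "disable"
            subprocess.check_call(
                ["ip", "link", "set", pf_dev,
                 "vf", str(vf_index), "state", state])

    class VfdDriver(VFConfigDriver):
        """DPDK-bound VFs live in userspace, out of iproute's reach;
        delegate to an external tool such as VFD instead."""

        def set_vf_state(self, pf_dev, vf_index, enable):
            raise NotImplementedError("invoke the VFD daemon/CLI here")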
13:34:27 <moshele> do you have POC code for the agent refactoring?
13:34:53 <m1dev> Yes, we hacked vif.py to invoke the vfd
13:35:00 <m1dev> vif.py in nova
13:35:04 <m1dev> not via agent yet
13:36:11 <moshele> I would suggest doing a POC of the code in the agent, and then you can open an RFE in neutron
13:36:25 <m1dev> ok
13:36:55 <sfinucan> +1
13:37:09 <moshele> also, you know the agent only knows the pci devices of the VF and the PF; is that enough to use your utils?
13:37:52 <m1dev> no, we used the binding profile field in the neutron port to pass the vf configuration for now
13:38:25 <m1dev> so dpdk allows you to customize a lot of params on the vf via the agent
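For illustration, the kind of payload that might ride in a port's binding:profile. The pci_slot, physical_network, and pci_vendor_info keys are ones nova already populates for SR-IOV ports today; the vf_config block is hypothetical, sketching what a VFD-style driver might consume:

    binding_profile = {
        "pci_slot": "0000:0a:00.2",      # set by nova today
        "physical_network": "physnet2",  # set by nova today
        "pci_vendor_info": "8086:10ed",  # set by nova today
        # Hypothetical extras for a userspace VF driver:
        "vf_config": {"insert_vlan": 100, "link_speed": "10G"},
    }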
13:38:53 <moshele> ok, start the POC and we will take a look
13:39:06 <moshele> anything else?
13:39:17 <lbeliveau> also I suggest submitting an official blueprint for ocata
13:39:20 <m1dev> so these values can be passed in the binding profile; i think mellanox already supplies a json to their switch
13:39:54 <m1dev> sure, trevormc will submit it
13:40:29 <moshele> no, Mellanox is using iproute to configure the VFs
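For reference, the iproute-based VF configuration being referred to is the standard ip link interface applied on the PF; the device name and values below are examples:

    import subprocess

    PF = "enp3s0f0"  # example PF netdev
    for args in (
        ["vf", "0", "mac", "fa:16:3e:aa:bb:cc"],  # assign MAC
        ["vf", "0", "vlan", "100"],               # tag VLAN 100
        ["vf", "0", "spoofchk", "on"],            # enable spoof checking
    ):
        subprocess.check_call(["ip", "link", "set", PF] + args)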
13:40:56 <m1dev> yes i understand but i am talking about the data that will be used
13:41:12 <m1dev> so depending on the agent, that field will configure the vf
13:41:59 <moshele> not sure I understand. it will be easier with a spec/blueprint and some code :)
13:42:04 <m1dev> sure
13:42:11 <moshele> anything else?
13:42:35 <sfinucan> nothing on my end
13:42:37 <moshele> #endmeeting