13:00:47 <moshele> #startmeeting sriov
13:00:48 <openstack> Meeting started Tue May 24 13:00:47 2016 UTC and is due to finish in 60 minutes.  The chair is moshele. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:51 <openstack> The meeting name has been set to 'sriov'
13:01:01 <moshele> hi everyone
13:01:04 <lennyb> hi
13:01:08 <lbeliveau> hello
13:01:45 <moshele> #topic  Improving SR-IOV/PCI Passthrough CI
13:02:02 <moshele> lennyb any update on the Mellanox CI?
13:02:35 <lennyb> moshele: it did not work out with Ubuntu 14.04 due to a vfio issue, so I am trying Fedora 23 now
13:02:53 <lennyb> I have assistance from wznoinsk
13:03:13 <moshele> lennyb ok thanks
13:03:40 <moshele> I also noticed that the Intel NFV CI is down
13:04:05 <lennyb> moshele: yep, they found the problem; I guess it will be up soon
13:04:18 <moshele> ok cool
13:04:31 <moshele> anything more about CI?
13:04:50 <lennyb> nothing from me
13:05:14 <moshele> the Intel PCI CI added resize to the experimental tests, and now it's passing with the resize fix
13:06:26 <moshele> I have asked sean-k-mooney to join the SR-IOV meeting; we'll see if he responds
13:06:30 <moshele> let's move on
13:07:08 <moshele> #topic Documentation
13:07:31 <moshele> so I think all the documentation patches are merged
13:08:03 <moshele> the only one left is the PF passthrough doc for nova and neutron
13:08:06 <lbeliveau> I think so as well, the only one missing is documenting PCI passthrough with a neutron port
13:08:16 <lbeliveau> I can take care of it later this week
13:08:35 <moshele> lbeliveau ok cool
13:08:48 <moshele> lbeliveau: do you know if it's working?
13:09:07 <lbeliveau> moshele: no, I have to try
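For reference, a minimal sketch of the workflow that doc would cover: PF passthrough is requested by creating a neutron port with vnic_type set to direct-physical and booting an instance against that port. This assumes python-neutronclient's v2.0 API; the credentials and network UUID below are placeholders.

```python
from neutronclient.v2_0 import client as neutron_client

# Placeholder credentials/endpoint; substitute your own keystone details.
neutron = neutron_client.Client(username='admin', password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')

# The vnic_type drives the PCI request: 'direct' asks for a VF,
# 'direct-physical' asks for the whole PF.
port = neutron.create_port({'port': {
    'network_id': 'NETWORK_UUID',  # placeholder
    'binding:vnic_type': 'direct-physical',
}})

# Boot against the port (e.g. nova boot --nic port-id=<id> ...) and verify
# the instance actually gets the PF attached.
print(port['port']['id'])
```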
13:09:33 <moshele> ok, let's move to bug fixes
13:09:51 <moshele> #topic Bug Fixes
13:10:10 <moshele> so I asked dansmith to review some of the resize patches
13:10:41 <moshele> no major issues yet  :)
13:10:42 <lbeliveau> looks like it should get merged pretty soon
13:10:49 <moshele> I hope
13:11:12 <moshele> lbeliveau: did you start looking at migration with your patch?
13:11:31 <lbeliveau> moshele: no but will do either today or tomorrow
13:11:39 <lbeliveau> will let you know how it goes
13:12:45 <moshele> lbeliveau: I think you will need to update _update_usage_from_migration with the sign value for the migration
13:13:15 <lbeliveau> noted
13:13:55 <moshele> if you need help we can talk on skype
13:14:00 <moshele> ok anything else on bugs?
13:14:02 <lbeliveau> cool
13:14:06 <lbeliveau> not from me
13:14:38 <moshele> #topic Specs for review
13:15:01 <lbeliveau> haven't updated my spec yet ...  hopefully I'll do it this week
13:15:18 <moshele> ok, once it's ready I will review it
13:15:45 <lbeliveau> gjayavelu: you there ?
13:15:51 <gjayavelu> yes lbeliveau
13:16:08 <gjayavelu> i can try to answer if there are questions on my spec
13:16:19 <lbeliveau> gjayavelu: you pinged me last week about your spec; sorry, I think we are not in the same time zone
13:16:43 <gjayavelu> oh no problem. just wanted to chat about performance impact
13:16:44 <moshele> gjayavelu: yes, I have a question regarding the sub node with migration and resize
13:16:55 <gjayavelu> sure
13:17:44 <moshele> currently nova-compute claims the pci_devices for the new flavor on resize
13:18:10 <moshele> how will this work with a sub node without changing the PCI claim method?
13:18:48 <gjayavelu> live migration is not supported on vSphere, so for cold migration we have to detach the device and re-attach it by querying the pci manager
13:19:27 <moshele> but you need to get the same one on the sub node, right?
13:20:24 <moshele> so you need to filter according to the sub node as well, here: https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L172
13:20:30 <gjayavelu> when the pci manager allocates a new pci device from the pool, it should have the address, vendor, and sub_node too
13:21:27 <gjayavelu> yes, to ensure the device comes from the same sub_node, it should be filtered
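A rough sketch of the filtering gjayavelu describes, in the spirit of the spec matching in the manager.py code linked above; the device dicts and the sub_node key are hypothetical, and nova's real PCI pools look different.

```python
def filter_free_devices(free_devs, request_spec):
    # Keep only devices whose properties match every key in the request;
    # a hypothetical sub_node key thus pins the allocation to one ESX host
    # inside the cluster-wide compute node.
    return [dev for dev in free_devs
            if all(dev.get(k) == v for k, v in request_spec.items())]


free_devs = [
    {'address': '0000:81:00.1', 'vendor_id': '15b3', 'sub_node': 'esx-1'},
    {'address': '0000:82:00.1', 'vendor_id': '15b3', 'sub_node': 'esx-2'},
]
print(filter_free_devices(free_devs, {'vendor_id': '15b3', 'sub_node': 'esx-1'}))
# -> only the device on esx-1
```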
13:23:29 <gjayavelu> moshele: please let me know if I'm not clear
13:23:58 <moshele> gjayavelu: that's why I was wondering if we should add this as a field to pci device and not to extra spec
13:24:57 <moshele> I wonder if other virt drivers have the concept of a sub node
13:25:17 <gjayavelu> moshele: sure. I can add. I was thinking it might be a concern since it is specific to vmware driver only
13:25:24 <gjayavelu> moshele: no. not that i know of
13:25:46 <gjayavelu> this is only because a cluster is exposed as the compute node instead of an esx host
13:26:57 <moshele> I know that the ironic driver does something similar: one nova-compute serves several bare metal nodes
13:27:13 <moshele> but the resource tracking there is different
13:27:29 <moshele> anyway, let me review the spec again
13:27:40 <gjayavelu> oh ok. I will look into that
13:28:19 <lbeliveau> I also need to review your latest version
13:28:41 <gjayavelu> sure. thanks lbeliveau moshele
13:29:51 <moshele> there are also 3 other PCI/SR-IOV specs: 1. https://review.openstack.org/#/c/139910/
13:30:05 <moshele> ^ SR-IOV attach/detach
13:30:32 <moshele> XenAPI: support VGPU via passthrough PCI - https://review.openstack.org/#/c/280099/
13:31:07 <moshele> PCI passthrough device role tagging - https://review.openstack.org/#/c/307028/
13:31:22 <moshele> if you have time you can review them
13:31:36 <lbeliveau> will do
13:31:43 <moshele> I will add them to the SR-IOV meeting agenda
13:32:05 <gjayavelu> that would help.
13:32:15 <moshele> anything else on specs?
13:32:34 <gjayavelu> nothing from me.
13:32:37 <lbeliveau> I'm good
13:32:58 <moshele> ok I think we are done here
13:33:15 <moshele> #endmeeting