09:31:55 <BobBall> #startmeeting XenAPI
09:31:55 <openstack> Meeting started Wed Nov 11 09:31:55 2015 UTC and is due to finish in 60 minutes.  The chair is BobBall. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:31:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:31:58 <openstack> The meeting name has been set to 'xenapi'
09:32:01 <BobBall> johnthetubaguy: pingity ping :)
09:32:26 <huanxie> hello
09:32:39 <BobBall> morning huanxie
09:32:50 <BobBall> Anyone else around?
09:32:51 <jianghuawang> hello, everyone.
09:32:57 * johnthetubaguy waves
09:33:41 <BobBall> Howdy
09:33:47 <BobBall> Also huazhihao I hope!
09:34:12 <huazhihao> Hi Bob
09:34:21 <huazhihao> Hi all
09:34:29 <BobBall> #link https://wiki.openstack.org/wiki/Meetings/XenAPI is the agenda
09:34:58 <BobBall> #topic blueprints
09:35:06 <BobBall> So - I think we have one blueprint to focus on
09:35:18 <BobBall> jianghuawang - what's the status of the VGPU blueprint?
09:35:45 <jianghuawang> yes, I'm reworking the vGPU spec based on the review comments.
09:36:49 <BobBall> OK - as we've discussed, johnthetubaguy has recently highlighted the dates for spec freeze
09:37:02 <BobBall> we really want VGPU in Mitaka so we can start making progress again :)
09:37:05 <jianghuawang> two changes compared to the last review version: add an enable flag in flavor and image; specify the vGPU model in the image metadata.
09:37:43 <johnthetubaguy> not sure why you would not just do both in the flavor, but I guess I should read the spec
09:38:00 <jianghuawang> another change is to consider the resource object model which will be done in Mitaka.
09:38:21 <johnthetubaguy> usually flavor says what you are allowed, image specifies what it needs, after that its a bit sketchy
09:38:23 <BobBall> johnthetubaguy: danp wanted the split so an operator didn't have to duplicate all flavors to add vGPU
09:38:41 <johnthetubaguy> right, but some admins will need to do that
09:38:47 <BobBall> johnthetubaguy: that was a comment on the spec at the moment
09:39:11 <johnthetubaguy> yeah, we don't really agree on that stuff right now, its very very messy
09:39:37 <johnthetubaguy> key thing is to try not to get distracted by everything that's moving around in that area right now
09:39:58 <johnthetubaguy> currently trying to re-define the concept of flavor and image properties (in a backwards compatible way)
09:40:27 <BobBall> OK - so what do we do for this spec
09:40:52 <BobBall> do we stick with the "old" way?
09:41:06 <johnthetubaguy> BobBall: we should have raised this at the summit, in the unconference style stuff, to argue it out I guess
09:41:12 <BobBall> in which case we'd need your comments on the spec saying we should stick to the old way
09:41:24 <johnthetubaguy> unsure, I have to go read the spec
09:41:32 <johnthetubaguy> currently working my way through the older ones
09:41:32 <BobBall> Hindsight is an exact science :)
09:41:59 <BobBall> OK - is the VGPU spec on your todo list? And do you know when you might get to it?  Just concerned about meeting the timescales
09:42:18 <johnthetubaguy> every spec is on my todo list, effectively
09:42:19 <jianghuawang> the link is: https://review.openstack.org/#/c/229351/2/specs/mitaka/approved/xenapi-add-support-for-vgpu.rst
09:42:23 <jianghuawang> john, thanks.
09:43:14 <BobBall> Any idea how long it might take to get to this spec?  Is it something you're ploughing through this week, or expecting to take a couple of weeks to get through?
09:43:19 <johnthetubaguy> BobBall: for next time, I recommend any spec with an unresolved -1 to try and raise the issue at the summit, its good for clearing the air
09:43:43 <BobBall> I didn't think it was an unresolved -1 at the time of the summit, but maybe I'm mis-remembering the timescales
09:43:44 <johnthetubaguy> BobBall: we have 110 specs, ish, takes me 30 mins per spec, on average
09:44:00 <johnthetubaguy> I have chosen not to work that out, because its depressing
09:44:30 <BobBall> OK; but clearly if it takes 2 weeks to review the spec there is a very high risk that it's going to miss the deadline you published
09:45:09 <johnthetubaguy> yes, things not merged by the summit are at risk, roughly
09:45:19 <johnthetubaguy> anyways, lets go through this spec in real time
09:45:37 <johnthetubaguy> I think we should cover why vGPU and GPU passthrough are different
09:45:42 <BobBall> OK, that'd be fantastic, thanks
09:45:55 <BobBall> There is no support for GPU passthrough; just generic passthrough
09:45:59 <BobBall> which happens to work for GPU
09:46:10 <johnthetubaguy> correct, and thats a difference
09:46:11 <BobBall> We can add a comment at the top explaining that there are no PCI devices for VGPU
09:46:19 <johnthetubaguy> yeah
09:46:26 <johnthetubaguy> in the problem description
09:46:45 <johnthetubaguy> Nova supports passthrough of GPU PCI devices, but vGPU is different because...
09:46:49 <johnthetubaguy> etc
09:46:59 <jianghuawang> that's in the spec already: there are no PCI devices for VGPU
09:47:02 <BobBall> *nod* jianghuawang - taking notes, yes? :)
09:47:13 <johnthetubaguy> oh yeah, I missed that bit
09:47:18 <johnthetubaguy> oops, sorry
09:47:45 <johnthetubaguy> that project priority section can be removed
09:47:46 <BobBall> perhaps we could make it more obvious
09:47:50 <johnthetubaguy> its been dropped from the template
09:47:55 <BobBall> Yup - that's a comment in the spec already
09:48:06 <johnthetubaguy> BobBall: I would probably put that first in the problem spec, yeah
09:48:49 <johnthetubaguy> do vGPUs have a size?
09:48:56 <jianghuawang> Ok. I will make it to be more obvious.
09:48:56 <johnthetubaguy> or is it more like vCPUs?
09:49:02 <BobBall> They have a 'model' which sort of defines the size
09:49:31 <BobBall> the model could include definitions of the size (e.g. the K160Q model has a defined size)
09:49:50 <johnthetubaguy> do most cards offer a single model?
09:49:53 <BobBall> so it's more like vCPUs
09:49:59 <BobBall> no; K1+K2 offer a range
09:50:17 <BobBall> #link http://support.citrix.com/servlet/KbServlet/download/38329-102-714577/XenServer-6.5.0-graphics.pdf
09:50:27 <johnthetubaguy> right, we should capture those details in the use case section, here is a typical setup and what it might offer
09:50:37 <johnthetubaguy> PS, it's super useful for desktop as a service, etc
09:51:14 <BobBall> OK - so we can add an overview of how a "model" works
09:51:37 <BobBall> and an example of the DaaS use case
09:51:56 <johnthetubaguy> well just an example setup, and the available resources would do the trick I think
09:52:11 <BobBall> GRID K180Q
09:52:11 <BobBall> Designer/Power User
09:52:13 <BobBall> 2560x1600
09:52:34 <BobBall> That's one of the "models" (but I thought it would paste on one line)
09:52:40 <johnthetubaguy> so when you use a larger size, I guess it reduces the other models too?
09:52:49 <BobBall> no; you define the models in advance
09:53:05 <BobBall> So that's why its much more like vCPUs than memory
09:53:18 <johnthetubaguy> oh... you define the split you want on setup
09:53:22 <BobBall> yes
09:54:03 <BobBall> But it's ok - even if we change that in XAPI, the approach that we're proposing will capture the latest number of 'free' vGPUs of each type
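The bookkeeping Bob describes here - models are fixed at setup time, and the code just tracks the latest number of free vGPUs per model - can be sketched in plain Python. All names below (`free_vgpus`, the model names and counts) are illustrative, not from the spec or from Nova's actual resource tracker:

```python
# Hypothetical sketch of the per-model free-vGPU accounting: the split
# of models is defined in advance (like vCPUs, not like memory), and the
# free count for each model drops as instances claim vGPUs of that model.
def free_vgpus(capacity, allocated):
    """Return the free vGPU count per model.

    capacity:  dict mapping model name -> total vGPUs configured at setup
    allocated: dict mapping model name -> vGPUs already claimed
    """
    return {model: total - allocated.get(model, 0)
            for model, total in capacity.items()}

capacity = {"K160Q": 4, "K180Q": 2}   # example split; counts are made up
allocated = {"K180Q": 1}
print(free_vgpus(capacity, allocated))  # {'K160Q': 4, 'K180Q': 1}
```

Because each model's pool is configured independently, claiming a "K180Q" vGPU leaves the "K160Q" count untouched, which is why the interdependent 2xsmall/1xlarge case discussed below doesn't arise with the current XAPI setup.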
09:54:23 <BobBall> We have 5 minutes left in this meeting in theory
09:54:24 <johnthetubaguy> oh, thats a XAPI modeling thing
09:55:02 <johnthetubaguy> so I can't see why people would configure two models, FWIW, its just too complicated, but we should cover that case
09:55:25 <BobBall> Agreed; the OpenStack code will be completely independent of the number of models and we can make that clear in the spec.
09:55:26 <johnthetubaguy> so I guess pick a card, and describe two example setups in that use cases section
09:55:37 <BobBall> OK
09:55:54 <johnthetubaguy> the bit I was curious about: what if you report 2xsmall 1xlarge, and you use the large and the 2xsmall go away
09:56:03 <johnthetubaguy> but seems like you have avoided that case with the current system
09:56:29 <BobBall> It would just work
09:56:51 <BobBall> because the proposal is for the resources to be updated by the resource checker
09:57:02 <BobBall> so the 2xsmall would disappear when the 1xlarge was used
09:57:08 <johnthetubaguy> so, do you plan to create a versioned object to describe the vGPU capabilities
09:57:09 <BobBall> but currently I don't think we can configure that
09:57:23 <johnthetubaguy> BobBall: that still horribly breaks the scheduler
09:57:26 <BobBall> No - we were suggesting adding it to capabilities
09:57:37 <BobBall> It shouldn't as the scheduler always checks the latest resources
09:58:05 <johnthetubaguy> says add new field into the table right?
09:58:13 <BobBall> yes
09:58:30 <johnthetubaguy> BobBall: its races, you send 2xsmall and 1xlarge to the same box, and they race each other to claim the resources
09:58:52 <johnthetubaguy> so the new field should be modeled by a new object right?
09:59:15 <BobBall> Fair enough; well that's a future question since XAPI only uses a single model
09:59:24 <BobBall> I think the suggestion is it's a json string? is that right jianghuawang?
09:59:41 <BobBall> If we need a new object, then we can add that to the spec
09:59:42 <jianghuawang> yes.
10:00:04 <BobBall> but if we have a new object I think it'd mean a new custom filter
10:00:19 <BobBall> rather than re-using the existing capabilities / instance filter ?
10:00:43 <johnthetubaguy> BobBall: not sure if thats true, it could serialize into a string for the capabilities
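The "serialize into a string for the capabilities" idea mentioned above can be sketched with the stdlib `json` module. The `vgpu_free` key and the model counts are assumptions for illustration, not the field names the spec actually proposes:

```python
import json

# Hypothetical sketch: pack the per-model free-vGPU counts into a JSON
# string so the existing capabilities filter could match against it,
# instead of requiring a new custom filter.
vgpu_free = {"K160Q": 4, "K180Q": 2}
capabilities = {"vgpu_free": json.dumps(vgpu_free, sort_keys=True)}

# The filter side would deserialize the string before checking a request:
restored = json.loads(capabilities["vgpu_free"])
assert restored == vgpu_free
```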
10:01:27 <BobBall> hmmmz - are there any examples of that currently?
10:01:43 <BobBall> I know of the numa example for the full-object new-filter approach
10:03:02 <jianghuawang> sorry, I don't understand why a new filter is needed here.
10:03:23 <johnthetubaguy> jianghuawang: me too, seems like we can keep the same filter, but not 100% sure I get the conversation now
10:03:41 <BobBall> maybe I'm confused :)
10:03:42 <johnthetubaguy> BobBall: unsure, its all heading that way though
10:03:49 <johnthetubaguy> objects I mean
10:04:08 <BobBall> I thought if we converted the string to an object that we couldn't use instancefilter - but if we can use it then even better
10:04:19 <BobBall> OK - so you want the vGPU to be defined as a very light weight object
10:04:28 <BobBall> or can it just be a dict?
10:04:41 <BobBall> basically it's a list of model -> free pairs
10:05:00 <BobBall> but you want it as an object so it can be versioned?
10:05:27 <johnthetubaguy> BobBall: I was thinking an o.vo object that defines the fields, so its clear what should be in it
10:05:39 <johnthetubaguy> the reason we do it, is to make maintaining upgrade easier
10:05:45 <BobBall> OK
10:05:45 <jianghuawang> John, do you mean the resource-object model which will be implemented in Mitaka?
10:05:50 <johnthetubaguy> its clear when the format changes and how to convert
10:05:54 <jianghuawang> https://review.openstack.org/#/c/239772/1/specs/mitaka/approved/resource-objects.rst
10:06:02 <BobBall> ok
10:06:11 <jianghuawang> which will be implemented by this spec?
10:06:12 <johnthetubaguy> jianghuawang: I think we can do your own thing before that merges, just using the same framework
10:06:55 <johnthetubaguy> oslo.versionedobjects is just a lib that gives us the modelling to version the data format, basically
10:07:11 <johnthetubaguy> so I could be seeing this all wrong, I need to go dig a little
10:07:29 <BobBall> We can dig too
10:07:41 <johnthetubaguy> ah, here we go
10:07:42 <johnthetubaguy> https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L81
10:07:45 <BobBall> I think that just leaves the question of how to specify that we want a vGPU - flavor vs instance
10:07:56 <johnthetubaguy> pci_device_pools is a PciDevicePoolList object
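The versioned-object idea - declare the fields and a version up front so format changes and upgrade conversions stay explicit, as `pci_device_pools` already does for PCI - would really use oslo.versionedobjects. Since that library may not be available everywhere, here is a plain-stdlib approximation; the class and field names (`VGPUPool`, `model`, `total`, `free`) are hypothetical, not from the spec:

```python
from dataclasses import dataclass

# Stdlib sketch of what an oslo.versionedobjects-style object provides:
# explicitly declared fields plus a VERSION string, so a change to the
# data format bumps the version and the upgrade path is visible.
@dataclass
class VGPUPool:
    VERSION = "1.0"  # bumped whenever the field layout changes
    model: str       # e.g. "K160Q"
    total: int       # vGPUs configured for this model at setup
    free: int        # vGPUs currently unclaimed

    def to_primitive(self):
        # Serializable form, tagged with the object version so older
        # readers can detect and convert newer payloads.
        return {"version": self.VERSION, "model": self.model,
                "total": self.total, "free": self.free}

pool = VGPUPool(model="K160Q", total=4, free=3)
print(pool.to_primitive())
```

In real Nova code this would subclass `NovaObject` with `fields` declared via `oslo_versionedobjects.fields`, in the same style as the `PciDevicePoolList` linked above.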
10:08:08 <johnthetubaguy> BobBall: I think hard code the whole thing in flavor, as a first attempt
10:08:22 <johnthetubaguy> the flavor decomposition work should fix the issues that creates
10:08:32 <johnthetubaguy> matches what we do with PCI passthrough right now, as I understand it
10:08:48 <BobBall> OK - I'll comment on the spec linking to this IRC chat and explaining why we're doing it in flavor for now
10:09:03 <BobBall> Does flavor decomposition have a spec for mitaka? or is it post-mitaka?
10:10:07 <johnthetubaguy> I think its for mitaka
10:10:20 <johnthetubaguy> but honestly, it feels more like an N thing, if we are realistic
10:10:24 <BobBall> OK, so we'll need to link to that spec in our VGPU spec
10:10:28 <BobBall> Understood
10:10:35 <johnthetubaguy> not sure you need a link
10:11:05 <johnthetubaguy> I think just saying in the alternatives section that you note there is a problem with too many flavors, but that it can be sorted by flavor decomposition work at a later date
10:11:17 <BobBall> ok
10:11:53 <jianghuawang> ok
10:12:05 <BobBall> So - in summary...
10:12:21 <BobBall> 1) a few more details in the justification, explaining models and why VGPU has no PCI devices
10:12:32 <BobBall> 2) Convert to use an object
10:12:47 <BobBall> 3) Keep flavor definition as defined in the current spec but talk about why in the alternatives section
10:13:12 <johnthetubaguy> yeah, I think thats it
10:13:17 <BobBall> Is that right, or did I miss something?
10:13:41 <BobBall> Perfect - many thanks for the live review johnthetubaguy :)
10:14:00 <jianghuawang> add my thanks.
10:14:05 <BobBall> We were also going to cover Neutron + MOS but we're 15 minutes over and I'm over-running for a meeting with huazhihao
10:14:18 <huazhihao> many thanks
10:14:21 <johnthetubaguy> so the capabilities filter thing is an open question I think
10:14:31 <johnthetubaguy> but lets get the other bits sorted first
10:14:42 <BobBall> Quick potted summary... we've got a very interesting bugfix for the Neutron two-port issue, in the Nova VIF device
10:14:59 <BobBall> We've also got a bugfix for the Neutron/Nova races
10:15:08 <johnthetubaguy> using the event system?
10:15:10 <BobBall> both are quite recent and not quite ready for core review
10:15:11 <BobBall> yes
10:15:13 <johnthetubaguy> cool
10:15:24 <johnthetubaguy> do you have the XenAPI group in the etherpad yet?
10:15:35 <johnthetubaguy> #link https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
10:15:36 <BobBall> we did in Liberty not sure if it's there in the Mitaka etherpad
10:15:51 <johnthetubaguy> if you haven't added it, its not there, I suspect
10:15:53 <BobBall> It's not
10:16:21 <huanxie> johnthetubaguy: yes nova/neutron race condition uses event notification
10:17:20 <BobBall> We will populate those
10:17:21 <BobBall> thanks johnthetubaguy
10:17:24 <BobBall> We'll let you go now.
10:17:37 <BobBall> #endmeeting