09:31:55 #startmeeting XenAPI
09:31:55 Meeting started Wed Nov 11 09:31:55 2015 UTC and is due to finish in 60 minutes. The chair is BobBall. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:31:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:31:58 The meeting name has been set to 'xenapi'
09:32:01 johnthetubaguy: pingity ping :)
09:32:26 hello
09:32:39 morning huanxie
09:32:50 Anyone else around?
09:32:51 hello, everyone.
09:32:57 * johnthetubaguy waves
09:33:41 Howdy
09:33:47 Also huazhihao I hope!
09:34:12 Hi Bob
09:34:21 Hi all
09:34:29 #link https://wiki.openstack.org/wiki/Meetings/XenAPI is the agenda
09:34:58 #topic blueprints
09:35:06 So - I think we have one blueprint to focus on
09:35:18 jianghuawang - what's the status of the VGPU blueprint?
09:35:45 yes, I'm reworking the vGPU spec based on the review comments.
09:36:49 OK - as we've discussed, johnthetubaguy has recently highlighted the dates for spec freeze
09:37:02 we really want VGPU in Mitaka so we can start making progress again :)
09:37:05 two changes compared to the last reviewed version: add an enable flag in flavor and image; specify the vGPU model in the image metadata.
09:37:43 not sure why you would not just do both in the flavor, but I guess I should read the spec
09:38:00 another change is to consider the resource object model which will be done in Mitaka.
09:38:21 usually the flavor says what you are allowed, the image specifies what it needs; after that it's a bit sketchy
09:38:23 johnthetubaguy: danp wanted the split so an operator didn't have to duplicate all flavors to add vGPU
09:38:41 right, but some admins will need to do that
09:38:47 johnthetubaguy: that was a comment on the spec at the moment
09:39:11 yeah, we don't really agree on that stuff right now, it's very very messy
09:39:37 the key thing is to try not to get distracted by everything that's moving around in that area right now
09:39:58 currently trying to re-define the concept of flavor and image properties (in a backwards compatible way)
09:40:27 OK - so what do we do for this spec
09:40:52 do we stick with the "old" way?
09:41:06 BobBall: we should have raised this at the summit, in the unconference style stuff, to argue it out I guess
09:41:12 in which case we'd need your comments on the spec saying we should stick to the old way
09:41:24 unsure, I have to go read the spec
09:41:32 currently working my way through the older ones
09:41:32 Hindsight is an exact science :)
09:41:59 OK - is the VGPU spec on your todo list? And do you know when you might get to it? Just concerned about meeting the timescales
09:42:18 every spec is on my todo list, effectively
09:42:19 the link is: https://review.openstack.org/#/c/229351/2/specs/mitaka/approved/xenapi-add-support-for-vgpu.rst
09:42:23 john, thanks.
09:43:14 Any idea how long it might take to get to this spec? Is it something you're ploughing through this week, or expecting to take a couple of weeks to get through?
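(A minimal sketch of the flavor/image split discussed above. The key names `vgpu:enabled` and `vgpu_model` are hypothetical, invented for illustration — the spec under review defines its own keys.)

```python
def requested_vgpu_model(flavor_extra_specs, image_properties):
    """Decide whether a boot request wants a vGPU, and which model.

    Hypothetical keys, for illustration only:
      - flavor extra spec 'vgpu:enabled' gates whether the flavor allows vGPU
      - image property 'vgpu_model' names the model the image needs
    """
    if flavor_extra_specs.get('vgpu:enabled') != 'true':
        return None  # flavor does not allow vGPU at all
    # image says what it needs; may be None if the image doesn't care
    return image_properties.get('vgpu_model')


# Example: flavor allows vGPU, image asks for a specific model
model = requested_vgpu_model({'vgpu:enabled': 'true'},
                             {'vgpu_model': 'GRID K180Q'})
```

This captures danp's point: only the flavors that opt in need an extra spec, and the per-model detail lives on the image, so operators don't duplicate every flavor per model.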
09:43:19 BobBall: for next time, I recommend for any spec with an unresolved -1 trying to raise the issue at the summit; it's good for clearing the air
09:43:43 I didn't think it was an unresolved -1 at the time of the summit, but maybe I'm mis-remembering the timescales
09:43:44 BobBall: we have 110 specs, ish; takes me 30 mins per spec, on average
09:44:00 I have chosen not to work that out, because it's depressing
09:44:30 OK; but clearly if it takes 2 weeks to review the spec there is a very high risk that it's going to miss the deadline you published
09:45:09 yes, things not merged by the summit are at risk, roughly
09:45:19 anyway, let's go through this spec in real time
09:45:37 I think we should cover why vGPU and GPU passthrough are different
09:45:42 OK, that'd be fantastic, thanks
09:45:55 There is no support for GPU passthrough; just generic passthrough
09:45:59 which happens to work for GPU
09:46:10 correct, and that's a difference
09:46:11 We can add a comment at the top explaining that there are no PCI devices for VGPU
09:46:19 yeah
09:46:26 in the problem description
09:46:45 Nova supports passthrough of GPU PCI devices, but vGPU is different because...
09:46:49 etc
09:46:59 that's in the spec already: there are no PCI devices for VGPU
09:47:02 *nod* jianghuawang - taking notes, yes? :)
09:47:13 oh yeah, I missed that bit
09:47:18 oops, sorry
09:47:45 that project priority section can be removed
09:47:46 perhaps we could make it more obvious
09:47:50 it's been dropped from the template
09:47:55 Yup - that's a comment in the spec already
09:48:06 BobBall: I would probably put that first in the problem spec, yeah
09:48:49 do vGPUs have a size?
09:48:56 Ok. I will make it more obvious.
09:48:56 or is it more like vCPUs?
09:49:02 They have a 'model' which sort of defines the size
09:49:31 the model could include definitions of the size (e.g. the K160Q model has a defined size)
09:49:50 do most cards offer a single model?
09:49:53 so it's more like vCPUs
09:49:59 no; K1+K2 offer a range
09:50:17 #link http://support.citrix.com/servlet/KbServlet/download/38329-102-714577/XenServer-6.5.0-graphics.pdf
09:50:27 right, we should capture those details in the use case section: here is a typical setup and what it might offer
09:50:37 PS, it's super useful for desktop as a service, etc
09:51:14 OK - so we can add an overview of how a "model" works
09:51:37 and an example of the DaaS use case
09:51:56 well just an example setup, and the available resources would do the trick I think
09:52:11 GRID K180Q
09:52:11 Designer/Power User
09:52:13 2560x1600
09:52:34 That's one of the "models" (but I thought it would paste on one line)
09:52:40 so when you use a larger size, I guess it reduces the other models too?
09:52:49 no; you define the models in advance
09:53:05 So that's why it's much more like vCPUs than memory
09:53:18 oh... you define the split you want on setup
09:53:22 yes
09:54:03 But it's ok - even if we change that in XAPI, the approach that we're proposing will capture the latest number of 'free' vGPUs of each type
09:54:23 We have 5 minutes left in this meeting in theory
09:54:24 oh, that's a XAPI modelling thing
09:55:02 so I can't see why people would configure two models, FWIW; it's just too complicated, but we should cover that case
09:55:25 Agreed; the OpenStack code will be completely independent of the number of models and we can make that clear in the spec.
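(A sketch of the "latest number of free vGPUs of each type" idea, later in the meeting suggested to be a JSON string. The model names and the `vgpus` capability key are illustrative assumptions, not from the spec.)

```python
import json

# Hypothetical snapshot of free vGPUs per model on one host; the proposal
# is for the resource tracker to refresh something like this, so the
# scheduler always sees the latest free counts.
free_vgpus = {'GRID K180Q': 2, 'GRID K160Q': 4}


def capabilities_entry(free):
    """Serialise the per-model free counts into a JSON string suitable
    for a host capabilities field."""
    return json.dumps(free, sort_keys=True)


def free_for_model(serialised, model):
    """Scheduler side: deserialise and look up one model's free count."""
    return json.loads(serialised).get(model, 0)
```

Because the data is just model -> free pairs, the OpenStack side stays independent of how many models the admin configured in XAPI, as agreed above.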
09:55:26 so I guess pick a card, and describe two example setups in that use cases section
09:55:37 OK
09:55:54 the bit I was curious about: what if you report 2xsmall 1xlarge, and you use the large, and the 2xsmall go away
09:56:03 but it seems like you have avoided that case with the current system
09:56:29 It would just work
09:56:51 because the proposal is for the resources to be updated by the resource checker
09:57:02 so the 2xsmall would disappear when the 1xlarge was used
09:57:08 so, do you plan to create a versioned object to describe the vGPU capabilities
09:57:09 but currently I don't think we can configure that
09:57:23 BobBall: that still horribly breaks the scheduler
09:57:26 No - we were suggesting adding it to capabilities
09:57:37 It shouldn't, as the scheduler always checks the latest resources
09:58:05 says add new field into the table right?
09:58:13 yes
09:58:30 BobBall: it's races; you send 2xsmall and 1xlarge to the same box, and they race each other to claim the resources
09:58:52 so the new field should be modelled by a new object right?
09:59:15 Fair enough; well that's a future question since XAPI only uses a single model
09:59:24 I think the suggestion is it's a json string? is that right jianghuawang?
09:59:41 If we need a new object, then we can add that to the spec
09:59:42 yes.
10:00:04 but if we have a new object I think it'd mean a new custom filter
10:00:19 rather than re-using the existing capabilities / instance filter?
10:00:43 BobBall: not sure if that's true; it could serialize into a string for the capabilities
10:01:27 hmmmz - are there any examples of that currently?
10:01:43 I know of the numa example for the full-object new-filter approach
10:03:02 sorry, I don't understand why a new filter is needed here.
10:03:23 jianghuawang: me too, seems like we can keep the same filter, but not 100% sure I get the conversation now
10:03:41 maybe I'm confused :)
10:03:42 BobBall: unsure, it's all heading that way though
10:03:49 objects I mean
10:04:08 I thought if we converted the string to an object that we couldn't use the instance filter - but if we can use it then even better
10:04:19 OK - so you want the vGPU to be defined as a very lightweight object
10:04:28 or can it just be a dict?
10:04:41 basically it's a list of model -> free pairs
10:05:00 but you want it as an object so it can be versioned?
10:05:27 BobBall: I was thinking an o.vo object that defines the fields, so it's clear what should be in it
10:05:39 the reason we do it is to make maintaining upgrades easier
10:05:45 OK
10:05:45 John, do you mean the resource-object model which will be implemented in Mitaka?
10:05:50 it's clear when the format changes and how to convert
10:05:54 https://review.openstack.org/#/c/239772/1/specs/mitaka/approved/resource-objects.rst
10:06:02 ok
10:06:11 which will be implemented by this spec?
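(To make the "o.vo object that defines the fields" point concrete, here is a plain-Python stand-in for the oslo.versionedobjects pattern — a real implementation would subclass `oslo_versionedobjects.base.VersionedObject` and declare `fields`; the class and field names here are illustrative assumptions.)

```python
class VGPUPool:
    """Stand-in sketch of a versioned object for one model's vGPU pool.

    The VERSION tag travels with the serialised data, so when the format
    changes it's clear how to convert - that's the upgrade-maintenance
    benefit johnthetubaguy describes.
    """

    VERSION = '1.0'

    def __init__(self, model, free):
        self.model = model   # e.g. 'GRID K180Q'
        self.free = free     # vGPUs of this model still unallocated

    def obj_to_primitive(self):
        """Serialise, tagging the payload with the format version."""
        return {'version': self.VERSION,
                'data': {'model': self.model, 'free': self.free}}

    @classmethod
    def obj_from_primitive(cls, primitive):
        # Real o.vo would run a conversion here on version mismatch.
        assert primitive['version'] == cls.VERSION
        d = primitive['data']
        return cls(d['model'], d['free'])
```

Compare nova's `ComputeNode.pci_device_pools` (a `PciDevicePoolList` object), linked below in the meeting, for how nova does this in practice.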
10:06:12 jianghuawang: I think you can do your own thing before that merges, just using the same framework
10:06:55 oslo.versionedobjects is just a lib that gives us the modelling to version the data format, basically
10:07:11 so I could be seeing this all wrong, I need to go dig a little
10:07:29 We can dig too
10:07:41 ah, here we go
10:07:42 https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L81
10:07:45 I think that just leaves the question of how to specify that we want a vGPU - flavor vs instance
10:07:56 pci_device_pools is a PciDevicePoolList object
10:08:08 BobBall: I think hard-code the whole thing in the flavor, as a first attempt
10:08:22 the flavor decomposition work should fix the issues that creates
10:08:32 matches what we do with PCI passthrough right now, as I understand it
10:08:48 OK - I'll comment on the spec linking to this IRC chat and explaining why we're doing it in the flavor for now
10:09:03 Does flavor decomposition have a spec for mitaka? or is it post-mitaka?
10:10:07 I think it's for mitaka
10:10:20 but honestly, it feels more like an N thing, if we are realistic
10:10:24 OK, so we'll need to link to that spec in our VGPU spec
10:10:28 Understood
10:10:35 not sure you need a link
10:11:05 I think just say in the alternatives section that you note there is a problem with too many flavors, but that it can be sorted by the flavor decomposition work at a later date
10:11:17 ok
10:11:53 ok
10:12:05 So - in summary...
10:12:21 1) a few more details in the justification, explaining models and why VGPU has no PCI devices
10:12:32 2) Convert to use an object
10:12:47 3) Keep the flavor definition as defined in the current spec but talk about why in the alternatives section
10:13:12 yeah, I think that's it
10:13:17 Is that right, or did I miss something?
10:13:41 Perfect - many thanks for the live review johnthetubaguy :)
10:14:00 add my thanks.
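(A sketch of the "keep the same filter" outcome: a capabilities-style check matching a flavor's hard-coded vGPU request against the host's serialised free counts. The `vgpus` capability key and `vgpu:model` extra spec are hypothetical names, assumed for illustration.)

```python
import json


def host_passes(host_capabilities, flavor_extra_specs):
    """Would this host satisfy the flavor's (hypothetical) vGPU request?

    host_capabilities['vgpus'] holds the JSON string of model -> free
    counts reported by the driver; the flavor extra spec 'vgpu:model'
    names the model the flavor hard-codes, per the decision above.
    """
    wanted = flavor_extra_specs.get('vgpu:model')
    if wanted is None:
        return True  # flavor doesn't ask for a vGPU; nothing to check
    free = json.loads(host_capabilities.get('vgpus', '{}'))
    return free.get(wanted, 0) > 0
```

Because the free counts are refreshed by the resource tracker, this re-check against the latest report is what narrows (but, per the race discussion above, does not fully close) the window for two requests claiming the same device.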
10:14:05 We were also going to cover Neutron + MOS but we're 15 minutes over and I'm over-running for a meeting with huazhihao
10:14:18 many thanks
10:14:21 so the capabilities filter thing is an open question I think
10:14:31 but let's get the other bits sorted first
10:14:42 Quick potted summary... we've got a very interesting bugfix for the Neutron two-port issue, in the Nova VIF device
10:14:59 We've also got a bugfix for the Neutron/Nova races
10:15:08 using the event system?
10:15:10 both are quite recent and not quite ready for core review
10:15:11 yes
10:15:13 cool
10:15:24 do you have the XenAPI group in the etherpad yet?
10:15:35 #link https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
10:15:36 we did in Liberty; not sure if it's there in the Mitaka etherpad
10:15:51 if you haven't added it, it's not there, I suspect
10:15:53 It's not
10:16:21 johnthetubaguy: yes, the nova/neutron race condition fix uses event notification
10:17:20 We will populate those
10:17:21 thanks johnthetubaguy
10:17:24 We'll let you go now.
10:17:37 #endmeeting