17:05:39 #startmeeting XenAPI
17:05:40 Meeting started Wed Feb 13 17:05:39 2013 UTC. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:05:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:05:43 The meeting name has been set to 'xenapi'
17:06:15 hello everyone
17:06:22 hi
17:06:37 howdy
17:06:42 hi
17:06:48 I see there are a few things on the wiki page to talk about
17:06:53 #topic agenda
17:07:01 #link http://wiki.openstack.org/Meetings/XenAPI
17:07:08 #topic blueprints
17:07:40 Anyone got anything to raise about blueprints this week?
17:07:55 I have just added a summit session for the XenAPI roadmap for Havana
17:08:01 It seems one of my blueprints was delayed
17:08:10 you got a link?
17:08:18 #link https://blueprints.launchpad.net/nova/+spec/xenapi-volume-drivers
17:08:28 how much is left?
17:08:53 Not too much; the main problem is that I would rather concentrate on the glance integration
17:08:53 hi
17:09:04 That gives us new functionality.
17:09:19 that is the XenAPI Cinder driver right?
17:09:29 #link https://blueprints.launchpad.net/cinder/+spec/xenapinfs-glance-integration
17:09:34 Yes.
17:09:49 I posted a blog entry on xenapinfs - glance integration
17:09:57 #link http://blogs.citrix.com/2013/02/12/xenapinfs-integrated-with-glance/
17:09:59 cool, thanks
17:10:04 If anyone is interested.
17:10:35 That should help with documentation stuff I guess
17:10:35 So I would like to add the generic implementation as well, so it could deal with other image types too.
17:10:53 Anyone tried XenAPINFS?
17:10:55 I would put the driver refactor above generic glance integration myself, but that is your call
17:11:16 I tried it the one time, but not since you added the glance stuff!
17:11:22 Okay.
17:11:31 oh, btw, I set up CI jobs for it.
17:11:42 internal to Citrix CI right?
17:11:52 y, but leave it for the QA section
17:11:58 for sure
17:12:09 Okay, so that's it.
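For anyone who wants to answer the "Anyone tried XenAPINFS?" question themselves, a rough cinder.conf fragment is below. The driver path and option names are from memory of the Grizzly-era driver and the hostnames are placeholders, so treat all of it as an assumption and check the Cinder docs before use:

```ini
[DEFAULT]
# XenAPINFS: stores Cinder volumes as VHD files on an NFS share,
# attached to instances through XenAPI (assumed driver path)
volume_driver = cinder.volume.drivers.xenapi.sm.XenAPINFSDriver
xenapi_connection_url = http://xenserver.example.com   # hypothetical host
xenapi_connection_username = root
xenapi_connection_password = secret
xenapi_nfs_server = nfs.example.com                    # hypothetical NFS server
xenapi_nfs_serverpath = /export/cinder
```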
17:12:14 any more blueprints on the edge?
17:12:19 What other blueprints do we have pending?
17:13:14 not sure I spotted any
17:13:30 I just looked at the configdrive, it is marked as completed.
17:13:42 is it complete?
17:13:49 marked as implemented.
17:13:51 I guess there are edge cases to worry about?
17:14:04 I haven't tried it myself.
17:14:22 Do I need the latest cirros for trying that?
17:14:43 I see release 0.3.1
17:15:02 not sure
17:15:07 depends on cloud-init version
17:15:18 I see.
17:15:25 smoser should be able to tell you
17:15:51 ok.
17:16:02 #topic docs
17:16:23 I guess we need to check how the doc bugs for XenAPI are going
17:16:34 anyone fancy taking a look at some?
17:16:58 I guess we need something for the XenAPI NFS stuff; not sure how the Cinder docs are doing
17:17:34 John, how much time do we have for the doc-ing?
17:17:59 till release I guess, not sure how it will work this time
17:18:02 Or are those patches welcome at any time?
17:18:07 depends on any translation freezes
17:18:20 Oh, let me look at the schedule.
17:18:25 worst case it will just sit in a queue till it opens
17:18:32 they are good about backports
17:18:57 previous docs released after the code
17:19:03 #link http://wiki.openstack.org/GrizzlyReleaseSchedule
17:19:09 but I know there was a hope with string freezes to bring that forward
17:19:19 It doesn't show the translation freeze.
17:19:45 String freeze is to help the docs and translation, but not the docs translation I guess
17:19:48 Or is it the same?
17:19:50 ah
17:20:07 o/
17:20:11 sorry I'm late.
17:20:13 chatty today
17:20:24 hi
17:20:34 #link http://wiki.openstack.org/StringFreeze
17:20:42 Thierry says it best
17:21:14 pvo: hi!
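Since nobody in the meeting had tried the config drive yet: on the guest it appears as a block device labelled `config-2`, with instance metadata at `openstack/latest/meta_data.json` (that layout is the standard config-drive format). A minimal sketch for inspecting it, with a hypothetical mountpoint:

```python
import json
import os

def read_config_drive(mountpoint="/mnt/config"):
    """Parse instance metadata from a mounted config drive.

    Assumes the standard layout: <mountpoint>/openstack/latest/meta_data.json.
    Mount the drive first, e.g.:
        mount -o ro /dev/disk/by-label/config-2 /mnt/config
    """
    path = os.path.join(mountpoint, "openstack", "latest", "meta_data.json")
    with open(path) as f:
        return json.load(f)
```

On a cirros or cloud-init guest this is roughly what the init scripts do themselves, which is why the cloud-init version matters.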
17:21:27 OK so let's go to bugs
17:21:36 #topic bugs and QA
17:22:38 #link https://bugs.launchpad.net/devstack/+bug/1119268
17:22:39 Launchpad bug 1119268 in devstack "XS devstack fails to install on Quantal" [Undecided,Fix released]
17:22:59 so that was in the agenda, following on from last week
17:23:27 Yes, we successfully installed a devstack - quantal combo.
17:23:50 I have a private branch looking to simplify the scripts, but not had time to work on that beyond a quick stab
17:23:58 on github
17:24:12 any other XenAPI bugs people want to discuss
17:24:22 Is armando around?
17:24:41 live block migration bugs.
17:24:54 johnthetubaguy: not a bug per se, but we're looking to do some "diagnostics" on a vm that a support person would execute.
17:24:55 But it's not strictly xenapi, I think.
17:25:11 basically calls that any support person would execute when first investigating an issue
17:25:18 would love some of your thoughts on things we would want to include.
17:25:29 nova-api extensions or more specific?
17:25:38 there is a 'diagnostics' extension in the nova api
17:25:47 but we're wanting to do some xen specific checks
17:25:53 got ya
17:25:55 which would likely be some xenapi plugins
17:26:18 things like Dom0 resource levels, but I guess that is more monitoring
17:26:23 pvo: is there any blueprint for that, or some other info?
17:26:24 ideally we could develop the extensions without having to modify too much nova code
17:26:41 right, makes sense
17:26:43 matelakat: not yet. We're just forming thoughts around it now. Not sure if it's too late for blueprints in this cycle.
17:26:47 can get it going for the next one though.
17:27:02 I'll get a bp started and we can add to it.
17:27:08 cool.
17:27:32 there was a really good session (maybe two summits ago) where devops guys went through the main pain they were seeing
17:27:44 it might be good to have a more structured version of that
17:28:02 Sorry guys - I know I'm late!
17:28:12 johnthetubaguy: that's exactly what I want
17:28:18 the thing that comes to mind is Xen health checks, like resource levels
17:28:31 rabbit queue lengths are interesting
17:28:44 there are checks on the hypervisor and checks on what the vm is doing.
17:28:50 also looking for things like noisy neighbors, etc.
17:29:02 right, looking for average load on the VMs
17:29:06 or something like that
17:29:08 XAC can show some useful things like resource levels - we're thinking about exposing them through a supplemental pack
17:29:24 XAC?
17:30:36 I am not sure what Bob meant with the XAC. Let's wait until he reconnects.
17:30:42 cool, you back, XAC?
17:30:44 sorry - dunno what happened there guys
17:31:13 Bob, did you mean the javascript stuff?
17:31:14 what was XAC again?
17:31:18 XAC, yes, it's a useful little tool to do some very lightweight management of a XS host
17:31:28 #link https://github.com/jonludlam/xac
17:31:28 bobba: where would I find more info on that?
17:31:30 ah
17:31:36 oh right, that fella
17:31:48 talks straight to XenAPI right?
17:31:53 via javascript
17:31:58 That's the one
17:32:05 #link xac
17:32:08 sorry!
17:32:13 #link https://github.com/jonludlam/xac
17:32:14 Okay, so if that's Xapi, it should be easy.
17:32:54 it is certainly a nice visual tool to check the "health" of the hypervisor
17:33:12 my worry is not stamping on monitoring things
17:33:26 they are clearly different though
17:33:29 looks interesting. Would have to figure out how to get it to scale
17:33:42 it's got some charts there
17:33:43 it just sits on each hypervisor
17:33:49 doesn't really scale pvo
17:33:58 but it's useful to look at individual hosts
17:34:08 I guess the scalable monitoring would be through ceilometer?
17:34:42 I worked with these guys once http://real-status.com/
17:34:53 very cool collection and visualization
17:34:58 but not open source
17:35:10 I think we are going too far.
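For the diagnostics idea: XenAPI is plain XML-RPC, so a per-host check can talk to xapi with nothing but the standard library, much as XAC does from javascript. A minimal sketch (the URL and credentials are placeholders, and the two-argument login signature is an assumption about the XenAPI version; every response comes back in a Status/Value envelope):

```python
import xmlrpc.client

def unwrap(result):
    """XenAPI wraps every call in {'Status': ..., 'Value' or 'ErrorDescription': ...}."""
    if result.get("Status") != "Success":
        raise RuntimeError(result.get("ErrorDescription"))
    return result["Value"]

def host_memory_free(url, user, password):
    """Example health check: free memory on a XenServer host, in bytes.

    Placeholder credentials; a real check would use https and proper auth.
    """
    proxy = xmlrpc.client.ServerProxy(url)
    session = unwrap(proxy.session.login_with_password(user, password))
    try:
        host = unwrap(proxy.host.get_all(session))[0]
        metrics = unwrap(proxy.host.get_metrics(session, host))
        # XenAPI returns int64 fields as strings
        return int(unwrap(proxy.host_metrics.get_memory_free(session, metrics)))
    finally:
        proxy.session.logout(session)
```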
17:35:15 anyway, certainly worth some thought
17:35:17 later
17:35:19 indeed
17:35:26 Let's whiteboard some ideas
17:35:41 I guess pvo will register a bp.
17:35:47 bobba: scale meaning, if we built it into another tool to do diags on a host
17:36:20 matelakat: planning on doing that this afternoon. Or actually training someone on building BPs. You'll see one, but it likely won't be from me.
17:36:50 pvo: thanks
17:36:51 I guess providing the URL to that host is what you need to integrate, assuming you have access to that network
17:36:56 or some kind of proxy
17:37:22 anyway, let's not get too distracted I guess
17:37:22 pvo: It just uses javascript, XAPI and the RRD information - so I imagine that would be easily portable. However, it will only work on remote hosts in Tampa+ because that's when JSON was added
17:37:30 #topic Open Discussion
17:37:33 johnthetubaguy: that and the login credentials. all our hosts are different.
17:37:46 bobba: gotcha
17:38:14 pvo: right, makes me think back to integrating keystone into XenAPI again
17:39:47 the web gui can have a token for limited access, or something, but needs thought
17:40:02 johnthetubaguy: that would be interesting for sure.
17:40:22 let's jump to stuff we have on the agenda
17:40:27 I think we'd talked about doing ldap as well for xapi.
17:40:28 kk
17:40:59 right, I guess that is already there in some capacity with AD integration
17:41:19 so we covered the XenAPI NFS blog already
17:41:23 #link https://review.openstack.org/#/c/15022/
17:41:31 LDAP on xenserver is very possible, but it does need some tuning and a few extra packages installed
17:42:05 Oh, yeah, ovs.
17:42:11 talk of XenServer supplemental pack
17:42:31 I guess that review includes extra plugins
17:42:54 there is also talk of python26 packages, git, puppet, and others
17:42:57 Yeah - that's right. So I was thinking, I think that the only reason that devstack pulls via a zipball is because we don't have git in dom0?
17:43:14 right
17:43:27 EPEL can give you that if you want it for dev
17:43:39 we used to do it that way
17:43:53 but moved away for reasons that escape me
17:43:55 Okay. So we're planning to produce a supplemental pack that can be installed on a XenServer that will install python 2.6 - I was wondering if pulling in git and simplifying the XenServer devstack setup scripts would be a good option
17:44:07 git would be great :)
17:44:25 it has to be handy for pulling devops scripts too right
17:44:34 I'm sure!
17:44:38 definitely
17:45:10 Give me an action on looking at that.
17:45:11 What else are people dying for in dom0? Clearly the best things to consider are ones that don't affect the base XS installation
17:45:16 Maybe these can be separate sup packs, since there is not much overhead in a suppack?
17:45:28 +1 for many small suppacks
17:45:35 matelakat: look at what?
17:46:05 are there plans to land Ceilometer support for XenServer?
17:46:15 small supp packs would work, then you could have a chef and a puppet pack separated if needed
17:46:17 look at the suppack creation, and how hard it is. So at the next meeting, I could show some progress.
17:46:26 antonym: you read my mind
17:46:35 Ceilometer for XenServer is being looked at for Grizzly. I haven't caught up with rfy
17:46:43 sorry! premature-enter pressing
17:47:02 yjing5 (I think, can't quite remember his IRC nick) was looking at Ceilometer for XenServer.
17:47:08 yep someone from Intel was taking a look, not sure if sandy was too
17:48:05 #action matelakat to look at suppack for git, python26 (with pip), puppet
17:48:32 a supplemental pack is an rpm + metadata; there is a public SDK with tools to build them
17:48:59 there are some old build scripts that could help on github in the geppetto bits I think
17:48:59 zykes-: yes, we're working on Ceilometer and Xen support at RAX
17:49:04 Actually part of the "DDK" - driver development disk
17:49:14 that's the name, cheers
17:49:19 oh !
:)
17:49:19 it may be further out for fully supported however
17:49:28 Okay, I'll look at it, I just wanted a record of that intention.
17:49:31 driver development __kit__
17:49:35 sandywalsh is working on that
17:50:02 pvo: on the suppack packaging?
17:50:21 ceilometer I think
17:50:22 matelakat: no, sorry. on the ceilometer xenserver support.
17:50:27 ok.
17:50:38 cool, any more on those bits?
17:50:47 looks promising on python26 et al
17:51:07 official stuff will look good
17:51:27 python what johnthetubaguy ?
17:51:34 OK, next item is "Getting images suitable for use in XenServer: ideal source, format and mechanism for uploading."
17:51:55 zykes: a supplemental pack that contains python26 to run in Dom0
17:51:58 ok :)
17:51:59 yeah - I raised that one
17:52:22 mate had a blog post on some fun ways to do some of this
17:52:29 but the docs are very lacking
17:52:35 There's one image (ubuntu lucid) that I know about - I think someone at RAX generated that
17:52:42 So we are using this image here: #link https://github.com/citrix-openstack/warehouse
17:52:49 particularly all the semi-secret glance flags
17:52:50 for testing.
17:53:01 I'm concerned that the blog post is a little too difficult for mainstream really
17:53:23 I noticed that Canonical(?) are hosting a whole bunch of qcow2 images for openstack consumption
17:53:31 right
17:53:44 I wondered if there is any way we can capitalise on those, or have VHD images that we can use in a different way
17:53:50 there was a chat at the summit about that, or perhaps the ubuntu conference
17:54:04 Is there any way to use those qcow images with XenServer?
17:54:07 Mike McClurg had contacts with Ubuntu about their cloud images and getting VHD ones
17:54:35 I know comstud has some code for doing raw->vhd using vhd-util
17:54:47 oh does he
17:54:54 basically there is no way for ubuntu to generate xenserver "happy" vhd files currently
17:54:59 I can get in touch with Mike and see what the score is with Canonical
17:55:07 yeah - we patch vhd-util for performance reasons
17:55:17 the original VHD spec doesn't do everything we'd want
17:55:27 #link http://blogs.citrix.com/2012/10/04/convert-a-raw-image-to-xenserver-vhd/
17:55:29 block alignment I guess, the stuff added to VHDX
17:55:42 #link http://blogs.citrix.com/2012/10/17/upload-custom-images-to-a-xenserver-powered-openstack-cloud/
17:55:49 yep, it's basically giving people some of these tools that will help things along
17:56:11 John, do you know about any tricks to get qcow working on XS?
17:56:18 I know comstud was thinking about linking vhd-util from qemu-img convert
17:56:35 erm, not actually tried, it involves hacking the SM scripts I think
17:56:40 which are python at least
17:57:04 it'd be nice to get the vhd-util from XS upstream into qemu
17:57:10 could you mail me a contact person to discuss it with?
17:57:12 you can do it with XL underneath, but there is no blkback driver or something
17:57:34 qcow?
17:57:49 the storage architect is who you want
17:57:52 ok
17:57:55 keith petley
17:58:02 ta
17:58:07 (+ spelling corrections)
17:58:48 I think you can do raw by adding raw files and some small hacks
17:58:54 which is another option
17:59:05 BobBall: what was your exact question
17:59:17 around images?
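On the raw->vhd point discussed above: conceptually a fixed-size VHD is just the raw image with a 512-byte footer appended, which is why tools like vhd-util (or comstud's code) can convert cheaply. A sketch of building that footer per the published VHD spec follows, purely as illustration; the CHS geometry here is a crude placeholder, and XenServer's patched vhd-util remains the right tool for real images:

```python
import struct
import time
import uuid

def vhd_fixed_footer(size_bytes):
    """Build the 512-byte footer that turns a raw image into a fixed VHD.

    Field layout follows the Microsoft VHD specification. Appending this
    footer to a raw disk image yields a fixed-size VHD.
    """
    footer = struct.pack(
        ">8sII Q I 4sI4s QQ HBB I",
        b"conectix",                   # cookie
        0x00000002,                    # features (reserved bit always set)
        0x00010000,                    # file format version 1.0
        0xFFFFFFFFFFFFFFFF,            # data offset: none for fixed disks
        int(time.time()) - 946684800,  # seconds since 2000-01-01 UTC
        b"sktc",                       # creator application (arbitrary tag)
        0x00010000,                    # creator version
        b"Wi2k",                       # creator host OS
        size_bytes,                    # original size
        size_bytes,                    # current size
        65535, 16, 255,                # geometry C/H/S (placeholder values)
        2,                             # disk type 2 = fixed
    )
    footer += struct.pack(">I", 0)     # checksum placeholder at offset 64
    footer += uuid.uuid4().bytes       # unique id
    footer += b"\x00"                  # saved state flag
    footer += b"\x00" * 427            # reserved padding, total 512 bytes
    # checksum = one's complement of the byte sum with the checksum zeroed
    checksum = (~sum(footer)) & 0xFFFFFFFF
    return footer[:64] + struct.pack(">I", checksum) + footer[68:]
```

This also illustrates why Ubuntu's qcow2 images can't simply be renamed: the container formats are structurally different, so some conversion step (qemu-img, vhd-util, or a glance-side worker) is unavoidable.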
17:59:36 bobba: I mean
17:59:45 I'm not sure
17:59:58 I think a simpler way, whatever that way is, would be better
18:00:14 for example us using qcow2 images, or automatically converting on upload or something like that
18:00:21 I guess updating horizon to support Xen related upload options would help
18:00:45 maybe scripts that wrap the glance cli that add the appropriate extra keys
18:00:56 or glance cli extensions
18:01:06 ah, ok
18:01:10 you mean image formats
18:01:23 I always assumed people would prep on the XenServer and export the vhd
18:01:29 yeah
18:01:36 we are out of time, so we should wrap up
18:01:45 I think we could support other types of images easily - if we use qemu-img convert to pipe the bytes to the attached vdi.
18:02:10 I think that's fine for small or huge deployments - but I guess some people want to consume other images - hence the market for the ubuntu qcow2 ones?
18:02:13 OK, this goes back to having a glance "convert" kind of function, possibly using a conversion worker
18:02:13 But that's only one direction.
18:02:36 then glance supporting multiple disk types against a single image "parent" maybe
18:02:42 I've had a couple of guys asking me directly for XS images - the qcow2 ones are easily linked from OS documentation at the moment
18:02:44 just a thought :)
18:03:17 you can use the three-part raw amazon image directly though I think
18:03:32 oh yes that's true
18:03:35 could be wrong on that one, maybe you need tools
18:03:37 the ami images?
18:03:42 yes
18:03:43 anyways
18:03:45 we are out of time
18:03:59 I did that today! :)
18:04:01 true
18:04:10 :-)
18:04:21 #endmeeting