16:06:10 <primeministerp> #startmeeting hyper-v
16:06:11 <openstack> Meeting started Tue Jul 16 16:06:10 2013 UTC.  The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:06:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:06:14 <openstack> The meeting name has been set to 'hyper_v'
16:06:19 <primeministerp> hi everyone
16:06:25 <alexpilotti> hi there!
16:06:50 <ociuhandu> hi all
16:07:07 <primeministerp> so I think this might be quick
16:07:24 <primeministerp> figure we give some quick updates
16:07:37 <primeministerp> alexpilotti: I know you are in the process of pushing code
16:07:47 <alexpilotti> yep
16:08:00 <primeministerp> alexpilotti: is there anything specific which we should keep our eyes on
16:08:02 <alexpilotti> WMIV2 and Dynamic memory, as discussed last time
16:08:14 <primeministerp> alexpilotti: so still no changes
16:08:34 <alexpilotti> I'm going to send the email with the gerrit review urls as soon as those are up
16:08:47 <primeministerp> alexpilotti: great
16:08:58 <primeministerp> I don't see luis around
16:09:13 <primeministerp> so
16:09:37 <primeministerp> that means the puppet discussion really isn't worth having w/o him
16:10:12 <primeministerp> alexpilotti: any more crowbar related bits that are interesting to note?
16:10:48 <ociuhandu> primeministerp: we have started sending the first changes
16:10:52 <alexpilotti> yes, we started pushing for review the Hyper-V support in Crowbar pebbles
16:10:57 <primeministerp> ociuhandu: excellent
16:11:15 <alexpilotti> ociuhandu: would you like to give some details?
16:11:29 <primeministerp> no matt ray on the channel right now either
16:11:42 <ociuhandu> primeministerp: we had to add support for samba, filter some of the linux-specific bits, so that they will not apply to windows
16:12:00 <primeministerp> ociuhandu: the usual suspects
16:12:04 <primeministerp> ociuhandu: keep up the good work
16:12:21 <ociuhandu> we try to keep the node concept transparent, so that we don't treat the windows and linux nodes differently
16:12:35 <ociuhandu> but rather apply only what's supported / required
16:13:18 <hanrahat> ociuhandu: does that require any refactoring of the base code?
16:13:32 <ociuhandu> the issues we had were with the ruby version update, as the windows clients were using ruby 1.9 while the linux ones are on 1.8
16:13:58 <primeministerp> hanrahat: basically an issue w/ the upstream chef bits for windows being newer than the linux side
16:14:00 <ociuhandu> and there are syntax changes between them; good thing that 1.8 also supports the 1.9 syntax
16:14:57 <ociuhandu> hanrahat: if we're talking about the base crowbar code, yes, we need to add in all windows specific parts and make sure that the existing linux-specific ones do not apply on windows nodes
16:16:11 <ociuhandu> one other reason for not removing part of the roles from the windows clients completely is that in the future those services should be available for windows nodes too
16:16:11 <primeministerp> ociuhandu: thanks
16:16:39 <hanrahat> ociuhandu: yes, base crowbar code... thanks
16:17:03 <zehicle_> sorry, I was late
16:17:10 <primeministerp> hi rob
16:17:15 <zehicle_> Hey!
16:17:17 <ociuhandu> zehicle: hi Rob
16:17:40 <primeministerp> so moving on
16:17:51 <primeministerp> alexpilotti: there were the new bits you had to share
16:18:22 <primeministerp> alexpilotti: if you want to mention it, re the runner
16:19:18 <primeministerp> alexpilotti: the job runner?
16:19:46 <primeministerp> he must be sleeping
16:20:14 <primeministerp> ociuhandu: do you want to comment on the glance image cleanup processes as well for garbage collection?
16:20:33 <primeministerp> ociuhandu: I know alex wanted to mention it
16:21:41 <primeministerp> so
16:21:47 <primeministerp> if he's not going to mention it
16:22:03 <alexpilotti> primeministerp: I believe ociuhandu got disconnected
16:22:12 <primeministerp> alexpilotti: you too?
16:22:42 <alexpilotti> primeministerp: nope I was around
16:22:59 <alexpilotti> primeministerp: unless you pinged me before and I was disconnected as well :-)
16:23:05 <alexpilotti> zehicle: hi!
16:23:18 <alexpilotti> zehicle: ociuhandu was just talking about Crowbar
16:23:41 <alexpilotti> primeministerp: should we switch to the next topic?
16:23:56 <alexpilotti> as I don't see any reply from zehicle or ociuhandu
16:24:00 <zehicle_> I'm here
16:24:24 <primeministerp> alexpilotti: I was waiting for you
16:24:35 <primeministerp> alexpilotti: to discuss the glance cleanup
16:24:40 <alexpilotti> zehicle_: I was pinging you on the wrong nick :-)
16:24:43 <zehicle_> missed the earlier thread on CB -> there are pulls to bring HyperV into CB "pebbles" Grizzly
16:25:01 <alexpilotti> zehicle_: yep, the first batch
16:25:03 <ociuhandu> me too, looks like colloquy came back to life
16:25:11 <zehicle_> I had a dead IRC client that was holding the nick :(
16:25:39 <alexpilotti> zehicle_: looks like we all have issues today here with IRC except primeministerp :-)
16:25:40 <zehicle_> We're coordinating w/ SUSE to review and accept.
16:25:56 <primeministerp> hehe
16:26:18 <alexpilotti> zehicle_: cool, let us know if you'd like to meet on IRC / Skype / etc to discuss them
16:26:43 <alexpilotti> zehicle_: there's quite an amount of work out there in those patches :-)
16:26:45 <ociuhandu> zehicle_:  one quick thing: the reason for not removing part of the roles from the windows clients completely is that in the future those services should be available for windows nodes too
16:27:07 <alexpilotti> zehicle_: e.g. nagios
16:27:08 <ociuhandu> zehicle_: like ipmi, ganglia, nagios
16:27:19 <zehicle_> We've got a regular design/plan cadence setup for Crowbar.  Plan is this thursday - would be helpful to include you there
16:27:47 <zehicle_> +1
16:27:52 <ociuhandu> we chose to "skip" the recipe until the windows bits get implemented, rather than removing the role and adding it back later on
16:28:19 <zehicle_> for the broader OpenStack community - we're trying to set up Crowbar as a quick way to do a Grizzly + HyperV deploy
16:28:25 <alexpilotti> ociuhandu: did you tell zehicle_ about the IPMI chef issue on the crowbar UI?
16:28:25 <ociuhandu> that would be great
16:28:58 <alexpilotti> cool, at what hour is the meeting on Thu?
16:29:06 <zehicle_> 8 am central
16:29:18 <zehicle_> bit.ly/crowbar-calendar
16:29:31 <ociuhandu> alexpilotti: no, but i suggest we talk about that at the crowbar meeting. As a very short note, we do not have DMI info on the windows nodes, so the web interface is failing
16:30:37 <primeministerp> ok
16:31:26 <primeministerp> alexpilotti: do you want to mention the glance cleanup bits?
16:31:47 <alexpilotti> oki, added
16:31:47 <ociuhandu> zehicle_: great, we'll be there
16:33:49 <alexpilotti> sure
16:33:49 <alexpilotti> primeministerp: should we change topic? :-)
16:33:55 <primeministerp> alexpilotti: i'm waiting on you
16:34:04 <primeministerp> #glance cleanup scripts
16:34:06 <primeministerp> er
16:34:12 <alexpilotti> :-)
16:34:13 <primeministerp> #topic glance cleanup
16:34:14 <alexpilotti> thx
16:34:30 <primeministerp> alexpilotti: all you
16:34:34 <alexpilotti> so we ran into an issue with the image cache
16:34:39 <alexpilotti> on nova compute nodes
16:35:07 <alexpilotti> we built a validation system that accepts arbitrary images sent in with an HTTP REST API call
16:35:44 <alexpilotti> that image gets downloaded, included in glance, a new image is spawned, a floating ip attached and the user gets notified of the availability
16:36:13 <alexpilotti> we are using it on a variety of images that need to be validated on Hyper-V
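The flow alexpilotti describes (download the image, register it in glance, spawn an instance, attach a floating IP, notify the user) can be sketched as a plain orchestration function. This is not the actual system's code; every step name here is an illustrative stand-in, injected as a callable so the flow can be exercised without a live deployment:

```python
def validate_image(image_url, download, register, boot, attach_ip, notify):
    """Run the validation flow: fetch an arbitrary image, register it
    in glance, spawn an instance from it, attach a floating IP, and
    notify the user of availability. Each step is an injected callable
    (hypothetical stand-ins for the real glance/nova client calls)."""
    path = download(image_url)      # fetch the image to local storage
    image_id = register(path)       # upload it into glance
    server_id = boot(image_id)      # spawn an instance on Hyper-V
    ip = attach_ip(server_id)       # attach a floating IP
    notify(ip)                      # tell the user it's reachable
    return image_id, server_id, ip
```

Injecting the steps also makes it easy to swap in test doubles when validating the pipeline itself.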
16:36:49 <alexpilotti> The issue is that the glance image cache used by nova-compute
16:36:49 <alexpilotti> is not getting "garbage collected"
16:36:59 <alexpilotti> this is not an issue in a regular environment, where images are relatively static
16:38:38 <alexpilotti> while it's an issue in a case like this one where potentially hundreds of images are getting added on a node every day
16:38:43 <alexpilotti> the result, as you can imagine, is that the host runs out of space sooner or later
16:38:44 <alexpilotti> as a workaround, I wrote a powershell script that checks which images are not in glance anymore and deletes the corresponding VHD/VHDX files
16:39:04 <alexpilotti> this is scheduled as a Windows task every 15 minutes
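The core check alexpilotti's workaround performs can be sketched as follows. The actual script was PowerShell; this is the same logic in Python form, and the assumption that cached base images are named `<image_id>.vhd`/`.vhdx` is illustrative:

```python
import os

def find_orphaned_cache_files(cache_dir, glance_image_ids,
                              extensions=(".vhd", ".vhdx")):
    """Return cached VHD/VHDX files whose glance image no longer exists.

    Assumes cached base images are named <image_id>.vhd(x) -- an
    illustrative layout, not necessarily the driver's real one.
    glance_image_ids is the set of image IDs still present in glance.
    """
    orphans = []
    for name in os.listdir(cache_dir):
        base, ext = os.path.splitext(name)
        # A file is an orphan if it is a cached image whose ID is no
        # longer known to glance.
        if ext.lower() in extensions and base not in glance_image_ids:
            orphans.append(os.path.join(cache_dir, name))
    return sorted(orphans)
```

A scheduled task would call this against the nova-compute image cache directory and delete whatever it returns.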
16:39:33 <alexpilotti> The best solution would be to add this garbage collection to the nova-compute driver
16:39:48 <alexpilotti> For Grizzly we can just use the script
16:40:26 <alexpilotti> while for Havana it'd be nice to include it in the official codebase :-)
16:40:31 <alexpilotti> comments?
16:40:40 <primeministerp> alexpilotti: i'm for it
16:40:43 <zehicle_> this is an issue for locally cached images - not boot from block?
16:41:10 <primeministerp> zehicle_: correct
16:41:15 <alexpilotti> zehicle_: yes
16:42:07 <alexpilotti> zehicle_: talking about boot from block, what's the status of the EQL driver? :-)
16:42:22 <zehicle_> it's in the pebbles code base
16:42:52 <primeministerp> zehicle_: it's not a standalone project?
16:43:03 <zehicle_> no, it's part of the cinder barclamp
16:43:25 <alexpilotti> zehicle_: don't you plan to add it in Cinder?
16:43:33 <zehicle_> https://github.com/crowbar/barclamp-cinder/tree/release/mesa-1.6/master/chef/cookbooks/cinder/files/default
16:44:09 <alexpilotti> zehicle_: so the only way to use it officially is through Crowbar?
16:44:14 <zehicle_> that's the plan... would have to be Havana at this point.
16:44:24 <primeministerp> zehicle_: so the equallogic doesn't connect directly?
16:44:27 <primeministerp> to cinder
16:44:30 <zehicle_> *officially* - the code's there
16:44:45 <alexpilotti> zehicle_: got it :-)
16:44:57 <primeministerp> zehicle_: ok
16:45:15 <zehicle_> EQL acts just like any cinder plug-in.  it sets up the iSCSI targets for the VMs
16:45:35 <primeministerp> zehicle_: but it doesn't run on the eql
16:45:39 <primeministerp> correct?
16:45:50 <primeministerp> it's running on the controller?
16:46:08 <zehicle_> right, it connects to the SSH interface for the EQL
16:46:14 <primeministerp> got it
16:46:22 <primeministerp> so it's like a technology bridge
16:46:53 <zehicle_> hmmm, I'd say that it's using SSH to access the API
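As zehicle_ notes, a driver of this shape runs on the controller and drives the array by issuing CLI commands over SSH. A minimal sketch of assembling such a command; the EqualLogic CLI syntax shown is an assumption for illustration, not taken from the actual cinder driver:

```python
def build_volume_create_cmd(volume_name, size_gb, thin_provision=True):
    """Build the CLI command a cinder-style driver would send over SSH
    to an EqualLogic array to create a volume. The command words here
    are illustrative; the array's real CLI reference is authoritative."""
    parts = ["volume", "create", volume_name, "%dGB" % size_gb]
    if thin_provision:
        parts.append("thin-provision")
    return " ".join(parts)
```

The driver would run the resulting string through an SSH session to the array's management address, then parse the output to set up iSCSI targets for the VMs.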
16:46:56 <alexpilotti> zehicle_: we have an EQL 6xxx here, I guess we'll give it a try :-)
16:47:09 <primeministerp> zehicle_: i just acquired one too, i'm going to give it a try as well
16:47:28 <primeministerp> alright, anyone have anything additional to add?
16:48:10 <primeministerp> alexpilotti: ociuhandu ?
16:48:18 <zehicle_> I have a question about Tempest runs against the HyperV work
16:48:26 <primeministerp> haha
16:48:27 <primeministerp> ok
16:48:33 <primeministerp> zehicle_: shoot
16:48:36 <zehicle_> is there parity on it?  if not, do you have an idea of the gaps?
16:48:58 <primeministerp> no idea
16:49:07 <primeministerp> it's on the list of todos
16:49:20 <primeministerp> i'm assuming we're going to have to add bits when we get there
16:49:28 <zehicle_> no prob - it's something we can check when we spin up the CB deploy (since that's part of the CB install)
16:50:28 <primeministerp> zehicle_: anything else to add?
16:50:29 <zehicle_> from my work on the Board side, there's going to be more emphasis on status of Tempest tests
16:50:39 <alexpilotti> primeministerp, hanrahat: we should IMO consider this as part of the work we have to do ASAP
16:50:40 <primeministerp> zehicle_: yes we've been following the thread
16:50:41 <zehicle_> which, IMHO, is a very good thing for the project
16:51:00 <primeministerp> alexpilotti: yes, indeed
16:51:17 <zehicle_> We'll need to think if that's a grizzly or havana challenge
16:51:24 <primeministerp> there are some changes going on here which will hopefully address this from my perspective
16:51:30 <alexpilotti> zehicle_: we want to get the driver into B category as soon as the CI is ready
16:51:40 <hanrahat> primeministerp: agreed... let's discuss among the three of us later this week.  can you set up a meeting?
16:51:52 <primeministerp> hanrahat: i need to wait until hashir is back
16:52:05 <primeministerp> hanrahat: a lot is dependent on changes on our team
16:52:10 <hanrahat> primeministerp: that's fine
16:52:11 <primeministerp> hanrahat: from my limited knowledge
16:52:22 <alexpilotti> primeministerp: IMO this is before the CI stuff gets done
16:52:33 <alexpilotti> primeministerp: we need to be sure that the tests run fine
16:52:40 <primeministerp> alexpilotti: definitely
16:52:42 <alexpilotti> primeministerp: as in no Linux dependencies, etc
16:52:48 <primeministerp> alexpilotti: agreed
16:53:03 <primeministerp> alexpilotti: I'll try to schedule something for the end of the week
16:53:20 <alexpilotti> zehicle_: when you refer to tempest, you refer to a gate or just compliance with the tests?
16:55:20 <primeministerp> alexpilotti: he's referring to tempest as being the scorecard for compliance
16:55:27 <zehicle_> yy
16:55:40 <primeministerp> alexpilotti: and that will feed into refstack
16:55:50 <primeministerp> i'm assuming at some level
16:55:56 <alexpilotti> primeministerp: yep, but those are two different stages
16:56:00 <primeministerp> yes
16:56:18 <alexpilotti> getting the tests running and "green" is a thing that we can do w/o the CI in place
16:56:30 <alexpilotti> the gate is the next step that involves the CI
16:56:31 <zehicle_> ideally both - we'd like to be able to vote on gating based on multi-node & multi-os deploys
16:56:48 <primeministerp> agreed
16:57:00 <zehicle_> but even getting a refstack report would be a good step
16:57:04 <alexpilotti> +1000
16:57:05 <primeministerp> alexpilotti: if you want to put resources on tempest then feel free
16:57:22 <primeministerp> alexpilotti: it's on the list of todo's and sooner is always better
16:57:22 <alexpilotti> yep, I will
16:57:28 <primeministerp> alexpilotti: excellent
16:57:55 <primeministerp> looks like we're almost out of time
16:57:58 <alexpilotti> Starting already from this week
16:58:04 <primeministerp> alexpilotti: good
16:58:16 <primeministerp> so additional comments?
16:58:42 <primeministerp> alrighty then, i'll end it
16:58:46 <primeministerp> thanks everyone for the time
16:58:51 <primeministerp> #endmeeting