16:01:05 <primeministerp> #startmeeting hyper-v
16:01:06 <openstack> Meeting started Tue May 21 16:01:05 2013 UTC.  The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:10 <openstack> The meeting name has been set to 'hyper_v'
16:01:17 <primeministerp> Hi Everyone
16:01:22 <luis_fdez> hi!
16:01:29 <gokrokve> hi
16:01:32 <mriedem> hi
16:01:35 <primeministerp> wow
16:01:36 <pnavarro> hello
16:01:39 <primeministerp> lots of new folk
16:01:50 <primeministerp> great
16:01:51 <kirankv> hi
16:01:55 <primeministerp> ociuhandu: where's alex?
16:02:42 <primeministerp> let's wait a minute for those who are late
16:02:52 <primeministerp> luis_fdez: thanks for the pull requests
16:03:05 <liuxpei> hi, this is Xiao Pei from China
16:03:05 <primeministerp> luis_fdez: we should talk about this more off line
16:03:13 <primeministerp> liuxpei: hello
16:03:16 <ociuhandu> primeministerp: entering now
16:03:17 <ociuhandu> hi all
16:03:20 <primeministerp> ociuhandu: great
16:03:23 <luis_fdez> primeministerp, ok, I have some suggestions/ideas to discuss
16:03:31 <primeministerp> luis_fdez: we need to coordinate
16:03:42 <primeministerp> luis_fdez: so you can understand what I already have done
16:03:52 <primeministerp> luis_fdez: and what's not included in those bits yet
16:03:53 <alexpilotti> hi there!
16:03:58 <primeministerp> alexpilotti: hi alex
16:04:01 <primeministerp> full house today
16:04:05 <pnavarro> hi alexpilotti !
16:04:05 <luis_fdez> primeministerp, ok
16:04:18 <primeministerp> ok let's begin then
16:04:30 <alexpilotti> liuxpei: hi, thanks for joining us!
16:04:41 <primeministerp> #topic open issues
16:04:56 <primeministerp> so there have been some new bugs
16:05:00 <liuxpei> thanks, I will try to join as much as I can :)
16:05:07 <alexpilotti> I can start on this one
16:05:14 <primeministerp> alexpilotti: please
16:05:23 <alexpilotti> There's an annoying bug on snapshot management
16:05:38 <alexpilotti> related to the size of the image
16:06:01 <primeministerp> liuxpei: thanks for your help with that one
16:06:26 <alexpilotti> I attempted the easy way, consisting of trying to convince Glance that the size of the image is the virtual one, not the VHD file size
16:06:37 <alexpilotti> but that's a dead end
16:07:09 <pnavarro> alexpilotti: please  link?
16:07:11 <alexpilotti> the only other way to get this done is to consider the VHD file size
16:07:23 <alexpilotti> pnavarro: right away :-)
16:07:24 <liuxpei> https://bugs.launchpad.net/nova/+bug/1177927
16:07:25 <uvirtbot> Launchpad bug 1177927 in nova "VHD snapshot from Hyper-V driver is bigger than original instance" [Undecided,In progress]
16:07:27 <pnavarro> thanks
16:07:30 <alexpilotti> tx liuxpei
16:07:38 <liuxpei> yw
16:07:53 <primeministerp> #link https://bugs.launchpad.net/nova/+bug/1177927
16:08:18 <alexpilotti> so when we spawn an image, we could simply resize the VHD to instance['root_gb'] - vhd_header_size
16:08:54 <alexpilotti> this way we can guarantee that the file size will always be less than the flavor size
16:09:07 <primeministerp> alexpilotti: and I'm assuming this is currently the easiest way to overcome the size difference
16:09:13 <alexpilotti> we have to be consistent with this during resize as well
16:09:36 <alexpilotti> yep. The shortest path after trying to trick Glance failed :-)
16:09:38 <primeministerp> alexpilotti: does this make it easier to be consistent?
16:09:50 <alexpilotti> we have no choice
16:09:54 <primeministerp> alexpilotti: ok then
16:10:05 <alexpilotti> I mean, thin disks are working now
16:10:12 <primeministerp> alexpilotti: then let's do it
16:10:16 <alexpilotti> because the file size limit is not capped
16:10:23 <liuxpei> a question: for VHDs of different sizes, is vhd_header_size always the same?
16:10:34 <alexpilotti> note: we also have to backport it to grizzly
16:10:42 <alexpilotti> liuxpei: yep
16:11:02 <alexpilotti> we have to remember to also check the VHDX header size ;-)
16:11:04 <primeministerp> alexpilotti: does it increase for vhdx?
16:11:09 <primeministerp> hehe
16:11:19 <alexpilotti> it's a different value, but still fixed
16:11:29 <primeministerp> but different
16:11:36 <primeministerp> i would assume it would increase
16:11:40 <alexpilotti> since we are talking about a few KB
16:11:53 <primeministerp> ok
16:12:02 <alexpilotti> we can just add a fixed value > max(vhd_header, vhdx_header)
16:12:23 <primeministerp> i understood
16:12:24 <alexpilotti> and solve this issue
16:12:25 <primeministerp> that works
16:12:42 <primeministerp> I'm all for taking this approach; I'm assuming the amount of code is trivial
16:12:43 <alexpilotti> the alternative would be to try to add a "tolerance" in manager.py
16:13:03 <alexpilotti> but I have to convince the rest of the Nova world for that to happen ;-)
16:13:12 <primeministerp> alexpilotti: we don't want to have to do that
16:13:18 <primeministerp> ;)
16:13:44 <alexpilotti> I think we have enough w the rest there :-)
16:13:52 <primeministerp> I agree
16:13:58 <alexpilotti> ok, comments on this issue or should we move on?
16:13:58 <primeministerp> ok, are we good with this topic?
16:14:03 <alexpilotti> lol
16:14:05 <primeministerp> pnavarro: ?
16:14:11 <pnavarro> +1 !
16:14:11 <liuxpei> another question: is the file size for a VHD its actual size?
16:14:56 <primeministerp> liuxpei: right now?
16:15:00 <primeministerp> liuxpei: I believe so
16:15:02 <alexpilotti> liuxpei: can you please define "actual size"?
16:15:11 <liuxpei> disk size
16:15:19 <primeministerp> I think she means what's currently being used as disk size
16:15:23 <alexpilotti> yep file size = disk size
16:15:55 <primeministerp> ok
16:16:17 <primeministerp> #topic H1
16:16:18 <liuxpei> after adding a fixed value > max(vhd_header, vhdx_header), will the file size continue to be the disk size?
16:16:36 <alexpilotti> yep
16:16:53 <alexpilotti> and file size = disk size <= flavor size :-)
16:17:03 <liuxpei> ok, I am ok with that now~
16:17:08 <primeministerp> great
16:17:14 <alexpilotti> cool
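(A minimal sketch of the fix agreed above, assuming illustrative names rather than the actual Nova Hyper-V driver code: the virtual disk size requested at spawn or resize is the flavor's root disk size minus a fixed allowance larger than both the VHD and VHDX header overhead, so that file size = disk size <= flavor size.)

```python
# Sketch only: names and the exact allowance are assumptions, not the
# actual Nova Hyper-V driver implementation.

# Fixed allowance chosen to be > max(vhd_header, vhdx_header); both are
# only a few KB, so 1 MB is safely conservative.
VHD_HEADER_ALLOWANCE = 1024 * 1024

def get_internal_vhd_size(root_gb):
    """Virtual disk size to request so that the resulting VHD/VHDX file
    never grows past the flavor's root disk size."""
    return root_gb * 1024 ** 3 - VHD_HEADER_ALLOWANCE

# Example: a 10 GB flavor yields a virtual size just under 10 GB.
print(get_internal_vhd_size(10))  # 10736369664
```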
16:17:18 <primeministerp> alexpilotti: H1
16:17:24 <alexpilotti> ouch
16:17:27 <primeministerp> alexpilotti: hehe
16:17:33 <primeministerp> I thought you were going to say that
16:17:34 <alexpilotti> let me fetch the link with the blueprints
16:17:40 <alexpilotti> lol
16:17:41 <primeministerp> I know dust is still settling
16:18:13 <primeministerp> pnavarro: while he's mentioning blueprints
16:18:14 <alexpilotti> #link https://blueprints.launchpad.net/nova?searchtext=hyper-v
16:18:24 <primeministerp> pnavarro: are there any specific to cinder we will require?
16:18:24 <alexpilotti> so here's the list
16:18:54 <alexpilotti> I added "only" nova and ceilometer so far
16:19:12 <alexpilotti> cinder and quantum are missing (the latter for a good reason) :-)
16:19:14 <pnavarro> primeministerp: I'd add some to complete the missing features that were added in G
16:19:25 <alexpilotti> pnavarro: great
16:19:31 <primeministerp> pnavarro: thank you
16:19:41 <primeministerp> alexpilotti: yes understood
16:19:58 <pnavarro> but, I won't have time for H1
16:20:06 <primeministerp> pnavarro: fair enough
16:20:09 <primeministerp> H1 is close
16:20:21 <alexpilotti> H1 is just for waking up
16:20:28 <primeministerp> hahaha
16:20:36 <alexpilotti> H2 is the real deal and H3 is for the last minute panic
16:21:01 <alexpilotti> so there's plenty of time :-)
16:21:08 <primeministerp> ok
16:21:22 <primeministerp> alexpilotti: do you want to update on the status of the clustering discussion
16:21:26 <alexpilotti> kidding, we'll have a lot of stuff in on H2
16:21:38 <primeministerp> we missed that whole thing last week
16:21:40 <alexpilotti> ohh yeah, clustering
16:21:43 <primeministerp> maybe for the record
16:21:46 <alexpilotti> yep
16:21:53 <primeministerp> #topic clustering
16:22:09 <alexpilotti> I also want to hear IBM's opinion here :-)
16:22:18 <primeministerp> alexpilotti: I know they have one
16:22:34 <alexpilotti> so the idea is that we got tons of requests for supporting Hyper-V host level clusters
16:22:56 <alexpilotti> aka old school MSCS clusters with CSV storage etc
16:22:59 <schwicht> primeministerp: he he
16:22:59 <primeministerp> **hyper-v cluster as compute node**
16:23:02 <gokrokve> yes. this is a hot topic
16:23:07 <primeministerp> schwicht: ahh you joined us
16:23:16 <primeministerp> schwicht: glad you made it frank
16:23:29 <schwicht> sorry it took so long ...
16:23:32 <primeministerp> np
16:23:37 <alexpilotti> schwicht: nice to meet you
16:23:54 <primeministerp> alexpilotti: please continue, we have proper ibm representation in the channel
16:24:23 <alexpilotti> the idea is that most Nova core guys are totally against clustering at the Nova level
16:24:50 <alexpilotti> their main argument is that it simply doesn't belong in OpenStack
16:25:17 <alexpilotti> it's not particularly easy from a technical standpoint, but feasible
16:25:36 <alexpilotti> on the other side, support for vCenter and SCVMM might be on the roadmap
16:25:40 <liuxpei> one thing I want for VM HA is to evacuate a VM from a failed hyper-v host to another
16:25:41 <alexpilotti> by using cells
16:25:53 <gokrokve> If I am not mistaken Xen clusters were supported by OpenStack. At least I heard that it should work.
16:25:58 <alexpilotti> liuxpei: that's our goal as well
16:26:09 <liuxpei> in Havana?
16:26:30 <alexpilotti> gokrokve: they are using a specific Nova feature for "grouping" the servers
16:26:52 <alexpilotti> gokrokve: but they still have a single scheduler on top (nova-scheduler)
16:27:16 <primeministerp> alexpilotti: is that something we could take advantage of?
16:27:23 <alexpilotti> the main issue in using any type of failover clustering solution is that nova-scheduler will have a second scheduler in front
16:27:51 <alexpilotti> which is understandable
16:27:55 <primeministerp> alexpilotti: in this case the hyper-v cluster resource mgr
16:28:17 <alexpilotti> an alternative would be to consider the cluster as a single macro-host
16:28:35 <alexpilotti> leaving the actual host scheduling to the cluster
16:28:46 <alexpilotti> but that doesn't work for a lot of reasons
16:28:59 <primeministerp> alexpilotti: i can see how
16:28:59 <schwicht> alexpilotti: can you name a few?
16:29:01 <primeministerp> it doesn't
16:29:04 <primeministerp> well
16:29:08 <primeministerp> you need a single point
16:29:11 <primeministerp> of entry
16:29:19 <primeministerp> if it's handling the host scheduling
16:29:23 <primeministerp> which means
16:29:27 <primeministerp> one node would have to be that
16:29:58 <alexpilotti> issues:
16:29:59 <primeministerp> because it would make sense
16:30:12 <primeministerp> to have a nova compute on each individual node
16:30:16 <alexpilotti> nova-scheduler would see a single host with e.g. 200GB memory free
16:30:31 <alexpilotti> which actually is separated into e.g. 20GB on each of 5 hosts
16:30:36 <schwicht> ok that does not work , I agree ...
16:30:43 <primeministerp> exactly
16:30:43 <schwicht> it makes sense to see how vmware solves that
16:31:01 <alexpilotti> at this point it could try to boot a VM with 40GB, which would not work
16:31:10 <schwicht> they have a total capacity and the largest deployable capacity
16:31:42 <alexpilotti> still, the largest deployable can be above the current limit
16:31:54 <primeministerp> schwicht: they handle it by having it talk to the higher vsphere layer
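(An illustration of the macro-host problem alexpilotti describes above, with illustrative numbers: reporting the cluster as one host makes the aggregate free memory look sufficient even when no single node can fit the VM.)

```python
# Illustrative numbers only: why a single macro-host misleads nova-scheduler.
free_gb_per_host = [20, 20, 20, 20, 20]    # real free memory on each node
aggregate_free_gb = sum(free_gb_per_host)  # 100 GB -- what the scheduler sees

vm_memory_gb = 40
print(aggregate_free_gb >= vm_memory_gb)                 # True: gets scheduled
print(any(h >= vm_memory_gb for h in free_gb_per_host))  # False: boot fails
```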
16:32:00 <pnavarro> guys, I have to leave, I'll read the logs later
16:32:08 <primeministerp> pnavarro: thanks again
16:32:13 <alexpilotti> pnavarro: bye!
16:32:59 <alexpilotti> besides that, the stats would be very coarse, but that's a trivial issue
16:33:28 <alexpilotti> another problem is related to manual failover
16:33:35 <alexpilotti> or live-migration
16:33:49 <primeministerp> schwicht: make sense
16:34:07 <alexpilotti> being a single host from the nova perspective, there's no way to interact with each single host
16:34:27 <alexpilotti> I proposed an alternative, while waiting for the cell-based approach to take shape
16:35:07 <alexpilotti> there's no particular reason for being forced to use MSCS
16:35:16 <alexpilotti> as long as we can achieve proper HA
16:35:47 <gokrokve> Why not to expose all cluster components individually but provide a hint to the scheduler that it is a cluster?
16:35:50 <alexpilotti> the idea is to add a heartbeat service on each compute host
16:36:11 <alexpilotti> gokrokve: they ditched that idea
16:36:27 <alexpilotti> gokrokve: it was proposed for baremetal initially
16:36:39 <primeministerp> also, in theory you can reach the same result of VM availability without the complexity of a cluster underneath
16:37:00 <alexpilotti> anyway, to finish with the potential solution
16:37:13 <alexpilotti> we can provide agents for heartbeat
16:37:20 <schwicht> alexpilotti: primeministerp: I think you need both .. a single point of entry for the scheduling and VM management, and to enumerate cluster members to be able to set a node in maintenance mode
16:37:24 <alexpilotti> and provide failover on top of nova itself
16:37:46 <alexpilotti> Nova has a feature called "evacuate"
16:38:06 <alexpilotti> we can use that to failover in case of missing heartbeat reports
16:38:24 <alexpilotti> and handle the usual live-migration for manual failovers as we already do
16:38:52 <alexpilotti> IMO this would be a fairly simple approach, working with any hypervisor and available in a few days
16:39:20 <schwicht> alexpilotti: you would miss MS System Center co-existence .. that you may get with cells, or did I misunderstand that?
16:39:21 <alexpilotti> think about HALinux or similar solutions as a reference
16:39:37 <primeministerp> schwicht: you would get that w/ a cell
16:39:38 <alexpilotti> schwicht: correct
16:40:03 <primeministerp> ok
16:40:04 <alexpilotti> schwicht: but for that we need to wait for cells to get to the stage to support it
16:40:28 <alexpilotti> schwicht: and even with all of VMware's pressure on the subject, I doubt it will happen for Havana
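(A rough sketch of the heartbeat-plus-evacuate proposal above. This is a hypothetical watchdog, not an existing OpenStack service; the client import path, constructor arguments, and evacuate signature follow the Grizzly-era python-novaclient and may differ in other releases.)

```python
# Hypothetical watchdog built on Nova's "evacuate" feature, per the
# proposal above; names, credentials, and thresholds are assumptions.
import time
from novaclient.v1_1 import client

HEARTBEAT_TIMEOUT = 60  # seconds of silence before a host is declared dead

nova = client.Client('admin', 'secret', 'admin',
                     'http://controller:5000/v2.0', service_type='compute')

last_heartbeat = {}  # host name -> last report time, fed by per-host agents

def check_and_evacuate(spare_host):
    """Rebuild instances from any silent host onto spare_host."""
    now = time.time()
    for host, seen in last_heartbeat.items():
        if now - seen > HEARTBEAT_TIMEOUT:
            for server in nova.servers.list(search_opts={'host': host}):
                # Equivalent to a manual "nova evacuate"; assumes CSV-like
                # shared storage so the root disk survives the host failure.
                nova.servers.evacuate(server, spare_host,
                                      on_shared_storage=True)
```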
16:40:49 <alexpilotti> so to recap we have 3 choices:
16:40:58 <alexpilotti> 1) fork and do our cluster
16:41:11 <alexpilotti> 2) a clustering project on top of Nova
16:41:15 * primeministerp doesn't support forks
16:41:16 <alexpilotti> 3) wait for cells
16:41:36 <alexpilotti> primeministerp: it was just for completeness ;-)
16:41:43 <primeministerp> I'll throw a +1 for #3
16:41:47 <primeministerp> alexpilotti: i know
16:42:16 <alexpilotti> any other votes or comments?
16:42:27 <schwicht> I like # 3 best because it seems clean
16:42:35 <primeministerp> ok
16:42:40 <primeministerp> gokrokve: ?
16:43:03 <gokrokve> Number 2 looks like a more general approach and might work not only for Hyper-V
16:43:30 <gokrokve> while 1 and 3 look like workarounds
16:43:57 <russellb> ideally help and not just wait :-)
16:44:15 <primeministerp> russellb: agreed
16:44:19 <alexpilotti> russellb: hehe
16:44:26 <primeministerp> schwicht: <cough>
16:44:43 <primeministerp> ok
16:44:45 <primeministerp> moving on
16:44:54 <alexpilotti> russellb: do you think it's feasible to get it done for Havana?
16:45:04 <russellb> doesn't look like it, nobody is working on it yet
16:45:18 <primeministerp> schwicht: <cough --^>
16:45:25 <alexpilotti> russellb: that was my feeling as well ;-)
16:45:33 <russellb> yep, just an idea so far
16:45:39 <schwicht> primeministerp: I heard the first one - gesundheit
16:45:48 <primeministerp> schwicht: ;)
16:46:00 <alexpilotti> russellb: do you know of anybody else interested in this?
16:46:15 <alexpilotti> russellb: we could set up a cross-driver team, to say so :-)
16:46:19 <russellb> plenty of interest in the idea ... nobody saying they want to help write it
16:46:31 <russellb> ideally anyone interested in the affected systems ... so the VMware sub-team
16:46:48 <gokrokve> We are looking for clustering support in OpenStack.
16:47:07 <alexpilotti> russellb: ok, beside us and VMWare, nobody else?
16:47:19 <alexpilotti> russellb: just to know who I should reach out to
16:47:31 <russellb> openstack-dev list in general
16:47:36 <gokrokve> Mirantis will join development.
16:47:41 <russellb> some core nova folks that need to be involved in the architecture/design of it
16:47:49 <alexpilotti> gokrokve: cool!
16:48:14 <alexpilotti> russellb: I guess I'm going to bug dansmith as usual then :-D
16:48:15 <russellb> those that seem most interested seem to be me, dansmith, and comstud
16:48:52 <alexpilotti> russellb: do you know the IRC nick of the VMware sub-team lead maintainer?
16:48:55 <russellb> but probably mostly from an advisory capacity
16:49:02 <dansmith> hartsocks, IIRC
16:49:04 <russellb> hartsocks: <---
16:49:15 <alexpilotti> dansmith: tx guys
16:49:45 <primeministerp> russellb: thanks
16:49:50 <alexpilotti> maybe he's around and can join us
16:50:10 <primeministerp> alexpilotti: so additional issues to address?
16:50:48 <alexpilotti> I'd love to see if the VMWare guys are around as we have a pretty good group online to talk about this briefly
16:50:54 <alexpilotti> but it doesn't seem so
16:51:01 <primeministerp> alexpilotti: I would reach out via the list
16:51:06 <primeministerp> alexpilotti: as a start
16:51:10 <alexpilotti> cool
16:51:19 <russellb> ping dan wendlant too
16:51:20 <kirankv> VMware sub team meets tomorrow
16:51:27 <primeministerp> kirankv: thdx
16:51:29 <primeministerp> er thx
16:51:31 <alexpilotti> kirankv: tx
16:51:37 <schwicht> primeministerp: a common approach is best for the openstack consumers (like us) as well
16:51:48 <alexpilotti> primeministerp: you wanted to introduce the vhdx subject? :-)
16:51:55 <primeministerp> o
16:51:59 <primeministerp> did i?
16:52:07 <primeministerp> #topic vhdx
16:52:15 <primeministerp> alexpilotti: that work?
16:52:30 * primeministerp might be missing something
16:52:55 <primeministerp> alexpilotti: actually I've forgotten, was there something specific to vhdx we needed to discuss?
16:52:58 <alexpilotti> primeministerp: it was just a follow up on your "thdx" typo :-)
16:53:02 <primeministerp> haha
16:53:03 <primeministerp> ok
16:53:06 <alexpilotti> which had a perfect timing
16:53:15 <kirankv> Vmware subteam meeting time is 1700 UTC
16:53:27 <alexpilotti> we are working on the V2 WMI API support
16:53:33 <alexpilotti> that will unlock VHDX
16:53:46 <alexpilotti> VHDX itself is a fairly easy feature
16:53:46 <primeministerp> alexpilotti: H2 timeframe?
16:53:54 <alexpilotti> primeministerp: sure
16:53:56 <primeministerp> hehe
16:54:06 <alexpilotti> actually most blueprints depend on V2
16:54:33 <alexpilotti> which means that we won't have new features on V1
16:54:43 <alexpilotti> aka 2008 R2
16:54:58 <alexpilotti> I just wanted to make this clear
16:55:07 <alexpilotti> and hear if somebody has different ideas about it
16:55:15 <primeministerp> alexpilotti: I think we've been clear about that for some time
16:55:28 <alexpilotti> cool
16:55:30 <primeministerp> alexpilotti: the platform is 2012
16:55:43 <alexpilotti> cool
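(For context, the V1/V2 split discussed above corresponds to two WMI namespaces on the Hyper-V host; a quick sketch with the Python wmi package, runnable only on a Windows host with the Hyper-V role.)

```python
# The V1 vs V2 APIs discussed above live in different WMI namespaces;
# illustrative only, requires the "wmi" package on a Hyper-V host.
import wmi

# V1 namespace (Windows Server 2008 R2) -- the legacy API, no VHDX:
# conn = wmi.WMI(moniker='//./root/virtualization')

# V2 namespace (Windows Server 2012) -- unlocks VHDX and most of the
# new blueprint work:
conn = wmi.WMI(moniker='//./root/virtualization/v2')
image_svc = conn.Msvm_ImageManagementService()[0]
print(image_svc.ElementName)
```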
16:55:53 <alexpilotti> should we move to the RDP console?
16:56:01 <primeministerp> #topic RDP Console
16:56:36 <alexpilotti> schwicht: are you guys planning to add graphical console support on top of Hyper-V?
16:57:03 <schwicht> this is important to us, let's say
16:57:43 <alexpilotti> ok, I'm just interested to understand the extent of the interest
16:57:43 <primeministerp> I would imagine it's important to anyone planning on using hyperv
16:58:14 <alexpilotti> that was my point as well, until I met a lot of people who simply didn't care :-)
16:58:23 <schwicht> for the product consuming we try to get it into the release, but need to see if the code is solid
16:58:31 <alexpilotti> anyway, from a technical standpoint we are there
16:59:02 <alexpilotti> but it requires a bit of work to add a few simple nova changes
16:59:24 <luis_fdez> alexpilotti, is it feasible for Havana?
16:59:28 <alexpilotti> that will impact all of Nova, not only the Hyper-V driver
16:59:37 <alexpilotti> luis_fdez: technically for sure
16:59:50 <alexpilotti> luis_fdez: all we need is a single REST API call
17:00:08 <alexpilotti> luis_fdez: and rename any reference to VNC in Nova ;-)
17:00:21 <luis_fdez> ok
17:00:26 <alexpilotti> unless we want to add a "get_rdp_console" method
17:00:44 <alexpilotti> which would add on top of get_vnc_console and get_spice_console
17:01:02 <alexpilotti> the interface is getting crowded and IMO needs some cleanup
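(A self-contained sketch of the shape a get_rdp_console method could take alongside get_vnc_console and get_spice_console. The class and field names are assumptions, not Nova's actual contract; Hyper-V does expose VM consoles over RDP on TCP port 2179.)

```python
# Illustrative only: a possible get_rdp_console, mirroring the existing
# get_vnc_console/get_spice_console driver methods.
class ExampleHyperVConsole(object):
    def __init__(self, host_ip, vm_ids):
        self._host_ip = host_ip
        self._vm_ids = vm_ids  # instance name -> Hyper-V VM GUID

    def get_rdp_console(self, instance_name):
        # Hyper-V serves VM consoles over RDP on TCP 2179; the VM GUID
        # tells a console proxy which VM to attach to.
        return {'host': self._host_ip,
                'port': 2179,
                'internal_access_path': self._vm_ids[instance_name]}

# Usage:
console = ExampleHyperVConsole('10.0.0.5', {'instance-0001': 'a1b2c3d4'})
print(console.get_rdp_console('instance-0001'))
```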
17:01:14 <schwicht> alexpilotti: primeministerp: we will follow up off line on the topic, need to check the latest
17:01:24 <primeministerp> schwicht: ok
17:01:26 <alexpilotti> schwicht: sure
17:01:30 <primeministerp> guys we're out of time
17:01:44 <primeministerp> let's end it and make note to pick up the rdp discussion next week
17:01:53 <alexpilotti> schwicht: tx, let me know if you'd like to schedule a meeting for that
17:01:59 <primeministerp> #endmeeting