13:01:08 <claudiub> #startmeeting hyper-v
13:01:08 <openstack> Meeting started Wed Sep 21 13:01:08 2016 UTC and is due to finish in 60 minutes.  The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:12 <openstack> The meeting name has been set to 'hyper_v'
13:01:24 <claudiub> hello hello
13:01:43 <abalutoiu> hello
13:01:46 <claudiub> waiting for a bit, so people can join
13:04:19 <sagar_nikam> Hi All
13:04:36 <claudiub> hello :)
13:04:46 <sagar_nikam> sorry late today...
13:04:53 <claudiub> it's fine :)
13:05:04 <claudiub> anyone else joining us?
13:05:08 <atuvenie> hi all, sorry, internet is a little slow today
13:05:20 <sagar_nikam> we can start
13:05:31 <sagar_nikam> sonu may not join
13:05:36 <claudiub> ok
13:06:00 <claudiub> #topic performance test results
13:06:16 <claudiub> abalutoiu: hellou. :) can you share with us some of your results?
13:06:46 <abalutoiu> claudiub: hello, sure
13:07:42 <abalutoiu> here are some performance comparison results between KVM and Hyper-V (on WS 2012 R2 and on WS 2016 TP5), with all the latest improvements that we've been working on: http://paste.openstack.org/show/fzCXXHcLrk2L0SsrJgx9/
13:08:11 <abalutoiu> the test consists of booting a VM, attaching a volume and deleting the VM
13:08:42 <abalutoiu> 100 total iterations, with 20 iterations in parallel
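(For illustration only — a minimal Python sketch of such a boot / attach-volume / delete loop using openstacksdk. This is not the harness that produced the numbers above, and the cloud, image, flavor and network names are assumptions.)

    import openstack
    from concurrent.futures import ThreadPoolExecutor

    conn = openstack.connect(cloud="devstack")  # cloud name is an assumption

    def boot_attach_delete(i):
        # Boot a small instance and wait for it to become ACTIVE.
        server = conn.create_server(
            name="perf-vm-%d" % i,
            image="cirros",
            flavor="m1.tiny",
            network="private",
            wait=True,
        )
        # Create a 1 GB volume and attach it to the instance.
        volume = conn.create_volume(size=1, name="perf-vol-%d" % i, wait=True)
        conn.attach_volume(server, volume, wait=True)
        # Clean up: detach the volume, then delete the server and the volume.
        conn.detach_volume(server, volume, wait=True)
        conn.delete_server(server.id, wait=True)
        conn.delete_volume(volume.id, wait=True)

    # 100 iterations in total, 20 running in parallel.
    with ThreadPoolExecutor(max_workers=20) as pool:
        list(pool.map(boot_attach_delete, range(100)))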
13:09:50 <sagar_nikam> we are close to kvm... nice
13:10:05 <sagar_nikam> but ... only with TP5 ?
13:10:19 <sagar_nikam> can't we achieve the same with 2012 R2?
13:10:32 <abalutoiu> another test (which includes nova boot, test ssh connection to the VM and delete VM): http://paste.openstack.org/show/iRwQHKku6CCz6PoX0kRi/
13:10:37 <claudiub> i see in the paste there are also 2012 r2 results
13:11:08 <abalutoiu> the host used is written above each table
13:11:17 <claudiub> and i see that we actually perform better than kvm
13:11:18 <sagar_nikam> claudiub: yes... but the results are not as good as kvm or TP5
13:12:16 <claudiub> yeah, by 3 seconds in total, or 1%
13:12:43 <abalutoiu> load durations for the test: 250 sec on WS 2012R2, 244 sec on WS 2016TP5, and 280 sec on KVM
13:14:07 <claudiub> the difference between kvm and hyper-v is even greater on the 2nd paste abalutoiu sent
13:14:08 <sagar_nikam> OK
13:14:14 <sagar_nikam> looks good...
13:14:31 <sagar_nikam> is the new version of pyMI available on pypi?
13:14:55 <claudiub> and the difference between running with the latest pymi patches and without them can be seen
13:15:11 <claudiub> not yet, there are still 2 pull requests that we have to merge on pymi
13:15:29 <sagar_nikam> claudiub: ok.. not an issue
13:15:32 <claudiub> but it'll be released by next week
13:15:53 <sagar_nikam> one question... what were the tenant VMs.... windows or linux ?
13:15:59 * clarkb asks drive by question. Are you booting the same image on both?
13:16:41 <claudiub> i'm assuming it's a cirros image. abalutoiu?
13:16:50 <abalutoiu> yes, it's a cirros image
13:17:04 <sagar_nikam> ok
13:17:37 <abalutoiu> clarkb: yep, it's the same image on both, the only difference is the disk format
13:18:10 <clarkb> abalutoiu: for kvm are you using qcow2? and if so are you booting qcow2 or is nova converting to raw? (I just recently discovered it does this and makes things slow)
13:18:58 <clarkb> (basically if you are going to have nova boot raw it's best to upload raw to glance)
13:19:19 <sagar_nikam> will there be any difference if we use some other image? other than cirros... or will the results be the same
13:20:39 <abalutoiu> I'm using qcow2 for KVM and vhdx for Hyper-V
13:20:51 <abalutoiu> I'm not sure if nova is converting the image to raw
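(Side note: the same base image can be kept in both formats by converting it with qemu-img; the file names here are illustrative.)

    qemu-img convert -f qcow2 -O vhdx cirros.qcow2 cirros.vhdx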
13:22:57 <claudiub> clarkb: i assume there is a nova.conf option for this behaviour, right?
13:23:32 <clarkb> claudiub: there are 2! this is why we were so confused about it and it took a while. But yes, you have to set use_raw_images to false and the libvirt image type to qcow2, iirc
13:24:01 <clarkb> abalutoiu: but with infracloud we saw it was causing long boot times because the qcow2 was copied to the compute host then converted to raw before being booted
13:24:10 <clarkb> so we turned it off and just boot off the qcow2 now
13:25:06 <clarkb> (anyways I just learned this stuff yesterday and saw it might be relevant to your performance discussion earlier since we were also tuning boot times)
13:25:42 <claudiub> it seems force_raw_images is set to True by default
13:26:14 <clarkb> claudiub: yup, and it's a fine default if you also upload raw images to glance
13:26:19 <claudiub> clarkb: yeah, thanks for the tips. :)
13:26:50 <claudiub> it'll be something we'll take into account next time we do some performance tests
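(For reference, the two nova.conf options discussed above, set so that the libvirt/KVM driver boots the qcow2 image directly instead of converting it to raw — a sketch of the relevant settings, not a general recommendation:)

    [DEFAULT]
    force_raw_images = False

    [libvirt]
    images_type = qcow2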
13:28:16 <claudiub> clarkb: also, do you know about any specific kvm performance fine-tuning we can do? it would be helpful to know we've applied all the performance best practices for both kvm and hyper-v when we compare them.
13:29:31 <clarkb> claudiub: the only other thing we have done is set the writeback setting to unsafe because all of our instances are ephemeral for testing
13:29:37 <clarkb> this gives us better IO performance
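(For reference, the cache setting clarkb describes most likely maps to nova's libvirt disk_cachemodes option; a sketch, suitable only for disposable test instances, since "unsafe" ignores guest flush requests:)

    [libvirt]
    disk_cachemodes = file=unsafe,block=unsafe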
13:31:13 <claudiub> i see. thanks for your input!
13:32:28 <claudiub> so, it seems we'll have to apply those suggestions next time we do some testing
13:32:56 <abalutoiu> yep, thanks clarkb for the tips
13:34:07 <lpetrut> hey guys. I think that the force_raw_images config option would not affect those results as in each scenario the image was already cached, correct me if I'm wrong Alin
13:35:14 <clarkb> lpetrut: if its already cached on all compute hosts it shouldn't affect it
13:35:40 <clarkb> (which happens by booting an instance of that image on all compute hosts prior)
13:36:17 <claudiub> i see. so the impact is quite small anyways
13:36:50 <claudiub> still worth setting the config option to false, imo
13:37:08 <clarkb> or use a raw image to start, or make sure it's cached across the board, or something.
13:37:17 <abalutoiu> we used only one host for each hypervisor, and the image was already cached before running the tests
13:38:36 <abalutoiu> please let us know if you find any other performance tips&tricks on KVM if you don't mind clarkb
13:38:45 <clarkb> can do
13:39:11 <claudiub> thanks clarkb!
13:39:42 <claudiub> #topic release status
13:40:16 <claudiub> soo, newton is going to be released in 2 weeks, approximately
13:40:32 <claudiub> we discovered an issue with ceilometer-polling on windows
13:40:51 <claudiub> there is a new dependency in ceilometer, cotyledon, which crashes the agent
13:41:15 <claudiub> will have to send some pull requests to fix that
13:41:44 <claudiub> other than that, nothing new
13:42:29 <claudiub> #topic open discussion
13:42:57 <sagar_nikam> what happened to monasca patches ?
13:43:13 <claudiub> for the following weeks, testing will be our priority
13:43:33 <sagar_nikam> ok
13:43:50 <sagar_nikam> will we miss newton ? for monasca
13:43:53 <claudiub> we have a couple more common testing scenarios in mind, regarding performance
13:44:27 <claudiub> like live / cold migration, cold resize, different volume types, etc.
13:45:08 <claudiub> sagar_nikam: unfortunately, nothing. it got a bit too late for them. :(
13:45:13 <claudiub> they're still up for review
13:45:28 <sagar_nikam> ok
13:45:37 <sagar_nikam> hopefully O will have it
13:45:43 <claudiub> hope so too.
13:45:55 <claudiub> other than that, we've reproposed blueprints to Ocata
13:45:56 <claudiub> for nova
13:46:10 <sagar_nikam> ok
13:46:21 <claudiub> currently, the hyper-v UEFI VMs spec and the os-brick in nova spec are approved
13:47:13 <claudiub> vNUMA instances and hyper-v storage qos specs are up. since they've been previously approved, they'll get in fast.
13:47:44 <claudiub> and there are a few other specless blueprints that should be reapproved, like the Hyper-V OVS vif plug blueprint
13:47:56 <claudiub> all patches have already been rebased to master, ready for review.
13:48:35 <sagar_nikam> ok
13:48:53 <claudiub> the Hyper-V PCI passthrough spec should be up in the following weeks, along with the code.
13:50:02 <claudiub> that's pretty much all that comes to mind.
13:50:31 <sagar_nikam> any further news on magnum support ?
13:50:45 <sagar_nikam> since we last discussed few weeks back
13:51:02 <claudiub> atuvenie: hi. :)
13:51:41 <atuvenie> well, we won't make it in newton, obviously
13:51:56 <atuvenie> the work on k8s is kind of slow at the moment, but ongoing
13:52:15 <sagar_nikam> ok
13:52:39 <atuvenie> we have had some problems with networking on TP5 for some reason, I'm trying to figure out what/if something has changed in their model
13:55:15 <sagar_nikam> ok
13:55:32 <claudiub> sagar_nikam: any news from your side?
13:55:33 <sagar_nikam> claudiub: nothing much from my side...
13:55:47 <sagar_nikam> no... the scale tests will take longer
13:55:54 <sagar_nikam> not getting a free slot
13:56:30 <sagar_nikam> not sure when we will get it..
13:56:43 <sagar_nikam> we are planning to run it on Mitaka
13:57:45 <claudiub> i see
13:57:59 <claudiub> well, let us know when you start. :)
13:58:09 <claudiub> other than that, I think we can end the meeting here
13:58:15 <sagar_nikam> sure
13:58:22 <sagar_nikam> yes... we can end
13:58:27 <sagar_nikam> thank you ....
13:58:27 <claudiub> thanks folks for joining! see you next week!
13:58:37 <claudiub> #endmeeting