13:02:34 #startmeeting hyper-v
13:02:34 Meeting started Wed Mar 9 13:02:34 2016 UTC and is due to finish in 60 minutes. The chair is alexpilotti. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02:38 The meeting name has been set to 'hyper_v'
13:02:41 o/
13:02:41 Hi
13:02:52 Hi
13:03:32 #topic M3
13:03:57 we are marching towards the Mitaka release
13:04:12 so it's time to wrap up, since M3 got released last week
13:04:58 we are moving blueprint code to compute-hyperv
13:05:17 so from now till the Newton window opens, there's nothing we can do on the Nova side
13:05:33 except possible bugs that may arise in the meantime
13:06:01 sagar_nikam: any particular request?
13:06:28 not as of now on Mitaka, at least for nova
13:06:41 kvinod: anything on neutron?
13:07:07 any pending bug or issue that we didn't discuss and you guys would like to discuss now?
13:07:23 alexpilotti: i have some questions on nova-hyperv in general, not related to Mitaka, we can discuss later
13:07:51 Hi guys
13:08:04 sagar_nikam: can you do a quick list of the topics?
13:08:34 Hi all, while sagar_nikam is working on that list let me introduce myself in a sentence
13:08:43 i have one topic on nova-compute using certs to connect to keystone
13:08:59 hi domi_, welcome!
13:09:12 kvinod has more topics for networking
13:09:36 kvinod doesn't seem to be online now
13:09:37 I work for a new company with just a couple of people located in Budapest, Hungary, working on Hyper-V based OpenStack public and private cloud. Alessandro and CloudBase helped us a lot along the way, but we are still not at production yet, although we are getting closer every day.
13:10:47 domi_: thanks for joining!
13:11:15 do you have any particular topic that you'd like to discuss here?
13:11:19 welcome. :)
13:11:20 I'm glad to be here, I'll probably just be a silent observer first, and see if there is anything I can chime in on :)
13:11:43 we're setting up the agenda now, so if you have any topic it's a good time :)
13:11:50 alexpilotti: I guess the issue we were talking about on skype isn't interesting for anyone here, so nothing I think
13:11:56 cool
13:12:10 next topic then
13:12:23 #topic Rally tests
13:12:42 sagar_nikam: did you guys manage to do some more performance testing?
13:13:01 that is in plan, but not done yet
13:13:42 we started packaging Mitaka M3, so we will do the usual Rally runs probably next week
13:13:46 kvinod's team will do it
13:14:13 OVS 2.5 is also getting packaged, so we will do a separate run including that as well
13:14:48 not much more to add for now
13:14:57 next topic, then
13:15:05 #topic hyper-v cluster
13:15:18 Mitaka patches are close to merge
13:15:40 in compute-hyperv, necessarily
13:16:50 following that, we'll do the usual round of Tempest and Rally tests as well to see how they perform
13:17:08 #topic PyMI
13:17:27 moving on to PyMI
13:17:57 we're adding a few patches for improved compatibility with the old WMI module
13:18:18 compatibility is ensured until the current release
13:18:37 starting from Newton, we will target PyMI only
13:18:44 alexpilotti: we are planning to use PyMI with liberty
13:18:50 great
13:19:03 our initial perf tests gave us good numbers
13:19:19 but let us know whenever you add anything new in PyMI
13:19:21 since the goal is to be a pure drop-in replacement, I don't expect issues
13:20:03 we hit one on a Liberty backport (thanks domi_ for reporting it), where associator references have different types between WMI and PyMI
13:20:33 this is getting fixed, but has no impact on the current upstream codebase if PyMI is used
13:20:35 is that the only issue for using it with liberty?
13:21:03 yes, it's the only open bug in PyMI
13:21:12 ok
13:21:15 it's also a very small fix
13:21:27 your team tested PyMI with liberty?
13:21:39 we're now testing the patch to make sure everything works
13:21:41 that's right, although it is possible that the code has more similar issues that we just haven't hit yet
13:21:59 but according to you PyMI should be the answer to all of those
13:22:19 domi_: the reason for those issues is that we test only with PyMI now
13:22:29 so they happened when using the old WMI
13:22:36 I see, that makes sense
13:22:49 which, although deprecated, is still supported on Liberty
13:23:19 our next Liberty release (12.0.2) will include PyMI by default
13:23:32 alexpilotti: will it be possible for your team to run tests on liberty using PyMI, just to check everything works fine
13:23:38 while 12.0.0 and 12.0.1 still come with the old WMI
13:23:53 sagar_nikam: the CI uses PyMI
13:24:10 does CI run on liberty as well now?
13:24:11 so the issue is in the other case: if you use the old WMI :)
13:24:23 sagar_nikam: since a while
13:24:26 ok
13:24:34 it runs on all supported stable branches
13:25:21 but again, it runs only with PyMI, so if you happen to use the old WMI there might be issues that the CI cannot discover
13:25:33 alexpilotti: sorry to put this in here, but is there any documentation on what kind of testing is being done by the CI systems? Because it feels like there is a little bit of a gap between real life scenarios and CI scenarios
13:26:25 domi_: all the CI automation scripts are published and documented
13:26:40 domi_: it's all on github
13:26:45 all right, I'll sniff through them then, thanks
13:27:18 for every patch, you can also look through the results of the CI run
13:27:33 which include all the devstack and hyper-v configurations, logs, etc
13:27:45 that said, OpenStack can be deployed in a gazillion ways
13:28:13 so there are of course scenarios that are not present in the CI runs
13:28:21 old WMI being one of those
13:28:57 any other questions here?
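
To illustrate the drop-in point above: with PyMI installed, code written against the old WMI module keeps working unchanged, since PyMI ships a wmi compatibility wrapper backed by the native MI API. A minimal sketch, illustrative only and not taken from os-win; the namespace and class are the standard Hyper-V WMI ones, error handling omitted:

import wmi  # resolved by PyMI's compatibility wrapper when PyMI is installed

# Same call pattern used with the old WMI module, now served by PyMI.
conn = wmi.WMI(moniker='//./root/virtualization/v2')
for vm in conn.Msvm_ComputerSystem():
    print(vm.ElementName)
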
13:29:39 ok, timing out :)
13:29:57 #topic local storage on CSV / SMB3
13:30:28 until now we always said that having local storage on common storage is not supported
13:30:38 there are some specific reasons here:
13:30:59 1) all nodes will report the same storage availability
13:31:22 so if we have 4 nodes, and the common storage has 100GB free
13:31:36 the total reported by the nodes will be 100*4 GB
13:32:08 this will of course confuse the scheduler, resulting in involuntary overcommitment :)
13:32:31 this affects only the cluster driver?
13:33:34 not only
13:34:00 if you have c:\Openstack mounted on CSV or SMB3
13:34:18 ok
13:34:33 this also includes S2D
13:35:02 so having this scenario working is very useful for hyper-converged deployments (as we do)
13:35:10 what needs to be done:
13:35:14 it could be worked around by disabling the disk filter of the scheduler, so overcommitment is allowed, but in the end that leads to the hypervs crashing etc.
13:35:42 domi_: there's an actual solution
13:36:07 there's a Nova BP exactly for this use case
13:36:24 jaypipes is working on it if I'm not mistaken
13:36:40 claudiub: do you have the BP link at hand by any chance?
13:36:58 eh, sure, let me fetch it
13:37:03 this will target Newton at best of course
13:37:18 alexpilotti: oh okay. Is this related to the problem/bug that nova reports remote volumes as local disk space used?
13:37:49 yes, because they are not remote: they are still local, but mounted remotely :)
13:37:57 https://review.openstack.org/#/c/253187/
13:38:02 so it's not actually a bug
13:38:21 to be clear: local storage mounted remotely is NOT supported ATM
13:38:32 alexpilotti: talking about when they are specified as volumes, not local disk (e.g. iSCSI volumes)
13:38:37 the current discussion is: what to do to have it supported
13:39:08 okay :)
13:39:45 looking at the BP, there are already a few WiP patches, it's not a trivial one: #link https://blueprints.launchpad.net/nova/+spec/generic-resource-pools
13:40:11 alexpilotti: me, cdent, bauzas, edleafe and others are all working on it :) but generic-resource-pools won't be done until Newton.
13:40:41 jaypipes: thanks for confirming it!
13:41:13 we'd be happy to help, as you know it's an important one for hyper-v as well
13:41:13 alexpilotti: no worries :) here is the blueprint: https://review.openstack.org/#/c/253187/
13:42:18 so, in the meantime some hacky custom filters can do the trick of course
13:42:32 meantime = Liberty, Mitaka :)
13:43:01 getting back to the Hyper-V driver, there's one more thing that needs to be done:
13:43:47 resize / cold migration must support shared storage
13:44:53 currently resize uses logic taken from the libvirt driver, where in case of resize the VM is stopped, copied to the target, recreated and started
13:45:20 when source and target are the same, data is simply moved to a temp location and the VM is restarted
13:45:49 we just need to add a check so that, in case of shared storage, we treat the remote case as the local case, that's it
13:46:39 alexpilotti: we would very much like that :)
13:46:53 to do that we can check at the host level if the path points to the same storage and act accordingly
13:47:22 this is a patch that will be done for Mitaka and will be easy to backport
13:48:13 it won't make it upstream before Newton, but it will be part of compute-hyperv (kilo+)
13:48:45 questions?
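
As an illustration of the "check at the host level if the path points to the same storage" idea above (a sketch, not the actual compute-hyperv patch): a resize can compare the volume mount points of the source and target instance directories and treat the remote case like the local one when they match. The helper calls the Win32 GetVolumePathNameW function via ctypes; the paths in the usage comment are hypothetical.

import ctypes

def _volume_path(path):
    # Returns the volume mount point a path lives on, e.g.
    # "C:\\ClusterStorage\\Volume1\\" for a CSV-backed directory.
    buf = ctypes.create_unicode_buffer(260)
    if not ctypes.windll.kernel32.GetVolumePathNameW(path, buf, 260):
        raise ctypes.WinError()
    return buf.value.rstrip('\\').lower()

def is_same_storage(source_dir, target_dir):
    # If both directories resolve to the same volume (CSV, SMB3 share, S2D),
    # the "remote" resize case can be handled exactly like the local one.
    return _volume_path(source_dir) == _volume_path(target_dir)

# Hypothetical usage:
# is_same_storage(r'C:\ClusterStorage\Volume1\Instances',
#                 r'C:\ClusterStorage\Volume1\Instances\_resize_tmp')
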
13:48:57 we are getting close to the 10' left mark
13:49:05 i have a question on using certs
13:49:20 sure
13:49:30 http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html
13:49:36 for keystone
13:49:52 we can provide cafile
13:49:52 #topic using X509 certificates in nova-compute config
13:50:14 does it work in HyperV
13:50:37 on the controller, we have a .crt file
13:50:40 that code comes straight from the Nova compute manager, aka the common code base
13:50:56 it works on Hyper-V in the same way as, say, libvirt
13:50:57 i mean does it work on the HyperV host
13:51:10 as in, if it works on windows?
13:51:11 .crt needs to be changed to .cer ?
13:51:31 yes, does it work on windows
13:51:38 has it been tested
13:51:47 we have always used only creds
13:51:49 that is standard python code, we never had particular issues
13:51:59 we have never provided the cafile
13:52:03 but it won't use the Windows cert store
13:52:09 ok
13:52:10 you need to provide cert, key and ca
13:52:23 so just copying the .crt file from the controller
13:52:44 onto the hyperv host and then giving that path in nova.conf is sufficient ?
13:52:59 ibalutoiu just did a similar config
13:53:01 on the hyperv host
13:53:14 I'm pretty sure python is wrapping around openssl so it should work fine on any platform
13:53:33 so we don't need to change the crt to a cer file ?
13:53:45 i was reading on the net that a crt file will not work in windows
13:53:53 domi_: not necessarily; in some cases the Python code uses CryptoAPI on Windows, but the usage is transparent
13:53:53 my idea would be no, but that's purely theoretical
13:54:02 but as domi_ mentioned, python may be handling it
13:54:16 ok
13:54:16 we had no issues with it until now
13:54:30 note: crt is a file extension, the file format is either CER or DER
13:54:45 alexpilotti: can you give me some contacts from your team whom i can reach out to, to understand it further
13:54:56 we are almost at the end of this meeting
13:55:05 lpetrut: ?
13:55:15 sure, can you please send an email? I will loop in the engineer working on it
13:55:24 alexpilotti: thanks
13:55:27 np!
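
On the .crt vs .cer question above: the keystone client code goes through Python's standard ssl module, which wraps OpenSSL on Windows as well, so a PEM-encoded CA bundle copied from the controller is loaded by file path regardless of its extension. A minimal sketch with a placeholder path (nova.conf would point its CA file option at the same file):

import ssl

# Placeholder path: the CA bundle copied from the controller to the Hyper-V host.
ctx = ssl.create_default_context(cafile=r'C:\OpenStack\etc\keystone-ca.crt')
print(ctx.cert_store_stats())  # a non-zero cert count confirms the PEM file was parsed
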
13:56:01 moving to the next topic if we are done on this
13:56:08 I don't expect any surprises (everything worked so far with X509), but just in case we will document any differences compared to the Linux case
13:56:22 sure, 4' left
13:56:28 freerdp
13:56:30 #topic open discussion
13:56:45 i could not find anything on the proxy which you mentioned last week
13:56:47 so freerdp it is :)
13:57:11 the proxy is not part of freerdp-webconnect
13:57:13 the novnc proxy does the required proxying for libvirt
13:57:26 yes, since the proxy is not part of freerdp-webconnect
13:57:39 wondering how it needs to be done
13:57:42 the novnc equivalent is wsgate (freerdp-webconnect)
13:57:46 in case of novnc
13:57:51 we have the novnc proxy
13:58:02 it's the same
13:58:08 we don't have a similar freerdp-webconnect proxy
13:58:13 what I was referring to is to use a reverse proxy in front
13:58:30 you need something that can reverse proxy web sockets
13:58:43 alexpilotti: ok, you meant a reverse proxy on the controller
13:58:53 not anything related to freerdp-webconnect
13:58:56 proxy
13:59:10 controller or external load balancers, depends on your deployment
13:59:16 ok
13:59:27 let me check
13:59:42 i may try to use windows NLB
13:59:46 it'd be: rev_proxy -> wsgate -> hyper-v
14:00:02 by installing the freerdp MSI on multiple windows machines
14:00:08 all behind NLB
14:00:19 so everything in windows
14:00:21 will work well
14:00:31 only thing: you need client session affinity as the websocket needs to stay connected
14:00:45 time's over
14:00:46 ok
14:00:52 thanks guys for joining!
14:00:57 #endmeeting
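
For the rev_proxy -> wsgate -> hyper-v layout discussed above, any reverse proxy that can upgrade WebSocket connections and keep client session affinity will do. A minimal nginx sketch under those assumptions (addresses and the wsgate port are hypothetical; ip_hash provides the session affinity mentioned at the end):

upstream wsgate_backend {
    ip_hash;                    # client session affinity for the websocket
    server 192.0.2.11:8000;     # wsgate / freerdp-webconnect instance (hypothetical)
    server 192.0.2.12:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://wsgate_backend;
        proxy_http_version 1.1;                    # required for websocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
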