16:00:30 #startmeeting hyper-v
16:00:31 Meeting started Tue Jul 9 16:00:30 2013 UTC. The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:35 The meeting name has been set to 'hyper_v'
16:00:41 hi everyone
16:00:43 hi
16:00:44 it's been a while
16:00:48 hey luis
16:01:21 let's wait for the others
16:01:26 I don't see alex here yet
16:01:58 ok
16:02:04 luis_fdez: so let's start w/ the puppet modules while waiting
16:02:08 ok
16:02:11 #topic hyper-v puppet modules
16:02:18 primeministerp: hi everyone
16:02:33 have you looked at the last pull request?
16:02:33 i renamed the module and merged your pull requests last night
16:02:49 luis_fdez: I haven't looked since last night
16:02:56 luis_fdez: you had 2
16:03:02 luis_fdez: i merged them both
16:03:09 the changes to the vswitch and the rename
16:03:14 yes
16:03:15 for module conformance
16:03:21 I also renamed the project
16:03:23 to match
16:03:25 perfect
16:03:36 luis_fdez: so go ahead and add your other changes
16:03:51 schwicht: Hi frank
16:04:04 ociuhandu: hi tavi
16:04:04 ok, one of the pull requests is just a refactoring, creating a new hyper-v.pp class
16:04:08 hi guys
16:04:13 luis_fdez: perfect
16:04:22 to extract the installation and configuration of Hyper-V... it can be the future init.pp of the hyper-v module
16:04:24 luis_fdez: i was thinking we need to move some into a general windows
16:04:30 class as well
16:04:42 sorry, Tavi and I were in a meeting
16:04:49 alexpilotti: no problem
16:04:51 the other one is a proof of concept, hehe, of how the nova_config type looks in the puppetlabs nova module... but it adds one dependency
16:04:57 Hello!
16:05:01 hi all
16:05:16 luis_fdez: I need to add a Puppetfile for dependencies
16:05:21 ((last time we talked about it I was wrong saying that it won't add a new dependency))
16:05:40 luis_fdez: I don't mind adding dependencies
16:05:49 also..
I extracted windows_feature as a separate define
16:05:53 then we can reuse it
16:06:08 to make the code cleaner and also follow the style rules
16:06:15 luis_fdez: that works too, because if using Hyper-V Server the role is already there
16:06:36 luis_fdez: perfect
16:06:58 luis_fdez: is there anything else we need to discuss?
16:07:05 now... I'm looking at the download question and... open to new proposals... can I give you a hand with something related to Python?
16:07:51 luis_fdez: any BP you'd like to work on? :-)
16:08:15 luis_fdez: it needs to be there
16:08:15 I'll take a look... for now... I'm a Puppet man... I have to revisit my compute driver profile :)
16:08:19 at least for the start
16:08:21 or
16:08:25 we have it as a seed
16:08:27 also
16:08:33 once we move to all source
16:08:41 we won't necessarily need it
16:08:45 bc we can move to a git pull
16:08:50 for all that
16:08:51 however
16:08:59 if we're going to use the public bins
16:09:19 we need to have a way to pull them
16:10:10 that's my 2c
16:10:19 yep, I think for the start the public bins are the best option
16:10:34 ok
16:10:45 let's move on, topic-wise
16:10:49 do you have changes to be committed or can I change things on the Python part?
16:10:49 alexpilotti: ready
16:11:00 luis_fdez: no, I've been working on the provisioning side
16:11:04 had some stuff to clean up
16:11:04 sure
16:11:12 alexpilotti: where do we want to start?
16:11:19 wmiv2?
16:11:29 yep
16:11:37 #topic wmiv2
16:11:53 so 2012R2 Preview is publicly available
16:11:59 haha
16:12:04 why yes it is
16:12:10 and the V1 namespace is gone :-)
16:12:20 why yes it is
16:12:22 ;)
16:12:32 so it's time to do some testing of Havana on 2012R2
16:12:58 to get there, the first BP that we are releasing this week is the WMI V2 one
16:13:01 is the wmiv2 code upstream?
16:13:08 perfect
16:13:09 primeministerp: ?
16:13:13 ok
16:13:17 you answered my question
16:13:43 thanks to the Grizzly refactoring it was a fairly contained piece of work
16:14:13 a factory instantiates the relevant utils classes based on the OS: < 2012: V1, >= 2012: V2
16:14:37 a separate patchset will be released for the Neutron agent
16:14:42 awesome
16:14:52 nice work
16:14:55 in the same timeframe?
16:14:58 once that's in place, we'll also need to update the installer
16:15:16 I'd love to get both (Nova, Quantum) in for H2
16:15:39 ok
16:15:47 that would be great
16:15:49 we'll need a lot of help in testing
16:15:57 schwicht: .....
16:16:23 schwicht: will your folks be able to take a look?
16:16:25 primeministerp: we'll need to chat offline
16:16:30 schwicht: np
16:16:32 one of the reasons I want to have those bits out early is that they replace the layer that interacts with the OS
16:16:55 which cannot be tested with unit testing, only system / integration testing
16:17:12 alexpilotti: i'll be able to try them tomorrow
16:17:14 and since we need a CI for that, the only alternative is having some hands doing it ;-)
16:17:19 is the testing the WMIv2 testing?
16:17:27 alexpilotti: yes I know
16:17:28 schwicht: yes
16:17:50 schwicht: basically doing a lot of regression testing
16:17:55 we will have both functional (IBM-speak: IVT) and SVT testing for the base stuff
16:18:14 schwicht: great, tx!
16:18:43 however we are investigating the scope of the testing atm
16:18:57 gotcha
16:19:39 so next topic?
16:19:47 Dynamic memory
16:19:50 sure
16:19:56 #topic dynamic memory
16:20:23 We have this patch ready as well, I'm reviewing it internally, but it's basically ready for public review
16:20:25 alexpilotti: i'm assuming you've completed it as well?
16:20:43 we have 2 new options:
16:20:57 one to enable / disable the feature
16:21:06 and one for the overcommit ratio
16:21:28 for the second one, we just reused the option coming from the scheduler filter
16:21:45 is the ratio part of the flavor?
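The OS-version-based factory alexpilotti described earlier (V1 utils for hosts older than Windows Server 2012, V2 utils from 2012 on) can be sketched roughly as below. The class and function names are illustrative, not the actual nova code; the WMI namespace paths are the standard Hyper-V V1/V2 ones, and 2012 corresponding to kernel version 6.2 is the assumption driving the check.

```python
# Hypothetical sketch of a version-based utils factory, loosely modeled on
# the approach described above; names are illustrative, not real nova code.

class VMUtils:
    """Wraps the legacy (V1) Hyper-V WMI namespace, removed in 2012 R2."""
    wmi_namespace = 'root\\virtualization'

class VMUtilsV2(VMUtils):
    """Wraps the V2 namespace, available from Windows Server 2012 on."""
    wmi_namespace = 'root\\virtualization\\v2'

def get_vmutils(os_version):
    """Return the utils class matching the host OS version.

    os_version is a (major, minor) tuple, e.g. (6, 2) for Server 2012.
    """
    # Windows Server 2012 reports kernel version 6.2; older hosts get V1.
    if os_version >= (6, 2):
        return VMUtilsV2()
    return VMUtils()

print(get_vmutils((6, 1)).wmi_namespace)  # 2008 R2 -> root\virtualization
print(get_vmutils((6, 2)).wmi_namespace)  # 2012    -> root\virtualization\v2
```

Because the V1 namespace is gone in 2012 R2 (as noted above), selecting the class in one factory keeps the rest of the driver unaware of which namespace it is talking to.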
16:22:00 no, it's a configuration option in the nova scheduler
16:22:06 oh
16:22:07 defaults to 1.5
16:22:17 so if the host has 100GB memory
16:22:22 understood
16:22:23 it overcommits to 150GB
16:22:33 I was hoping we could do that for each instance
16:22:33 on Hyper-V this was not possible before
16:22:51 and that I thought we would drive with a flavor attribute
16:23:06 as w/o dynamic memory there's no ballooning and the mem is allocated 1:1
16:23:12 QoS (and overcommit) are instance attributes
16:23:28 alexpilotti: understood
16:23:35 we do that on other hypervisors ...
16:24:14 alexpilotti: anything else on the topic?
16:24:16 the reason for reusing the same option is that they have to match anyway
16:24:26 not really
16:24:37 only one question
16:24:48 same-page sharing is not a supported feature, right?
16:24:58 should we default to true or false on enabling this feature?
16:25:08 schwicht: not on Hyper-V
16:25:11 k
16:25:21 schwicht: it's not efficient with SLAT
16:25:41 schwicht: unlike e.g. ESXi
16:25:45 yep
16:25:53 I was thinking of KSM ..
16:25:55 but that is ok
16:26:12 openstack cannot offer more than the base hypervisor
16:26:17 can you guys please vote on the above question? :-)
16:26:28 > should we default to true or false on enabling this feature?
16:26:37 I would say false
16:26:50 schwicht: ?
16:27:10 pnavarro is not joining us today?
16:27:10 VMware uses false ...
16:27:34 schwicht: cool, consistency is important
16:27:43 I vote for false as well
16:27:54 It's a risky feature
16:28:05 agreed
16:28:08 what happens on overcommit issues?
16:28:10 I prefer the deployer to know what he / she is doing
16:28:12 halt the VM?
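The overcommit arithmetic discussed above (per-host scheduler ratio, default 1.5, so 100GB of physical memory schedules as 150GB) can be made concrete with a small worked example. The helper function is illustrative, not actual nova code; only the ratio semantics come from the discussion.

```python
# Worked example of the scheduler-style RAM overcommit described above.
# The helper is illustrative; the 1.5 default mirrors the value mentioned
# in the meeting.

def effective_ram_mb(physical_ram_mb, ram_allocation_ratio=1.5):
    """RAM the scheduler considers available on a host, given the ratio.

    This is a per-host configuration value, not a per-flavor attribute.
    """
    return int(physical_ram_mb * ram_allocation_ratio)

# A host with 100 GB of physical memory overcommits to 150 GB at the
# default ratio of 1.5.
print(effective_ram_mb(100 * 1024))  # -> 153600 MB, i.e. 150 GB
```

This also illustrates why the dynamic memory patch reuses the scheduler's ratio: if the driver and the scheduler disagreed on the ratio, the scheduler would place instances the host cannot actually back.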
16:28:24 schwicht: yes
16:28:38 schwicht: which usually means that a lot of VMs get stopped
16:28:46 oh ok
16:28:51 there's also a "safety buffer" option
16:28:52 I thought the biggest gets suspended
16:29:08 the one requesting the memory gets suspended
16:29:35 the safety buffer makes sure that an extra memory percentage is available (e.g. 20%)
16:30:07 so a new VM cannot be started if there's less than the expected memory
16:30:40 ok, that's it on the topic on my side
16:31:02 ok
16:31:10 anything on the RDP side of things?
16:31:38 not yet, we put the WMI V2 work first in priority
16:31:51 I'll probably send in a patch for RDP as well for H2
16:32:10 ok
16:32:13 knowing that the review will take some time
16:32:58 I'd like to discuss whether or not we should support the serial console
16:33:23 oh yes
16:33:56 #topic serial console
16:34:00 alexpilotti: I like the serial console a lot
16:34:02 OpenStack at the moment has only the option of getting the output
16:34:13 alexpilotti: you want to describe it?
16:34:13 r/o?
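The "safety buffer" admission check described above (a new VM cannot start unless an extra memory percentage, e.g. 20%, remains available) might look like the following in outline. This is one plausible interpretation of the behavior, with made-up function and parameter names; the 20% figure is the example value from the discussion.

```python
# Hypothetical sketch of the "safety buffer" check described above:
# admit a new VM only if a configurable extra percentage of its requested
# memory would remain free after placement. Names are illustrative.

def can_start_vm(free_mb, requested_mb, buffer_percent=20):
    """Return True if the host can start a VM of requested_mb megabytes
    while keeping buffer_percent of that request free as a safety margin."""
    reserve_mb = requested_mb * buffer_percent / 100.0
    return free_mb >= requested_mb + reserve_mb

print(can_start_vm(free_mb=2500, requested_mb=2048))  # True:  2500 >= 2457.6
print(can_start_vm(free_mb=2100, requested_mb=2048))  # False: needs ~2458 free
```

The point of the buffer is the failure mode mentioned just above: without headroom, dynamic memory pressure can suspend the VM that asks for more memory.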
16:34:20 schwicht: yep
16:34:32 we use it r/w a lot for troubleshooting
16:34:40 yep
16:34:42 both on Linux and on Windows
16:34:59 on Windows we created a service that provides a command prompt
16:34:59 logging boot issues is cool, messing with the network etc
16:35:28 primeministerp: the OVS issue we usually debug with a serial console
16:35:32 schwicht: yep
16:35:43 schwicht: ;)
16:35:54 on Windows we use it to set the user name, reboot, set network details
16:36:07 diskpart for resizing the partitions, etc
16:36:29 enabling RDP if it was not enabled
16:36:53 so as long as openstack does not support r/w, it is merely a debugging / logging tool
16:37:03 and for that, not as important for my use cases
16:37:06 It'd be great to have it in OpenStack with an HTML5 WebSocket-based VT100 emulator :-)
16:37:15 since I can attach a console on the Hyper-V host
16:37:49 schwicht: yes, we have a simple PowerShell script that attaches it and starts PuTTY
16:38:33 schwicht: but the logging part is already cool to detect issues with cloud-init, to detect why e.g. the instance does not get SSH connections
16:38:43 yes it is
16:38:55 (we SOL-enable all physical hosts for the same reason)
16:39:20 ok, I wanted to introduce the topic, I have a working proto, just need to file a BP and polish up the patch
16:39:50 nice
16:39:52 well, good work
16:40:42 cool, tx
16:41:11 I'm running a bit late, we organized the second Python event in Timisoara
16:41:14 ok
16:41:18 let's finish up
16:41:27 is there anything else we're missing?
16:42:07 We have issues with OVS on RDO
16:42:25 but let's discuss this next time once we have a solution :-)
16:42:26 schwicht: this is what we were discussing earlier
16:42:33 alexpilotti: can you describe it?
16:42:40 very simple:
16:42:44 alexpilotti: do you have a pointer with more details?
16:42:58 not yet, it came out this Sunday
16:43:06 in brief:
16:43:13 multinode setup:
16:43:33 controller, network, KVM compute and Hyper-V compute
16:43:52 based on CentOS 6.4
16:44:14 everything works except networking
16:44:16 802.1Q?
16:44:29 anything, VLAN or flat
16:44:55 what physical ethernet adapter? what switch?
16:45:00 looks like either the datapaths or the flows are not working
16:45:04 OVS
16:45:17 no physical switches
16:45:19 the physical switch between the KVM and the Hyper-V node
16:45:56 could be virtual switches on ESXi / Fusion / Workstation (testing)
16:46:03 ah ok
16:46:08 or any L2 switch when testing a flat network config
16:46:09 stacked ...
16:46:12 yep
16:46:38 do you use portgroup 4095 on ESXi?
16:46:40 at the beginning I was thinking about an issue due to the stacking
16:47:01 nope
16:47:20 as I was saying, the issue can be reproduced even with flat networking
16:47:25 we probably should move that off the meeting discussion
16:47:46 I can tcpdump the traffic
16:47:54 at the ethX level
16:48:17 it's simply not getting forwarded to the proper bridge / tap device
16:48:30 (in Open vSwitch)
16:48:58 I'm gonna get in touch today with the RDO guys and see
16:49:15 ok guys, I really have to run unfortunately
16:49:19 are your br-eth and br-int bridges up?
16:49:19 ok
16:49:20 Ah, one more thing
16:49:27 schwicht: yep
16:49:45 This week we start committing the Crowbar support for Hyper-V :-)
16:50:26 awesome!
16:50:59 so
16:51:46 anyone have anything else
16:51:49 to add?
16:52:10 if not, I'm going to end it.
16:52:29 thanks alexpilotti for the updates
16:52:37 #endmeeting