13:00:29 #startmeeting hyper-v
13:00:30 Meeting started Wed Nov 16 13:00:29 2016 UTC and is due to finish in 60 minutes. The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:33 The meeting name has been set to 'hyper_v'
13:00:37 hello
13:01:15 anyone here for the meeting?
13:04:28 Hi
13:04:36 hello :)
13:04:53 was waiting for more people to join
13:05:01 sorry, a bit late... my network went down
13:05:08 sonu will join today
13:05:21 cool, shall we wait for him?
13:05:54 we can start
13:06:05 ok
13:06:12 he has some networking discussion... we can do it when he joins
13:06:17 #topic nova status
13:06:31 so, the first ocata milestone is tomorrow
13:06:53 which means spec freeze. that means we can only propose new features until then.
13:07:26 the shielded VM spec is still up, unfortunately
13:07:48 and I'd also like to get the hyper-v nested virtualization blueprint approved
13:08:04 i'll have to send a POC patch first, though.
13:08:15 I've already sent the patch needed for os-win
13:08:27 ok
13:08:28 currently writing the nova bits
13:09:04 from the last summit discussion, the hyper-v nested virtualization blueprint doesn't need a spec
13:09:10 so, yay. :)
13:09:16 good
13:10:18 regarding the ovs vif plug patch, we're currently setting up the hyper-v CI to run with ovs and neutron-ovs-agent instead of neutron-hyperv-agent
13:11:03 once that's done, we'll post some CI results on the patch, and ping the nova cores about it
13:11:16 ok
13:11:59 there were some BPs you/cloudbase proposed for the NFV use case
13:12:11 in the last cycle... what happened to those?
13:12:15 regarding os-brick in the hyper-v driver (+ FC), it got some core review comments, which were addressed; waiting for more reviews
13:12:42 ok... good
13:12:46 but this week I'm not expecting a lot of reviews, since the cores are concentrating on blueprint/spec reviews, as the deadline is tomorrow
13:12:48 hopefully FC gets merged soon
13:12:55 yep
13:13:21 sagar_nikam: regarding your question, there is the hyper-v vNUMA placement blueprint
13:13:56 which we sent for review some time ago, and it has been in compute-hyperv since mitaka, or liberty, if my memory doesn't fail
13:14:01 correct... this is the BP... how is it going?
13:14:30 it was reapproved for ocata
13:14:41 but no core reviews on the patch:
13:14:43 ok
13:14:55 #link Hyper-V: Adds vNUMA implementation https://review.openstack.org/#/c/282407/
13:15:59 the pci implementation got some reviews
13:16:22 which need to be addressed
13:16:35 ok
13:17:03 moving on
13:17:09 #topic monasca status
13:17:29 so, one small patch merged since last week
13:17:43 got more comments, which i'll have to answer
13:17:47 for which feature?
13:17:55 got some merge conflicts, again. :)
13:18:14 sagar_nikam: this one: https://review.openstack.org/#/c/359453/
13:18:32 basically, it gets some performance counters from windows
13:18:53 ok
13:19:09 as of today... what shape is monasca + hyperv in... upstream?
13:19:16 whatever is merged
13:19:23 can it be used?
13:19:26 as is...
13:19:43 or do more patches need to be merged... before it can be used?
13:19:56 yep, you can start monasca-agent on windows
13:20:11 and does it send the data?
13:20:15 to monasca
13:20:58 let me check for a second
13:23:50 checkers that have pending patches on monasca-agent: hyperv, cpu, disk; checkers that are working now: network, memory, process, host_alive
13:23:59 and i thought i'd sent a patch for the iis checker
13:24:26 oh yeah
13:24:29 iis works as well
13:24:40 ok
13:24:46 it was included with the wmi_checker
13:24:49 cpu and disk
13:24:52 the last link i sent
13:24:53 are important
13:25:15 i think we need to get them merged at the earliest
13:25:51 the other checkers, i haven't tested yet. there are more than 40 checkers in total. Roland was suggesting those, as they are the most commonly used.
13:26:38 you can see how many checkers there are here: https://github.com/openstack/monasca-agent/tree/master/conf.d
13:27:53 any questions?
13:27:55 agree... i think those are the important ones... especially disk, cpu and memory
13:28:09 no... we can move to the next topic
13:28:30 #topic open discussion
13:29:20 sagar_nikam: so, last week I asked if there is someone on your team who can help with getting monasca-log working on windows
13:29:56 yes... most of my team is also new to monasca
13:30:10 the monasca team is in the US
13:30:20 the india team does not work on it
13:30:37 anyhow, let me try... not sure though
13:31:09 i see
13:31:18 well, let me know when you find out. :)
13:31:22 the india team works on nova and neutron
13:31:29 for hyperv
13:31:59 have you tried hyper-v 2016 yet?
13:32:04 sonu: are you there in the meeting?
13:32:16 claudiub: not yet
13:32:45 we wanted to try nano... but did not proceed much
13:32:56 as of now all our test systems are 2012 r2
13:33:25 i see
13:33:44 do you have any plans to upgrade in the near future?
13:34:52 we wanted it on at least one machine... but could not do it
13:34:59 due to various reasons
13:35:02 as of now
13:35:07 we support only 2012 r2
13:35:22 hence getting 2016 onto test machines has been a challenge
13:35:56 hm, interesting
13:36:25 cloudbase supports 2016?
13:36:29 yep
13:36:31 for nova and neutron
13:36:35 ok
13:36:39 TP5?
13:36:46 that's always been a target, since it was officially announced
13:36:54 hyper-v 2016 has been released
13:37:16 oh... missed that release announcement
13:37:21 when was it released?
13:38:09 you can already download it from microsoft's site: https://www.microsoft.com/en-us/evalcenter/evaluate-hyper-v-server-2016
13:38:35 i think it was sometime around the openstack summit in barcelona
13:39:08 but yeah, we do support 2016. :)
13:39:30 plus, we already have a few of 2016's features in nova, like the upgraded support for remotefx
13:39:55 ok
13:40:12 maybe for our next release we will try hyperv-2016
13:40:41 cool :)
13:40:55 any other news from your side?
13:41:14 could you check the link i gave you last week on k8s on azure?
13:43:59 hm, wondering how they're dealing with container networking across different hosts.
13:45:05 and storage as well
13:45:17 even i was curious
13:45:39 one more question i had was... containers are only supported in 2016
13:45:48 yep
13:45:59 so does azure provision containers on 2016?
13:46:11 and is it stable?
13:46:14 that, i don't know :)
13:46:24 that can be a good indication of its stability
13:46:46 your team is not working with MS on this?
13:46:51 yep, that's exactly my thought as well
13:46:54 peter is not involved
13:47:03 on containers?
13:47:08 from MS
13:48:28 hm, i don't keep in touch with peter, only rarely
13:48:44 ok
13:48:50 sagar_nikam: any news from sonu?
13:49:03 he wanted to join
13:49:10 he mentioned it to me today as well
13:49:17 some networking discussion with you
13:49:20 on ovs
13:49:27 he is not in the office
13:49:35 a bug, or?
13:49:36 might have got stuck in traffic
13:50:17 i think he is planning to use ovs for our next release, most probably based on mitaka or newton
13:50:30 and wanted to discuss some things on it
13:51:06 sounds good. :)
13:51:17 i will request him to join next week.. or maybe mail you about it
13:51:37 well, he can drop me an email
13:51:41 also, if i remember right, we got the certification from MS on the OVS solution, right?
13:51:47 yep
13:53:04 then i think we may mostly go with it for our next release
13:53:19 sonu can provide more info on it
13:54:02 here's the link where you can see the certification: https://www.windowsservercatalog.com/item.aspx?idItem=18117a8c-c7bf-f20c-9185-3a53117b9875&bCatID=1638
13:54:22 anything on the cluster driver? or is it down in priority?
13:55:03 well, they didn't approve it for ocata
13:55:10 which is sad
13:55:40 ok
13:55:56 so, it got a bit down the priority list, as they won't merge it
13:56:10 but at least it's merged in compute-hyperv
13:56:21 ok fine...
13:56:27 time is almost over
13:56:32 nothing much from my side
13:56:53 well, i guess we can end the meeting now :)
13:57:00 thanks
13:57:07 i'll be waiting for an email :)
13:57:49 thanks for joining, see you next week!
13:57:55 #endmeeting