16:03:24 #startmeeting hyper-v
16:03:25 Meeting started Tue Jul 22 16:03:24 2014 UTC and is due to finish in 60 minutes. The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:28 The meeting name has been set to 'hyper_v'
16:03:56 we're going to give it a couple more minutes for others to join
16:04:05 hi all
16:04:06 primeministerp: good morning
16:04:11 hey tavi
16:04:19 hey there
16:04:31 ociuhandu: we should catch up later, i'm planning on heading to the colo at some point
16:05:22 alexpilotti: hey there
16:05:30 is luis around
16:05:51 alexpilotti: start w/ some development updates?
16:05:59 sure
16:06:10 #topic nova blueprints/development
16:06:39 alexpilotti: I know there was a lot of code submitted for review, including the blueprints for this development cycle
16:06:49 so we have a few BPs already approved
16:06:58 alexpilotti: great
16:07:23 other BPs on Nova are waiting for exceptions
16:07:40 what are the critical ones we're waiting on
16:07:46 we uploaded them in time based on the core team requests
16:07:54 but they didn't get reviewed
16:08:15 SMB (Nova), x509, nova rescue and host power actions
16:08:34 I sent them as exception requests to the ML
16:08:42 now we have to wait
16:09:15 the general problem is that the Nova team is swamped
16:09:29 and they already have more BPs than they can handle
16:09:56 on the other side, the SMB Cinder ones got approved w/o problems
16:13:59 alexpilotti: thanks
16:14:11 alexpilotti: sorry, got sidetracked for a moment
16:15:49 alexpilotti: are there additional things that should be discussed
16:16:12 alexpilotti: should we discuss a blueprint for the service bus driver?
16:18:14 yes, we need to send a BP request on Oslo for it
16:18:36 alexpilotti: can we get that in just in case
16:18:37 that can be done ASAP, targeting K-1
16:18:40 k
16:18:51 with maybe some chance to have it by J-3
16:19:05 ok
16:19:20 back to the BPs, all the other ones which have been approved already have code under review
16:19:29 great
16:20:08 so the key now is to make sure that Nova core has time to review them. Sending more code than the review bandwidth allows is not useful.
16:20:21 scale is always an issue
16:20:34 alexpilotti: well at least we're moving forward
16:21:03 there is more talk about subtree +2 rights, which is really needed IMO
16:21:21 alexpilotti: alexpilotti: help especially in our case
16:21:27 ha
16:21:33 autocomplete
16:21:38 heh
16:21:56 so
16:22:21 anything more from a development/blueprint perspective
16:22:50 alexpilotti: ^
16:23:08 no, I think that's enough for now
16:23:40 I hope we'll have more news next week
16:24:00 great
16:24:11 I'll change the topic
16:24:20 #topic CI
16:25:06 so with recent events here we've had some shuffling of the people who were involved with our HW acquisition
16:25:24 as of now we're looking at mid aug for the hardware to be online
16:26:06 additionally we're currently rebalancing and cleaning up our current setup to try to stabilize
16:26:14 the existing infrastructure
16:26:19 primeministerp: we have had a blocking issue on the neutron side, due to the debug level in the CI, that was fixed yesterday
16:26:50 ociuhandu: was this the root of the problems yesterday?
16:27:09 yes
16:27:12 heh
16:27:18 ociuhandu: thanks for hunting it down
16:27:29 ociuhandu: has a patch been committed to fix it yet?
16:27:45 and another pip update failure on the hyper-v nodes for sqlalchemy, but we handled that manually
16:28:03 ok
16:28:22 primeministerp: alexpilotti was helping us track it down and the original author committed the fix quite fast
16:28:30 o great
16:28:36 primeministerp: we also had a communication problem between zuul and jenkins last friday that led to a cascading effect
16:29:37 primeministerp: so we had to manually intervene to restabilize the CI on friday night / saturday morning; we had a few hours of false failures then
16:29:43 ociuhandu: i'm aware of the zuul issue, we need to move it off of virtual and onto iron immediately
16:29:53 I had thought vijay and tim had done that already
16:30:02 primeministerp: the main problem here is related to jenkins
16:30:22 ociuhandu: jenkins scale or load issues?
16:30:32 it's not the first time that jenkins stops communicating with zuul and only a manual restart will resume it
16:30:51 primeministerp: nope, just a lain jenkins bug
16:30:59 errr, plain
16:31:00 did you have to update the jar?
16:31:04 no
16:31:11 k
16:31:25 it does not talk to zuul anymore, so zuul will just queue up jobs
16:31:35 once you do a restart, all is perfect again
16:31:42 we're investigating this matter
16:33:46 hmm
16:33:58 i think we have serious network congestion issues
16:34:41 primeministerp: the simple result of this is zuul having a queue while on jenkins all executors are idle
16:34:41 across the site link
16:34:53 ok
16:35:06 it's usually after a network interruption, though it should automatically retry
16:35:42 ociuhandu: i'll be heading to the colo after calls this am
16:35:48 primeministerp: we should have a sync a bit later on the hardware reallocation, when you have a moment
16:35:55 ok
16:35:58 after this
16:36:02 great
16:36:03 if possible
16:36:28 on the cinder side, we have the images ready, the scripts as well; we're running the final tests on them now
16:37:04 Ok let me know when they are finalized
16:37:17 I may have some equipment for you to use already
16:37:34 ociuhandu: that's all I have for now
16:37:50 primeministerp: it's going to be virtualized on top of the undercloud as well
16:38:05 ociuhandu: thought we were going to go iron
16:38:12 due to constraints
16:38:36 if you think we can handle it given our current ram issue
16:38:48 i thought it would be easier to deploy onto iron
16:39:02 well maybe not easier but definitely more performant
16:39:07 given our "issues"
16:39:13 primeministerp: constraints actually mean we should be able to use that hardware for multiple things, not only cinder, since on cinder the workload is way smaller
16:39:30 ociuhandu: what's the ram footprint
16:39:35 on the windows side
16:39:36 so
16:39:38 one sec
16:39:40 when you say image
16:39:42 considering the identified nodes that can be repurposed, it should not be a problem
16:39:47 are you talking the windows or devstack image?
16:39:59 i mean the glance image on the undercloud for both devstack and windows
16:40:09 that's what I thought
16:40:19 both get deployed in the undercloud
16:40:39 i thought we were going to use the devstack instance and a physical windows node
16:40:46 based on our discussion last week
16:41:04 if you think we can do it virtualized
16:41:08 then ok
16:41:09 the footprint for the windows node should fit in 2 GB of RAM
16:41:17 ok
16:41:26 it's going to be tight
16:41:30 hehe
16:41:51 ociuhandu: I'm going to end it
16:41:56 ociuhandu: need to make some calls
16:42:00 #endmeeting
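[Editor's note] The zuul/jenkins hang discussed above (zuul queues up changes while every Jenkins executor sits idle, and only a manual Jenkins restart recovers) can be caught by a small watchdog. The sketch below is illustrative only: the hostnames, the restart command, and the zuul status field path are assumptions, not part of the CI described in the meeting. Jenkins does expose executor counts at `/computer/api/json`, and zuul of that era served queue state at `status.json`, but field layouts may differ by version.

```shell
#!/bin/sh
# Watchdog sketch for the zuul <-> jenkins hang described in the meeting:
# zuul queues jobs while all Jenkins executors are idle until a restart.
# JENKINS_URL / ZUUL_STATUS_URL defaults are hypothetical placeholders.

JENKINS_URL="${JENKINS_URL:-http://jenkins.example.org:8080}"
ZUUL_STATUS_URL="${ZUUL_STATUS_URL:-http://zuul.example.org/status.json}"

# Pure decision helper: restart only when zuul has queued work ($1 > 0)
# but no Jenkins executor is busy ($2 == 0) -- the symptom reported above.
needs_restart() {
    queued="$1"; busy="$2"
    if [ "$queued" -gt 0 ] && [ "$busy" -eq 0 ]; then
        echo yes
    else
        echo no
    fi
}

check_and_restart() {
    # Jenkins reports busy executor counts at /computer/api/json.
    busy=$(curl -s "$JENKINS_URL/computer/api/json" | jq '.busyExecutors')
    # Count changes sitting in zuul's pipelines; this jq path matches the
    # zuul v2 status.json layout and may need adjusting for other versions.
    queued=$(curl -s "$ZUUL_STATUS_URL" \
        | jq '[.pipelines[].change_queues[].heads[][]] | length')
    if [ "$(needs_restart "$queued" "$busy")" = yes ]; then
        echo "zuul has $queued queued changes but Jenkins is idle; restarting"
        sudo systemctl restart jenkins   # adjust to the init system in use
    fi
}
```

Run `check_and_restart` from cron every few minutes; the restart condition is isolated in `needs_restart` so the logic can be tested without touching the network.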