16:03:24 <primeministerp> #startmeeting hyper-v
16:03:25 <openstack> Meeting started Tue Jul 22 16:03:24 2014 UTC and is due to finish in 60 minutes.  The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:28 <openstack> The meeting name has been set to 'hyper_v'
16:03:56 <primeministerp> we're going to give it a couple more minutes for others to join
16:04:05 <ociuhandu> hi all
16:04:06 <ociuhandu> primeministerp: good morning
16:04:11 <primeministerp> hey tavi
16:04:19 <alexpilotti> hey there
16:04:31 <primeministerp> ociuhandu: we should catch up later; I'm planning on heading to the colo at some point
16:05:22 <primeministerp> alexpilotti: hey there
16:05:30 <primeministerp> is luis around
16:05:51 <primeministerp> alexpilotti: start w/ some development updates?
16:05:59 <alexpilotti> sure
16:06:10 <primeministerp> #topic nova blueprints/development
16:06:39 <primeministerp> alexpilotti: I know there was a lot of code submitted for review, including the blueprints for this development cycle
16:06:49 <alexpilotti> so we have a few BPs already approved
16:06:58 <primeministerp> alexpilotti: great
16:07:23 <alexpilotti> other BPs on Nova are waiting for exceptions
16:07:40 <primeministerp> what are the critical ones we're waiting on
16:07:46 <alexpilotti> we uploaded them in time based on the core team's requests
16:07:54 <alexpilotti> but they didn’t get reviewed
16:08:15 <alexpilotti> SMB (Nova), x509, nova rescue and host power actions
16:08:34 <alexpilotti> I sent them as exception request to the ML
16:08:42 <alexpilotti> now we have to wait
16:09:15 <alexpilotti> the general problem is that the Nova team is swamped
16:09:29 <alexpilotti> and they have already more BPs than they can handle
16:09:56 <alexpilotti> on the other side, the SMB Cinder ones got approved w/o problems
16:13:59 <primeministerp> alexpilotti: thanks
16:14:11 <primeministerp> alexpilotti: sorry got sidetracked for a moment
16:15:49 <primeministerp> alexpilotti: are there additional things that should be discussed
16:16:12 <primeministerp> alexpilotti: should we discuss a blueprint for the service bus driver?
16:18:14 <alexpilotti> yes, we need to send a BP request on Oslo for it
16:18:36 <primeministerp> alexpilotti: can we get that in just in case
16:18:37 <alexpilotti> that can be done ASAP targeting K-1
16:18:40 <primeministerp> k
16:18:51 <alexpilotti> with maybe some chance to have it by J-3
16:19:05 <primeministerp> ok
16:19:20 <alexpilotti> back to the BPs, all the other ones which have been approved already have code under review
16:19:29 <primeministerp> great
16:20:08 <alexpilotti> so the key now is to make sure that Nova core has time to review them. Sending more code than the review bandwidth allows is not useful.
16:20:21 <primeministerp> scale is always an issue
16:20:34 <primeministerp> alexpilotti: well at least we're moving forward
16:21:03 <alexpilotti> there is more talk about subtree +2 rights, which is really needed IMO
16:21:21 <primeministerp> alexpilotti: alexpilotti: that would help, especially in our case
16:21:27 <primeministerp> ha
16:21:33 <primeministerp> autocomplete
16:21:38 <alexpilotti> heh
16:21:56 <primeministerp> so
16:22:21 <primeministerp> anything more from a development/blueprint perspective
16:22:50 <primeministerp> alexpilotti: ^
16:23:08 <alexpilotti> no, I think that's enough for now
16:23:40 <alexpilotti> I hope we’ll have more news next week
16:24:00 <primeministerp> great
16:24:11 <primeministerp> I'll change the topic
16:24:20 <primeministerp> #topic CI
16:25:06 <primeministerp> so with recent events here, some of the people involved with our HW acquisition have been shuffled around
16:25:24 <primeministerp> as of now we're looking at mid-August for the hardware to be online
16:26:06 <primeministerp> additionally we're currently rebalancing and cleaning up our current setup to try to stabilize
16:26:14 <primeministerp> the existing infrastructure
16:26:19 <ociuhandu> primeministerp: we had a blocking issue on the neutron side, due to the debug level in the CI, which was fixed yesterday
16:26:50 <primeministerp> ociuhandu: was this the root of the problems yesterday?
16:27:09 <ociuhandu> yes
16:27:12 <primeministerp> heh
16:27:18 <primeministerp> ociuhandu: thanks for hunting it down
16:27:29 <primeministerp> ociuhandu: has a patch been committed to fix it yet?
16:27:45 <ociuhandu> there was also a pip update failure for sqlalchemy on the hyper-v nodes, but we handled that manually
16:28:03 <primeministerp> ok
16:28:22 <ociuhandu> primeministerp: alexpilotti was helping us track it and the original author committed the fix quite fast
16:28:30 <primeministerp> o great
16:28:36 <ociuhandu> primeministerp: we also had a communication problem between zuul and jenkins last friday that led to a cascading effect
16:29:37 <ociuhandu> primeministerp: so we had to intervene manually to restabilize the CI on friday night / saturday morning; we had a few hours of false failures then
16:29:43 <primeministerp> ociuhandu: i'm aware of the zuul issue, we need to move it off of virtual and onto iron immediately
16:29:53 <primeministerp> I had thought vijay and tim had done that already
16:30:02 <ociuhandu> primeministerp: the main problem here is related to jenkins
16:30:22 <primeministerp> ociuhandu: jenkins scale or load issues?
16:30:32 <ociuhandu> it’s not the first time jenkins has stopped communicating with zuul, and only a manual restart resolves it
16:30:51 <ociuhandu> primeministerp: nope, just a plain jenkins bug
16:31:00 <primeministerp> did you have to update the jar?
16:31:04 <ociuhandu> no
16:31:11 <primeministerp> k
16:31:25 <ociuhandu> it does not talk to zuul anymore, so zuul will just queue up jobs
16:31:35 <ociuhandu> once you do a restart, all is perfect again
16:31:42 <ociuhandu> we’re investigating this matter
16:33:46 <primeministerp> hmm
16:33:58 <primeministerp> i think we have serious network congestion issues
16:34:41 <ociuhandu> primeministerp: the simple result of this is that zuul has a queue while all the jenkins executors are idle
16:34:41 <primeministerp> across the site link
16:34:53 <primeministerp> ok
16:35:06 <ociuhandu> it usually happens after a network interruption, though it should automatically retry
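For context on the hang described above (zuul keeps queueing jobs while every Jenkins executor sits idle, until someone restarts Jenkins), a rough watchdog along the following lines could automate the manual restart being done today. This is only a sketch under assumptions: the zuul status.json URL, the Jenkins computer API URL, the exact status.json layout, the restart command, and the polling thresholds are placeholders, not this CI's actual configuration.

    # Rough watchdog sketch for the hang described in the meeting: zuul keeps
    # queueing jobs while every Jenkins executor is idle, and only a Jenkins
    # restart recovers it. URLs, thresholds and the restart command are
    # assumptions, not the actual CI configuration.
    import subprocess
    import time

    import requests

    ZUUL_STATUS_URL = "http://zuul.example.org/status.json"                # assumed
    JENKINS_COMPUTER_API = "http://jenkins.example.org/computer/api/json"  # assumed


    def zuul_queued_changes():
        """Count changes sitting in zuul's pipelines (zuul v2 status.json layout)."""
        status = requests.get(ZUUL_STATUS_URL).json()
        total = 0
        for pipeline in status.get("pipelines", []):
            for queue in pipeline.get("change_queues", []):
                for head in queue.get("heads", []):
                    total += len(head)
        return total


    def jenkins_busy_executors():
        """Read the busy executor count from Jenkins' computer API."""
        return requests.get(JENKINS_COMPUTER_API).json().get("busyExecutors", 0)


    def main():
        bad_polls = 0
        while True:
            if zuul_queued_changes() > 0 and jenkins_busy_executors() == 0:
                bad_polls += 1
            else:
                bad_polls = 0
            # Only restart once the condition has persisted for a few polls,
            # so an ordinary scheduling delay does not trigger a restart.
            if bad_polls >= 3:
                subprocess.call(["service", "jenkins", "restart"])
                bad_polls = 0
            time.sleep(300)


    if __name__ == "__main__":
        main()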
16:35:42 <primeministerp> ociuhandu: i'll be heading to the colo after calls this am
16:35:48 <ociuhandu> primeministerp: we should have a sync a bit later on the hardware reallocation, when you have a moment
16:35:55 <primeministerp> ok
16:35:58 <primeministerp> after this
16:36:02 <ociuhandu> great
16:36:03 <primeministerp> if possible
16:36:28 <ociuhandu> on the cinder side, we have the images ready and the scripts as well; we're now running the final tests on them
16:37:04 <primeministerp> Ok let me know when they are finalized
16:37:17 <primeministerp> I may have some equipment for you to use already
16:37:34 <primeministerp> ociuhandu: that's all I have for now
16:37:50 <ociuhandu> primeministerp: it’s also going to be virtualized on top of the undercloud
16:38:05 <primeministerp> ociuhandu: I thought we were going to go with iron
16:38:12 <primeministerp> due to constraints
16:38:36 <primeministerp> if you think we can handle it given our current ram issue
16:38:48 <primeministerp> i thought it would be easier to deploy onto iron
16:39:02 <primeministerp> well maybe not easier but definitely more performant
16:39:07 <primeministerp> given our "issues"
16:39:13 <ociuhandu> primeministerp: the constraints actually mean we should be able to use that hardware for multiple things, not only cinder, since the cinder workload is way smaller
16:39:30 <primeministerp> ociuhandu: what's the ram footprint
16:39:35 <primeministerp> on the windows side
16:39:36 <primeministerp> so
16:39:38 <primeministerp> one sec
16:39:40 <primeministerp> when you say image
16:39:42 <ociuhandu> considering the identified nodes that can be repurposed, it should not be a problem
16:39:47 <primeministerp> are you talking the windows or devstack image?
16:39:59 <ociuhandu> i mean glance images on the undercloud, for both devstack and windows
16:40:09 <primeministerp> that's what I thought
16:40:19 <ociuhandu> both get deployed in the undercloud
16:40:39 <primeministerp> i thought we were going to use the devstack instance and a physical windows node
16:40:46 <primeministerp> based on our discussion last week
16:41:04 <primeministerp> if you think we can do it virtualized
16:41:08 <primeministerp> then ok
16:41:09 <ociuhandu> the footprint for the windows node should fit in 2 GB ram
16:41:17 <primeministerp> ok
16:41:26 <primeministerp> it's going to be tight
16:41:30 <primeministerp> hehe
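As a rough illustration of the virtualized setup discussed above (the Windows cinder node booted as an undercloud instance from a glance image, with a ~2 GB RAM flavor), something like the following python-novaclient sketch would cover the flavor and boot steps. The credentials, auth endpoint, image name, flavor name and sizes are illustrative assumptions, not the team's actual values.

    # Sketch only: boot the prepared Windows CI image on the undercloud with a
    # 2 GB flavor, using the 2014-era python-novaclient v2 API. All names and
    # credentials below are placeholders.
    from novaclient import client

    nova = client.Client("2", "ci-user", "secret", "ci-project",
                         "http://undercloud.example.org:5000/v2.0")  # assumed endpoint

    # Flavor sized to the ~2 GB RAM footprint mentioned for the Windows node.
    flavor = nova.flavors.create(name="ci-windows-small", ram=2048, vcpus=2, disk=40)

    image = nova.images.find(name="windows-hyperv-ci")   # assumed glance image name
    server = nova.servers.create(name="cinder-ci-windows-0",
                                 image=image, flavor=flavor)
    print(server.id)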
16:41:51 <primeministerp> ociuhandu: I'm going to end it
16:41:56 <primeministerp> ociuhandu: need to make some calls
16:42:00 <primeministerp> #endmeeting