14:01:41 #startmeeting tripleo
14:01:42 Meeting started Tue Feb 9 14:01:41 2016 UTC and is due to finish in 60 minutes. The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:45 The meeting name has been set to 'tripleo'
14:01:52 o/
14:02:11 \o
14:02:12 o/
14:02:13 hi everyone
14:02:14 o/
14:02:55 #topic agenda
14:02:56 * bugs
14:02:56 * Projects releases or stable backports
14:02:56 * CI
14:02:56 * Specs
14:02:58 * one-off agenda items
14:03:00 * open discussion
14:03:35 o/
14:03:45 o/
14:03:50 hi
14:04:29 o/
14:04:52 okay, let's get started
14:05:09 any other topics for this week?
14:05:35 no one-off agenda items on the wiki, so perhaps we can just cover anything new in open discussion
14:06:16 Sounds good
14:06:26 #topic bugs
14:07:06 I think shardy just fixed this.
14:07:09 #link https://bugs.launchpad.net/tripleo/+bug/1543493
14:07:09 Launchpad bug 1543493 in tripleo "CI failing with nova/neutron NotFound error" [Undecided,New]
14:08:38 we've pinned puppet-nova in https://review.openstack.org/#/c/277756/ which should hopefully work around it for now
14:09:49 any other high-priority bugs to mention this week?
14:09:56 before that problem started we also had the HA jobs failing, so we should next see if the patch from jaosorior fixes that
14:10:09 jaosorior: was there a bug for the HA problem?
14:11:02 * derekh goes downstairs to see what's burning
14:11:42 okay, perhaps we can link in the HA bug later if we find it
14:12:19 is it this? https://bugs.launchpad.net/tripleo/+bug/1542405
14:12:20 Launchpad bug 1542405 in tripleo "puppet-pacemaker: Error: When present, must provide value at /etc/puppet/modules/pacemaker/manifests/property.pp:14" [Critical,Confirmed] - Assigned to James Slagle (james-slagle)
14:12:48 dprince: the patch is https://review.openstack.org/#/c/276701/ I think, no bug referenced
14:13:45 shardy: thanks
14:14:05 #topic Projects releases or stable backports
14:15:15 https://review.openstack.org/#/q/project:openstack/tripleo-heat-templates+branch:stable/liberty+status:open
14:15:24 still a slew of backports to stable/liberty happening
14:15:49 are those blocked on CI for the most part?
14:15:52 Related to that, I wanted to do a quick poll on whether folks think we should propose a more restrictive backport policy for stable/mitaka
14:16:08 YES
14:16:10 my feeling is we've had an "allow features" cycle, and it's been somewhat abused
14:16:11 +1
14:16:28 so I'd like to announce we'll be aligning with other projects from Mitaka onwards
14:16:30 shardy: yeah, this is out of control I think
14:16:40 although, it has been better of late... maybe because CI has slowed down the flow :p
14:16:43 If nothing else it has just been confusing :)
14:16:48 lol
14:17:35 trown: Yeah, we've not really managed to merge much at all for the last couple of weeks
14:18:10 ya, and delorean liberty has been pretty stable over that period
14:19:03 Do we need to record a vote on this, or is there consensus?
14:19:24 maybe this is important enough to vote on the ML?
14:19:29 shardy: probably worth an email thread so this is crystal clear
14:19:46 Ok, I'll start the thread, and folks can voice their support or opposition there
14:19:49 thanks
14:19:53 thanks
14:20:13 although, our email threads sometimes just cause more confusion, so we'll see :)
14:20:38 hehe
14:21:12 #topic CI
14:21:38 derekh: is your house on fire? wanna give an update from Friday?
14:21:55 The tripleo CI cloud went down (or at least was unreachable from the outside world) on Friday; we fixed that last night. A default route on the controller changed, as far as I can see
14:22:00 no clue why...
14:22:13 jobs are now running again
14:22:24 but we now have a lot of jobs failing with communication errors between jenkins and the slaves, not sure if this is cloud related or something else
14:22:24 yeah, a bit of a head scratcher this one
14:22:54 derekh: could it be related to stale tripleo-bm-test ports or something?
14:23:10 historically this type of error hasn't been related to our cloud, but I haven't had a chance to look to be sure
14:23:14 the 192 range which provides the slaves access to the test-envs
14:23:54 derekh: if it isn't related to our cloud we should ask in infra to see if there is a general Jenkins slave stability issue on Fedora today perhaps
14:24:33 dprince: I wouldn't have thought so, the communication error is between the slaves and jenkins, so the 192 addresses aren't involved
14:25:17 derekh: oh right. Sorry, I got confused and thought this was to our testenvs
14:25:27 dprince: it wouldn't do any harm, and it wouldn't be a bad idea to poke around the overcloud as well, just in case it is our cloud
14:26:07 derekh: yeah, this sounds more like a potential infra thing. Perhaps worth asking, and worth reviewing the last few commits in infra since last Friday too
14:26:21 anyways, I'm still trying to fix something else, so I won't get a chance to look until tonight or maybe tomorrow
14:26:53 I will see what I can figure out on this
14:26:59 dprince: that, or the data center link is bad; I know they were doing some link switchover on Friday
14:28:29 okay, let's move on
14:28:36 #topic specs
14:28:37 there's one CI-related issue I have, not sure if it belongs to this meeting or this topic, but let's see:
14:28:52 I recently picked up the task of looking at the production RDO Manager jobs (i.e. the repos using the rdo-release rpm), and the liberty jobs currently cannot build images due to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1304395 -- there's a simple fix, but somebody should push that fix there and rebuild the packages so that our jobs pass on ci.centos.org
14:28:54 bugzilla.redhat.com bug 1304395 in openstack-tripleo "openstack overcloud image upload fails with "Required file "./ironic-python-agent.initramfs" does not exist."" [Unspecified,New] - Assigned to jslagle
14:29:11 adarazs: that would be an RDO meeting thing
14:29:15 sorry :)
14:29:22 will ask it there :)
14:29:25 :)
14:29:35 adarazs: yeah, it may affect us too, but it's probably more of an RDO thing
14:29:48 I think it is a pure packaging issue
14:30:14 adarazs: we can talk about it at the RDO meeting tomorrow though, thanks for bringing it up
14:30:19 okay, so please just move on :)
14:30:46 okay, specs. A few of us synced up before devconf.cz last week and there seems to be agreement that writing a Mistral spec is in order
14:30:48 I have a new spec proposal for a possible replacement for instack-virt-setup: https://review.openstack.org/#/c/276810/
14:31:00 +1 to Mistral spec
14:31:06 so hopefully we can get that up before the next meeting
14:33:01 We have changed the documentation for RDO to use the tool in that spec ^, and it has really made test days go a lot smoother
14:34:24 trown: I like the 'maze of bash scripts' reference
14:34:35 trown: where does the pre-built undercloud image come from - the periodic CI job?
14:34:48 dprince: maybe that is a bit harsh, but trying to explain how that works to new people is hard
14:35:05 shardy: yes, the job that promotes delorean repos also creates the image
14:35:18 it definitely is really confusing and hard to debug when it breaks
14:35:56 trown: thanks for bringing it up. Let's see how the review goes
14:36:30 for me the toughest part of instack-virt-setup is the lack of upstream CI, so it is really impossible to propose major changes
14:37:11 trown: yeah, well, our CI environment was designed before instack existed, I think
14:37:14 I think we can only solve that with third-party CI, because of the need for a baremetal host, but I am volunteering to set that up as part of the work for that spec
14:38:05 trown: and even then, we'd likely never really cover this path in CI if we used an OVB overcloud either
14:38:37 dprince: right, and not all users will have access to an OVB cloud
14:39:04 trown: my stance is that all users should have sufficient baremetal
14:39:07 trown: :)
14:39:40 trown: this fills a needed niche for TripleO testing for sure
14:40:19 awesome, that is all for me then
14:41:08 #topic open discussion
14:41:49 Hi folks. I'd like to bring up the Manila integration into tripleo. Specifically this: https://review.openstack.org/#/c/188137/
14:43:11 If anyone has time to review this, it would be great. It can leave users in quite a nasty place. https://review.openstack.org/#/c/275661/
14:43:47 Trying to see what else needs to be done here to get this merged, and how I can help. Thank you.
14:44:42 d0ugal: I would like to see us store this in a Mistral environment perhaps
14:44:51 d0ugal: or perhaps a swift artifact
14:44:57 dprince: Sure
14:45:28 d0ugal: the deployment workflow can then generate any non-specified (new) passwords, and we'd be sure that they don't change regardless of where the deployment gets executed from
14:45:31 dcain: thanks for highlighting it - looks like it's already had some positive feedback, so it will need a recheck when we fix the current CI issues, plus some more review attention
14:45:44 I'll add it to my list
14:45:48 dprince: Sure, that makes sense, but it sounds like a longer-term fix
14:46:00 perhaps we should start an etherpad of integration-related patches?
14:46:07 d0ugal: yep, nothing wrong with fixing python-tripleoclient today too
14:46:13 e.g. new services and/or drivers for various backends?
14:46:34 it's quite easy for them to drop off the review radar atm, unfortunately
14:47:25 shardy: yeah, that makes sense. It is not easy to know which reviews to search for otherwise (e.g. you don't know the name of the dev handling that integration to search on gerrit)
14:47:43 shardy: the obvious concern is that it just grows into unusability again
14:48:34 d0ugal: eek, yes, that does look nasty
14:49:33 dcain: thanks for highlighting the Manila patch
14:49:41 shardy: Yeah, a massive oversight. Kind of amazing we have only spotted it now.
14:49:49 shardy: We also found out that password changing doesn't work :)
14:50:14 shardy: thanks for the update. Agreed, an etherpad is a good idea. I would love to be added to that if possible.
14:51:02 dprince: no problem, I'm just eager to use manila with tripleo!
14:52:17 anything else this week?
14:52:22 thanks everyone
14:52:29 thanks for chairing
14:52:53 dprince:
14:53:17 dprince: here is a link to the talk that gfidente, jistr and I gave at devconf last week: https://www.youtube.com/watch?v=XlTg_Nk2UUw
14:53:19 Thanks!
14:53:34 in case anyone is interested, it's on updates in tripleo clouds
14:53:35 marios: nice, I will watch it
14:53:47 it's a multistream, so select the right video
14:54:50 #endmeeting