14:00:20 #startmeeting tripleo
14:00:20 #topic agenda
14:00:21 * Review past action items
14:00:21 * One off agenda items
14:00:21 * Squad status
14:00:21 * Bugs & Blueprints
14:00:21 * Projects releases or stable backports
14:00:21 * Specs
14:00:21 Meeting started Tue Dec 12 14:00:20 2017 UTC and is due to finish in 60 minutes. The chair is mwhahaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:22 * open discussion
14:00:22 Anyone can use the #link, #action and #info commands, not just the moderator!
14:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:23 Hi everyone! who is around today?
14:00:25 The meeting name has been set to 'tripleo'
14:00:27 arxcruz: I _had_ a deployed env and checked it there. "openstack service list" returned ec2
14:00:29 o/
14:00:29 Good morning! o/
14:00:36 hey o/
14:00:53 o/
14:00:56 o/
14:00:57 o/
14:01:09 o/
14:01:10 o/
14:01:11 o/
14:01:23 o/
14:01:30 o/
14:01:41 hi
14:02:36 o/
14:02:37 o/
14:02:46 o/
14:03:49 #topic review past action items
14:03:49 EmilienM to talk with slagle about switching default deployments to use config-download
14:03:55 o/
14:03:59 so i think we confirmed that we wouldn't do that last week
14:04:09 due to various reasons we weren't going to make it the default
14:04:17 right
14:04:20 mwhahaha: correct.
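[Editor's note: for readers unfamiliar with the config-download item being discussed, this is a hedged sketch of how a Queens-era deployment was opted into it. It is a CLI fragment, not runnable outside an undercloud, and the exact flag name and environment-file path are assumptions to verify against the release docs.]

```shell
# Hedged sketch, Queens-era TripleO (assumptions: the --config-download
# flag and the environment-file path below existed in this release).
# Verify against `openstack overcloud deploy --help` on the target system.
openstack overcloud deploy \
  --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/config-download-environment.yaml \
  --config-download
```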
14:04:37 i'm still working through getting ovb-ha to use it, so we really shouldn't make it the default atm
14:04:39 o/
14:04:53 let's plan for rocky asap
14:04:57 o/
14:05:20 cool
14:05:27 moving on
14:05:28 gfidente and shardy look at ceph-ansible and config-download
14:05:52 mwhahaha no updates, my fault, I'd include jistr in that list too cause he worked on that for k8s
14:06:10 (I guess if we disable by default, we'll accept backports of bugfixes wrt config-download *after* Queens GA)
14:06:32 gfidente: yup :)
14:07:50 ok maybe we'll pick that one up next year
14:08:09 #action jistr, gfidente and shardy look at ceph-ansible and config-download (postponed till next year)
14:08:30 team to review open validation patches
14:08:54 #link https://review.openstack.org/#/q/project:openstack/tripleo-validations+status:open
14:09:03 please take a look when you get the chance
14:09:13 EmilienM, akrivoka, gchamoul to check where validations are tested in tripleo ci
14:09:26 a bunch has been merged last week
14:09:39 ah, gchamoul made some progress on that
14:09:42 let me find the link
14:09:49 so where are validations tested in CI?
14:09:55 #link https://review.openstack.org/#/c/525637/
14:10:06 please see the comments in Gerrit though
14:10:14 validations aren't fatal if failing :-/
14:10:15 URGENT TRIPLEO TASKS NEED ATTENTION
14:10:15 https://bugs.launchpad.net/tripleo/+bug/1737716
14:10:16 Launchpad bug 1737716 in tripleo "TripleO CI jobs fail: can't connect to nodepool hosts" [Critical,Triaged]
14:11:05 EmilienM: sounds like a tech-debt bug
14:11:18 thanks mwhahaha
14:11:27 (for the validations link)
14:11:34 mwhahaha: true.
I'll make sure we have a launchpad bug for this one
14:11:39 EmilienM: thanks
14:11:51 last action item
14:11:56 they are tested outside of upstream because of timing
14:11:57 EmilienM starts to send a weekly newsletter about tripleo - DONE
14:11:57 #link http://lists.openstack.org/pipermail/openstack-dev/2017-December/125214.html
14:12:05 thanks EmilienM for putting together the weekly owl
14:12:15 i expect high quality owl facts from now on
14:12:24 lol
14:12:26 cool!
14:14:14 that's it on the action items
14:14:22 #topic one off agenda items
14:14:22 #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:14:31 the agenda is empty
14:14:37 anyone have anything pressing?
14:15:20 sounds like nope
14:15:22 status time
14:15:26 #topic Squad status
14:15:27 ci
14:15:27 #link https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
14:15:27 upgrade
14:15:27 #link https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
14:15:27 containers
14:15:28 #link https://etherpad.openstack.org/p/tripleo-containers-squad-status
14:15:28 integration
14:15:29 #link https://etherpad.openstack.org/p/tripleo-integration-squad-status
14:15:29 ui/cli
14:15:30 #link https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
14:15:30 validations
14:15:31 #link https://etherpad.openstack.org/p/tripleo-validations-squad-status
14:15:31 networking
14:15:32 #link https://etherpad.openstack.org/p/tripleo-networking-squad-status
14:15:32 workflows
14:15:33 #link https://etherpad.openstack.org/p/tripleo-workflows-squad-status
14:15:57 weshay, sshnaidm|ruck, myoung|rover: ci status is not updated
14:16:09 arxcruz, ^
14:16:16 gfidente, integration is missing status update
14:16:21 we (ui) are still waiting on some workflows patches for roles management
14:16:30 mwhahaha: we will have the meeting tomorrow, and we'll update it
14:16:40 arxcruz: it's missing last week's update too
14:17:46 mwhahaha done, thanks
14:17:51 d0ugal: workflows is also missing an update but they have their
meeting on weds as well
14:18:10 thanks everyone, these all feed the weekly owl :D
14:18:17 :D
14:18:25 weshay: mwhahaha I'll update the etherpad
14:18:31 don't need starving owls
14:18:37 mwhahaha: yeah, it is a little unfortunate that we do the update the day after this meeting, but nevermind.
14:19:08 d0ugal: it's ok, maybe as long as we have at least last week's update next time
14:19:21 same for CI since they do theirs on weds as well
14:19:29 we'll play with it
14:19:36 moving on to bugs and blueprints
14:19:42 #topic bugs & blueprints
14:19:42 #link https://launchpad.net/tripleo/+milestone/queens-3
14:19:42 For Queens we currently have 30 (-41) blueprints and about 553 (+13) open bugs in queens-3.
14:19:52 I created the Rocky milestones and moved a bunch of the blueprints out. Please see the note to the ML about this effort.
14:19:52 #link http://lists.openstack.org/pipermail/openstack-dev/2017-December/125339.html
14:19:52 Additionally I have yet to move the medium bugs out to rocky-1. For now they are grouped together in queens-3 but please take a look at the Critical/High issues.
14:20:47 also about bugs, please read EmilienM's note about bug priorities
14:20:49 #link http://lists.openstack.org/pipermail/openstack-dev/2017-December/125378.html
14:21:03 any specific bug/blueprint issues?
14:21:05 I think it's more a discussion now, and we might want to document
14:21:23 sshnaidm|ruck's reply is good, we need to clarify what we expect
14:21:36 perhaps we should throw it into a tripleo policy doc
14:21:48 sounds good
14:21:58 I'll kick off something
14:22:16 #action EmilienM to start a policy doc around bug priority
14:22:37 moving on
14:22:38 #topic projects releases or stable backports
14:22:47 any stable backport items?
14:22:55 stable newton/ocata/pike tagged on Friday, RDO updated
14:23:02 as a reminder we'll be looking to EOL newton in January
14:23:23 before EOL I suggest a newton review day
14:23:32 ok
14:23:39 to clear things up and properly say good bye
14:24:03 otherwise, backports to newton are becoming rare nowadays, so I think it's becoming a good time
14:24:10 ok
14:24:29 #action mwhahaha/EmilienM to schedule a Newton day in January before EOL
14:24:33 as a reminder
14:25:26 moving on
14:25:31 #topic specs
14:25:31 #link https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:25:31 As mentioned last week, we are allowing a small extra period of time for specs. Please review the open specs as we'll be freezing the repo for Queens at the end of this week.
14:25:58 mwhahaha: EmilienM sorry about the newton day, what about ffu?
14:26:14 chem: that's why we're waiting until January
14:26:22 chem: so if you need more time you need to tell people now
14:26:37 EOL was technically in october so we've been keeping it open because of ffu
14:26:44 so get your FFU stuff posted now
14:27:14 mwhahaha: right, january, we will discuss this in the ffu meeting and tell you next week
14:27:17 chem: for FFU, do you need patches in the stable/newton branch?
14:27:46 EmilienM: not that I'm aware of currently
14:27:48 chem: at this point we're keeping it open for you guys so if it needs to be feb/end of jan that's fine, just let us know
14:27:49 because if not, then EOLing newton isn't a problem for you, you can still deploy newton in RDO CI (like you're doing now) when FFU'ing
14:27:59 is FFU'ing a verb? It should be.
14:28:17 EmilienM: yeah rdo is the main thing for ci testing
14:28:23 it's the sound you make when being asked to upgrade 3 versions at once
14:28:40 EmilienM: mwhahaha will make sure we have a point next meeting about this timetable and where ffu stands
14:28:49 thanks
14:28:50 cool
14:29:01 the only thing we need to know is if you need patches in stable/newton
14:29:09 EmilienM: ack
14:29:14 worst case: even after EOL, we'll still be able to carry patches, in RDO I guess
14:29:40 EmilienM: oki, we need to talk with rdo people anyway :)
14:30:06 (I'm done with the ffu topic ... thanks)
14:30:12 sounds like there wasn't any spec related stuff
14:30:16 moving on
14:30:17 #topic open discussion
14:30:23 Super trivial, but I'd like to chase a second +2 so https://review.openstack.org/#/c/526497/ can be merged please.
14:30:25 now into free time, anyone have anything?
14:31:06 dpeacock: done
14:31:13 Thank you
14:31:14 rdo-cloud is down for upgrades still FYI
14:31:20 status on lists
14:31:25 does it affect our CI so far?
14:31:48 (check, check-tripleo, gate, RDO 3rd Party CI, promotions, etc)?
14:32:50 yes.. promotions and ovb, upgrade jobs in rdo-cloud
14:33:11 we still have jobs running in rh1 at this point for ovb, upgrades
14:33:15 but promotions are blocked
14:33:23 ok
14:33:58 the upgrades team is in touch w/ rdo-cloud admins
14:35:00 ok anything else?
14:35:00 mwhahaha, EmilienM what is the policy to add tripleo admins?
14:35:20 tripleo CI admins you mean?
14:35:48 yep
14:35:57 this file in the tripleo-ci repo
14:36:23 so bnemec pointed to the previous documentation https://github.com/openstack/tripleo-incubator/blob/15f0afe41b009184c9f329efe9c997d3f30d6fcf/tripleo-cloud/README.md
14:36:37 it sounds like the policy should be updated/moved to the tripleo-specs repo
14:37:01 without an official change, i would assume the previous policy should continue to apply
14:37:27 ok
14:37:32 so you'd need to propose the new users individually and get a consensus among the existing admins
14:37:47 so not the normal two +2s
14:38:16 it also sounds like we need a review of the existing admins as well
14:38:40 mwhahaha, I have another solution too, just for tripleo-ci-team
14:39:00 https://review.openstack.org/526187
14:39:09 Daniel Alvarez proposed openstack/tripleo-heat-templates master: Add support for containerized OVN Metadata Agent https://review.openstack.org/525164
14:40:01 sshnaidm|ruck: so it was raised to me that doing this can actually cause problems security-wise, because you could game the system and use depends-on to inject yourself into the clouds.
I'm honestly not sure we should be doing this at all
14:40:38 mwhahaha, it's only for rdo cloud and rh1, I can "inject" myself there anyway
14:41:06 sshnaidm|ruck: right, i don't think we should be doing that at all and you still haven't explained what you need specifically from local access
14:41:06 mwhahaha, we actually maintain rdo cloud
14:41:45 mwhahaha, in some cases it's the only way, and sometimes it's faster to catch issues in a running job
14:42:21 sshnaidm|ruck: without specific examples, it's hard to just agree
14:42:24 mwhahaha, I think it's helpful for anybody who works on CI on a daily basis
14:42:35 it's a security hole, imho
14:42:37 sshnaidm|ruck: i don't have access to any of these boxes and i'm not having any of these problems
14:42:50 nor am i looking to get ssh access
14:42:57 a ci system is ephemeral, we shouldn't have access to it except under specific circumstances
14:43:02 mwhahaha, I'm talking about people working on CI on a daily basis
14:43:21 in fact, the ci jobs should be reproducible, which is afaik WIP
14:43:32 EmilienM, mwhahaha maybe let's continue this on the ML?
14:43:45 yeah ++ I want to see pabelanger's thoughts on this one
14:43:51 ok
14:43:52 sshnaidm|ruck: pretty sure i end up working on CI on a daily basis, but yes, move this to the ML
14:44:03 anything else
14:44:10 we're done I guess
14:44:30 folks, do you still aim for containerized undercloud for Queens?
14:44:44 to be the default, no
14:44:47 dtantsur: we should still work on it, but i don't think it'll be the new default
14:44:59 got it
14:45:08 it doesn't work in CI yet
14:45:12 if it's in a good state at the end of queens then it makes the switch early in rocky-1 easier
14:45:38 good, then I don't have to urgently test all hardware provisioning features with it :)
14:45:40 so we should continue to work on the feature parity
14:46:20 ok one last time
14:46:23 anything else?
14:46:23 :D
14:46:58 sounds like nope
14:47:01 thanks everyone
14:47:03 #endmeeting
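[Editor's note: the #action, #link and #topic markers used throughout this log are what MeetBot scrapes into the published minutes. As a rough illustration only (this is NOT MeetBot's actual implementation, which is a Supybot plugin), a minimal collector over log lines like the ones above might look like:]

```python
# Illustrative sketch, not MeetBot's real code: group MeetBot-style
# commands (#action, #link, #topic, ...) found in timestamped log lines.
from collections import defaultdict

def collect_commands(log_lines):
    """Return {command: [arguments, ...]} for lines starting with '#cmd'."""
    minutes = defaultdict(list)
    for line in log_lines:
        # Strip the leading "HH:MM:SS " timestamp prefix if present.
        parts = line.split(" ", 1)
        body = parts[1] if len(parts) == 2 and parts[0].count(":") == 2 else line
        if body.startswith("#"):
            cmd, _, rest = body.partition(" ")
            minutes[cmd.lstrip("#")].append(rest)
    return dict(minutes)

log = [
    "14:08:09 #action jistr, gfidente and shardy look at ceph-ansible and config-download (postponed till next year)",
    "14:08:54 #link https://review.openstack.org/#/q/project:openstack/tripleo-validations+status:open",
    "14:22:16 #action EmilienM to start a policy doc around bug priority",
]
print(collect_commands(log))
```

This only demonstrates why the markers matter: anyone in the channel can emit them, and the bot, not the humans, assembles the minutes afterwards.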