14:01:22 #startmeeting tripleo
14:01:22 #topic agenda
14:01:22 * Review past action items
14:01:22 * One off agenda items
14:01:22 * Squad status
14:01:22 * Bugs & Blueprints
14:01:22 Meeting started Tue Jul 3 14:01:22 2018 UTC and is due to finish in 60 minutes. The chair is mwhahaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:22 * Projects releases or stable backports
14:01:23 * Specs
14:01:23 * open discussion
14:01:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:24 Anyone can use the #link, #action and #info commands, not just the moderator!
14:01:24 Hi everyone! Who is around today?
14:01:26 The meeting name has been set to 'tripleo'
14:01:35 o/
14:01:37 o/
14:01:38 o/
14:01:38 thanks :) i'll get on testing
14:01:39 o/
14:01:55 o/
14:01:59 o/
14:02:07 o/
14:02:10 o/
14:02:15 o/
14:02:19 hi
14:02:30 o'>
14:02:58 hi
14:03:22 alright, let's start; i don't think there's much on the agenda today
14:03:22 o/
14:03:25 #topic review past action items
14:03:25 None!
14:03:32 #topic one off agenda items
14:03:32 #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:03:48 any pressing issues?
14:05:01 sounds like nope
14:05:36 #topic Squad status
14:05:36 ci
14:05:36 #link https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
14:05:36 upgrade
14:05:36 #link https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
14:05:36 containers
14:05:36 #link https://etherpad.openstack.org/p/tripleo-containers-squad-status
14:05:37 config-download
14:05:37 #link https://etherpad.openstack.org/p/tripleo-config-download-squad-status
14:05:38 integration
14:05:38 #link https://etherpad.openstack.org/p/tripleo-integration-squad-status
14:05:39 ui/cli
14:05:39 #link https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
14:05:40 validations
14:05:40 #link https://etherpad.openstack.org/p/tripleo-validations-squad-status
14:05:41 networking
14:05:41 #link https://etherpad.openstack.org/p/tripleo-networking-squad-status
14:05:42 workflows
14:05:42 #link https://etherpad.openstack.org/p/tripleo-workflows-squad-status
14:05:43 security
14:05:43 #link https://etherpad.openstack.org/p/tripleo-security-squad
14:06:44 mwhahaha: I'll send a weekly owl this week (was out 2 weeks)
14:07:31 k
14:07:44 looks like we've got some outdated status etherpads, please take some time to add something
14:08:05 Moving on to bugs
14:08:10 #topic bugs & blueprints
14:08:10 #link https://launchpad.net/tripleo/+milestone/rocky-3
14:08:10 For Rocky we currently have 54 (+0) blueprints and about 722 (+0) open Launchpad bugs. 722 rocky-3, 1 stein-1. 100 (-2) open Storyboard bugs.
14:08:10 #link https://storyboard.openstack.org/#!/project_group/76
14:08:17 Sagi Shnaidman proposed openstack-infra/tripleo-ci master: WIP: run CI playbooks separately https://review.openstack.org/579880
14:08:20 So a reminder that m3 is at the end of the month
14:08:40 get your blueprint status updated, as we'll be kicking out anything still in progress in a few weeks
14:09:56 Sagi Shnaidman proposed openstack-infra/tripleo-ci master: WIP: run CI playbooks separately https://review.openstack.org/579880
14:10:18 #topic projects releases or stable backports
14:10:53 have we done stable releases over the last 2 weeks?
14:10:57 no
14:11:06 I'll do it this week then
14:11:09 k
14:11:59 #topic specs
14:11:59 #link https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:12:00 Jose Luis Franco proposed openstack/python-tripleoclient master: Add --roles to update run CLI command. https://review.openstack.org/579831
14:12:08 take some time to review the specs
14:12:33 also please hold off on merging specs until we have enough reviews
14:13:12 if your spec still has some items that need to be updated, please mark it as -1 workflow so we don't merge incomplete specs
14:13:37 shall we revert https://review.openstack.org/#/c/572761/ ?
14:13:41 seems it was not ready to be merged
14:13:45 i think so
14:13:59 James Slagle proposed openstack/tripleo-specs master: Revert "[WIP] Spec for improved privilege escalation in py-scripts" https://review.openstack.org/579881
14:14:06 ok
14:14:20 #topic open discussion
14:14:22 anything else?
14:15:12 fyi, we have this patch series: https://review.openstack.org/#/q/topic:snapshop-config-download+(status:open+OR+status:merged)
14:15:18 to manage the config-download dir as a git repo
14:15:24 which we feel will be very useful
14:15:46 instead of creating multiple new directories each time, we re-use the same directory and do a new commit
14:16:02 nice
14:16:19 interesting
14:16:27 don't forget to update the requirements & distgit
14:16:51 for python-git? yea i'll double check that
14:16:55 yea
14:16:58 it's not listed
14:17:02 and probably why the tests fail
14:17:02 something else was already pulling it in, but we should be explicit
14:17:10 :] guess that's why it's failing in tox checks
14:17:17 mwhahaha: updated our status
14:17:17 could be :)
14:17:27 *magic*
14:17:31 anyway
14:17:33 anything else?
14:18:18 I've got something
14:18:40 as we convert some of the puppet/services to ansible.... do we want to rename (and link) those for clarity?
14:19:03 like we aren't using puppet anymore... so leaving them in the puppet/services directory could be confusing
14:19:18 it might be beneficial
14:19:35 +1
14:19:36 is there any way to do some sort of include logic to handle the backwards compatibility?
14:19:55 mwhahaha: we can just update the default resource-registry
14:19:57 we suffer from the same problem with environment files
14:20:15 well the issue is for folks who were leveraging it in their own custom THT
14:20:16 mwhahaha: I mean we just did the same thing with many of the services for puppet -> containers
14:20:38 mwhahaha: in this case it would just be a rename of everything. All the registry entries
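[Editor's note] The config-download-as-git-repo series mentioned at 14:15 replaces per-deployment directories with repeated commits to a single directory. Below is a minimal sketch of that idea, assuming the GitPython library (the python-git requirement discussed at 14:16); the function name, path, and commit message are illustrative and not taken from the actual patches.

```python
# Hedged sketch only (not the actual tripleoclient code): re-use one
# config-download directory and record each new rendering as a git commit
# instead of creating a fresh directory per deployment.
import os
import git  # GitPython, the "python-git" dependency mentioned above


def snapshot_config_download(directory, message="Render config-download"):
    """Initialize the repo on first use, then commit whatever changed."""
    if os.path.isdir(os.path.join(directory, ".git")):
        repo = git.Repo(directory)
    else:
        repo = git.Repo.init(directory)
    repo.git.add(A=True)  # stage added, changed and deleted files
    if not repo.head.is_valid() or repo.is_dirty():
        repo.index.commit(message)
    return repo


# Hypothetical usage:
# repo = snapshot_config_download("/var/lib/mistral/overcloud")
# print(repo.head.commit.hexsha)
```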
14:20:59 updating the pathing in the templates is the problem
14:21:16 just wondering if there's some solution that doesn't require the end user to update their paths
14:21:19 so this is an example
14:21:21 http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/puppet/services/docker.yaml?id=00f5019ef28771e0b3544d0aa3110d5603d7c159
14:21:42 what if instead of inline updating puppet/services/docker.yaml we created an ansible/services/docker.yaml
14:21:45 dprince: I agree this is confusing
14:21:51 and then had our default resource registry use that instead
14:22:04 deprecate the old puppet interface in the meantime
14:22:09 how about we don't use the implementation in the name
14:22:21 so that when foobar comes along and we migrate we don't have to rename ansible/foobar
14:22:39 I agree aligning this will be good, but we may want to check the impact on the heat DB, as IIRC the PATCH update will store every renamed file as a new entry in the files map, without purging any of the old ones
14:22:50 that's more of a heat known issue, but something to be aware of
14:22:57 that seems to be the recurring problem
14:23:12 mwhahaha: well, I think a multi-vendor setup might want flexibility. We used to have multiple vendors who preferred different config options
14:23:37 vendors are free to add their own names
14:23:42 but as a default we just ship 'services'
14:23:55 not 'puppet/services' and 'ansible/services' and 'docker/services'
14:24:01 +1 on a single tht/services implementation
14:24:21 yea i like that idea
14:24:25 mwhahaha: I think we might be painting ourselves into a corner a bit
14:24:31 mwhahaha: you missed extraconfig/services/
14:24:31 maybe symlink to the one we're actually using?
14:24:43 shardy: if we had just services when we did containers it would have been more disruptive I think
14:24:44 symlinks don't work in swift containers FYU
14:24:47 FYI
14:25:03 shardy: like we needed docker/services to have a window of dev time to do that
14:25:15 I tried that for https://review.openstack.org/#/c/574753/ and had to copy the files
14:25:45 dprince: yeah - but like I argued at the PTG it'd be nice to standardize on one implementation vs the duplicate options
14:25:56 dprince: I know we didn't reach agreement on that point though ;)
14:26:01 shardy: just got notified here on the swift keyword so i'm speaking out of context, but swift does now have a 'symlink' API if you need that... just FYI...
14:26:09 there's nothing that says you can't have a services/experimental/
14:26:13 where dev occurs
14:26:20 and the service graduates to services/
14:26:26 tdasilva: the issue is that a bulk upload of a tarball containing a symlink ignores the symlink
14:26:34 but i think there needs to be some structure and a rule set to apply
14:26:37 which AFAIK is still the case today, but happy to be corrected if not
14:26:41 it didn't work when I tried it
14:27:01 I think the single services directory is a bad idea. Like we are moving towards ansible... let's just call it that
14:27:01 we're just adding stuff with implementation names, so like do we need to add kubernetes/services
14:27:03 which i don't want
14:27:14 i don't want the name of the tool in the file
14:27:25 it's already gotten us in trouble
14:27:29 s/docker/containers
14:27:32 like when we move towards kubernetes I would want a separate 'services' implementation I think, and having a way to implement that is good
14:27:35 we need an abstract
14:27:48 mwhahaha: having to solve migration issues will always exist
14:27:49 to specify what it is, be it configmanagement/services and container/services
14:28:00 right, how about we not add to the migration issues
14:28:10 mwhahaha: by not having a way to have multiple 'services' implementations I think it might actually cause more upgrade problems
14:28:42 mwhahaha: in short, yes we hit some issues. But I think we have a better handle on how to work around those things now
14:28:45 so it sounds like this needs to be brought up at the next PTG and we need to have a solution
14:28:51 i do not want to add ansible/services in
14:29:20 given that we're ~3 weeks out from rocky m3, we're not going to solve it this cycle
14:29:21 FWIW my main concern is the layering, I mean it's good to be DRY but the current approach incurs a pretty high runtime overhead with all the nested stacks
14:29:22 mwhahaha: well, what we are doing now is very confusing. We specifically didn't put the containers stuff in puppet/services for the same reason
14:29:36 shardy: I think we can solve the layering issues
14:29:44 shardy: fwiw, this doesn't increase the layering
14:29:53 shardy: puppet would replace Ansible
14:30:26 dprince: ack, understood, was just explaining the motivation behind wanting something closer to a single implementation
14:30:37 watching *ResourceChain create during a deploy is fairly painful
14:30:39 shardy: and perhaps we could even implement some of it via Jinja so you get ansible/services and docker/services pulling in the same files and minimizing the heat resources that way
14:30:55 dprince: yeah, j2 includes is certainly one option
14:31:17 shardy: ++ on minimizing the resource chains in heat
14:31:23 so let's take this to the ML and get something outlined for the PTG
14:31:26 I was kind of hoping the heat templates would become a small shim to interface to the ansible roles
14:31:45 e.g. have most of the business logic in roles decoupled from the heat templates
14:31:46 i agree we need some organization, i just don't want to add yet another thing that we won't have completely migrated
14:31:55 mwhahaha: we can talk about it more. But again I think a single 'services' directory is a hard rule to follow. Honestly that sort of decision might force us to fork t-h-t in the future to move forwards
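[Editor's note] On the "j2 includes" option floated at 14:30:39/14:30:55 (thin per-implementation entry points pulling in one shared body so heat does not gain extra nested stacks), here is a small hypothetical Jinja2 sketch. The template names and contents are invented for illustration and are not from tripleo-heat-templates.

```python
# Hedged sketch of the "j2 includes" idea: two thin per-implementation
# templates include one shared service body, so the shared YAML is written
# once but rendered into both trees without extra nested heat resources.
from jinja2 import Environment, DictLoader

templates = {
    # shared service body, authored once (contents are made up)
    "common/service-body.yaml.j2": "outputs:\n  role_data:\n    description: shared role data\n",
    # thin wrappers that only differ by tree
    "ansible/services/docker.yaml.j2": "# ansible tree\n{% include 'common/service-body.yaml.j2' %}",
    "docker/services/docker.yaml.j2": "# docker tree\n{% include 'common/service-body.yaml.j2' %}",
}

env = Environment(loader=DictLoader(templates))
for name in ("ansible/services/docker.yaml.j2", "docker/services/docker.yaml.j2"):
    print("###", name)
    print(env.get_template(name).render())
```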
14:31:58 but I know we're not really there quite yet
14:32:38 All I'm advocating for in the meantime is clarity
14:32:42 dprince: so i don't think a single services directory is worse than {puppet,ansible,docker}/services, it could even be flipped to services/{puppet,ansible,docker}
14:32:51 the problem is i want to come up with something that we can fully migrate to
14:32:57 not just leave all the files everywhere
14:33:11 so i'd rather that we talk about it, plan a migration and 100% execute
14:33:23 mwhahaha: we are talking about it :)
14:33:32 beyond the 3 of us :D
14:34:06 we've seen ideas stall halfway because of a lack of full buy-in
14:34:13 so it's something we need to get everyone on board with
14:34:36 mwhahaha: that is why I brought it up
14:34:44 mwhahaha: I'd be happy to lead this discussion at the PTG
14:34:50 yeah it's definitely good to discuss it
14:34:53 mwhahaha: just on my mind
14:36:18 #action dprince to start discussion on service renaming (puppet -> ansible)
14:36:33 anything else?
14:36:45 maybe we could put that in a spec (or the ML is enough...)
14:37:11 (I already see a long thread, so a spec in gerrit might help)
14:38:06 we need to find ways to avoid hitting the CI walltime limit when switching jobs to a containerized UC and OC
14:38:25 just wanted to point out that it is becoming a real problem
14:38:51 bogdando: so like CI optimization for containers?
14:39:04 not that I wanted to go deep into details right here, just to point that out
14:39:09 yeah
14:39:26 it has been weeks, indeed. See https://review.openstack.org/575330
14:39:30 bogdando: sounds good to me
14:39:46 I wonder what could be the best way to do that
14:39:53 brainstorming or a meeting or whatever
14:40:20 ideally, with openstack infra folks participating :)
14:41:14 Alfredo Moralejo proposed openstack/tripleo-quickstart master: Add creator role to tempest configuration in Pike https://review.openstack.org/579888
14:41:46 to discuss options for including kolla builds into nodepool VM images, for example
14:41:53 that's basically it from my side
14:42:16 yea, we need to figure out how to push for the upstream container stuff; we're hurting from the transit i think
14:42:24 anything else?
14:44:08 sounds like no
14:44:10 thanks everyone
14:44:13 #endmeeting