14:01:19 #startmeeting tripleo
14:01:20 Meeting started Tue Nov 17 14:01:19 2015 UTC and is due to finish in 60 minutes. The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:23 o/
14:01:25 o/
14:01:25 The meeting name has been set to 'tripleo'
14:01:32 o/
14:02:00 o/
14:02:02 howdy
14:02:15 #topic agenda
14:02:15 * bugs
14:02:15 * Projects releases or stable backports
14:02:15 * CI
14:02:15 * Specs
14:02:18 * Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:02:21 * one off agenda items
14:02:23 * open discussion
14:02:48 looks like a normal agenda this week. Did I miss anything?
14:03:14 o/
14:04:38 hi
14:05:11 #topic bugs
14:05:51 From last week, we were able to get this fixed in Nova https://bugs.launchpad.net/tripleo/+bug/1513879
14:05:51 Launchpad bug 1513879 in tripleo "NeutronClientException: 404 Not Found" [High,Triaged]
14:07:24 I would also highlight this bug: https://bugs.launchpad.net/tripleo/+bug/1515315
14:07:24 Launchpad bug 1515315 in tripleo "instack fails to create ctlplane network" [Critical,In progress] - Assigned to Dan Prince (dan-prince)
14:07:45 we'll need that to bump the DELOREAN_REPO_URL in our CI too ^^
14:09:09 any other bugs to highlight this week?
14:09:54 tripleo.sh doesn't currently work outside of CI due to puppet versions, I'm working on fixing that
14:10:12 not yet raised a bug, installing from source works for master, not for stable
14:11:02 shardy: cool, would be good to fix that
14:12:06 hi
14:12:46 One more issue I'd mention for anyone using Delorean trunk. That repo was broken late last week due to a neutron packaging issue.
14:12:53 yesterday that got reverted in https://github.com/openstack-packages/neutron/commit/4c82af80863c93f3860906ed6a98a16c3352edd8
14:13:10 so... again I think Delorean trunk is (mostly) working...
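[Editor's note: for readers unfamiliar with the DELOREAN_REPO_URL discussion above, consuming a Delorean repo means pointing yum at a snapshot URL that changes with every promoted build. A rough, purely illustrative sketch of such a repo file (the hostname and path below are examples, not the URL discussed in this meeting):]

```
# /etc/yum.repos.d/delorean.repo -- illustrative example only;
# the baseurl path changes with each promoted repo snapshot
[delorean]
name=delorean-trunk
baseurl=https://trunk.rdoproject.org/centos7/current/
enabled=1
gpgcheck=0
priority=1
```

[Bumping DELOREAN_REPO_URL in CI amounts to swapping this baseurl for a newer, hopefully passing, snapshot.]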
14:13:40 * dprince isn't sure how many people use delorean trunk these days but thought it worth mentioning...
14:13:52 The periodic jobs should catch this sort of stuff faster now, right?
14:14:08 bnemec: hopefully, but we need to pay attention to them
14:14:28 Yeah, can we get that status added to the CI status on tripleo.org?
14:14:49 bnemec: sure, good idea
14:15:14 ya I check tripleo.org CI status a couple times a day
14:15:49 trown: because of the cool owl I hope
14:15:56 :)
14:16:07 * d0ugal probably doesn't check it enough
14:16:33 ya, I just like staring into the owl's eyes
14:16:45 So stable CI just got enabled today (currently failing due to the puppet module issues)
14:16:45 moving to CI
14:16:49 #topic CI
14:17:07 anyone interested in updating tripleo-ci/scripts/tripleo-jobs.py
14:17:10 it would be good to differentiate the stable from trunk jobs in the tripleo.org status page?
14:17:29 bnemec: ^^ that script generates the current CI status page
14:17:38 shardy: dprince, for stable jobs wouldn't we just want to use current-passed-ci from RDO?
14:17:51 instead of our out of date current-tripleo
14:18:03 trown: for the puppet modules?
14:18:07 I have the liberty current-passed-ci very nearly automated
14:18:18 shardy: ya for anything not in the list we build
14:18:28 but it would immediately resolve the puppet module issue
14:18:47 trown: let's chat after, I was hoping we could align the RDO and upstream approaches
14:19:00 current-tripleo will be updated automatically too now that we have periodic jobs.
14:19:01 shardy: cool, sounds good
14:19:10 I tried to do that with the $STABLE_RELEASE patch, but the puppet source override breaks it atm
14:19:21 shouldn't be too hard to fix when I understand the right combinations of repos
14:19:26 bnemec: ya just seems odd to have 2 different promote jobs for not much gain on the liberty branch
14:20:57 okay, so derekh sent me a note saying that bumping the trunk Delorean URL is close (we think)
14:21:01 https://review.openstack.org/#/c/229789/
14:21:01 I intend to put a tripleo.sh job in the liberty promote pipeline as well, but let's talk after
14:21:33 I think https://review.openstack.org/#/c/244197/ may be the only blocker that I know of at this point.
14:21:54 which fixes the same bug I linked a few minutes ago
14:22:20 trown: FYI https://review.openstack.org/#/c/245670/ is my recheck patch which shows the issues atm
14:22:31 so hopefully soon we can bump our stable CI delorean link to a more recent version...
14:23:22 dprince: +A
14:23:55 shardy: any updates on the 'updates' CI job
14:24:01 bnemec: great, thanks
14:24:53 dprince: Not yet, just getting stable CI in was the first step, and that has proven far more time consuming than anticipated
14:25:10 the main problem is project-config review latency, and no way to test those patches
14:25:31 IMHO it's worth revisiting the idea of running our CI outside of infra control at some point, as it's a huge bottleneck
14:25:52 but I know that was dismissed last time due to goals for cross-project gating
14:25:55 shardy: that is one place where doing tripleo stable ci in RDO would be a huge win
14:25:56 We already function as essentially third-party CI anyway.
14:26:08 We could make that official.
14:26:35 shardy: I will try to ping in infra to help these things along
14:26:37 Especially given the early plans to combine all the hardware into one big CI cloud.
14:27:15 shardy: while we wait, you could post an ad-hoc tripleo-ci job that overrides one of our existing jobs to do the same tests
14:27:54 dprince: well the change has landed now, it just took 2 weeks longer than I expected
14:28:18 dprince: I'll sync up with derekh as he's made some progress on initial update scripts in tripleo.sh
14:28:36 & we can hopefully show some faster progress on basic update testing after stable CI is working
14:28:40 shardy: yep, this is why we try to manage some things in tripleo-ci
14:28:49 at least the jobs are running now, which is a big step forward :)
14:29:00 dprince: ack, I fully understand why now :D
14:29:06 cool
14:29:12 any other CI updates?
14:29:58 #topic Specs
14:30:34 I posted 2 new specs this week
14:30:59 #link Deploy Puppet modules via Swift: https://review.openstack.org/#/c/245309/
14:31:16 #link composable services within roles https://review.openstack.org/#/c/245804/
14:31:46 I sort of went ahead and implemented a fully working version of the puppet module deployment w/ swift
14:32:05 dprince: i'd really like to see that be more generic
14:32:09 the code wasn't too hard there
14:32:13 i don't know why that would be a different spec
14:32:31 there doesn't seem to be anything specific about puppet modules afaict
14:32:32 slagle: you mean more generic like what steve baker was suggesting?
14:32:42 yea, i suggested the same in my review
14:32:57 could be tgz/rpm/whatever
14:33:01 slagle: sure, initially I think I thought you were suggesting just deploying openstack-puppet-modules
14:33:12 slagle: steve baker wants to deploy *any* RPM this way
14:33:29 slagle: which I personally think is a bit out of scope
14:33:30 take whatever is found in the bucket, if it's tgz, extract it to /, if it's rpm, install it
14:34:11 given the mechanism would be identical for any tgz/rpm, i guess i don't understand why it would be out of scope
14:34:21 why would you do that vs just exposing a webserver on the undercloud with a yum repo in it?
14:34:37 slagle: sure, but from a client tooling perspective I would still want this https://review.openstack.org/#/c/245310/
14:34:51 slagle: the underlying mechanism can easily be made to support what you guys are asking I think
14:35:12 slagle: but I'd rather the script/tool for puppet modules to be focused on just this task
14:35:15 shardy: that would work too, but swift is already there
14:35:22 I've been doing exactly that with local delorean builds, it seems odd to wire RPMs via swift
14:36:01 maybe this whole thing is just too subjective :)
14:36:04 slagle: Yeah I think it's fine provided we leave the RPMs in swift and pass a tempurl
14:36:06 slagle: I said this in my last comment, but I gotta say that my motivation for this has nothing to do w/ RPMs (although I will support them if need be)
14:36:10 swift is no less odd for tgz's either
14:36:16 might as well just use a static webserver on the undercloud
14:36:36 slagle: it's the tar/copy thing I found odd
14:36:38 * bnemec wants no more bash scripts. :-/
14:36:51 in fact... the real point of using Swift to deploy a directory of puppet modules was about *not* having to go through the package build process
14:37:04 this was feedback from Graeme Giles at the summit too...
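[Editor's note: the generic "take whatever is found in the bucket" mechanism slagle describes above could be sketched roughly as below. This is a hypothetical helper, not the code from the Swift deployment spec; the RPM branch assumes a yum-based node and root privileges.]

```python
import subprocess
import tarfile


def deploy_artifact(path, dest="/"):
    """Dispatch on artifact type: extract tarballs, install RPMs.

    Hypothetical sketch of the generic mechanism discussed in the
    meeting -- not the actual implementation from the spec.
    """
    if path.endswith((".tgz", ".tar.gz")):
        # tgz: unpack the contents directly under dest (e.g. "/")
        with tarfile.open(path, "r:gz") as tar:
            tar.extractall(dest)
        return "extracted"
    elif path.endswith(".rpm"):
        # rpm: hand off to the package manager; only meaningful when
        # run with root privileges on the overcloud node itself
        subprocess.check_call(["yum", "-y", "localinstall", path])
        return "installed"
    raise ValueError("unsupported artifact type: %s" % path)
```

[Either way the artifact itself could come from a Swift tempurl or a plain webserver on the undercloud; the dispatch logic is the same.]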
14:37:39 bnemec: there are some things that are *much* more verbose in python
14:37:45 that was one point of it
14:37:50 bnemec: this is perhaps one of those things...
14:37:51 dprince: And testable ;-)
14:38:00 i just don't see it as a tgz vs rpm discussion
14:38:19 no matter which i use, i want to get it updated on the overcloud
14:38:44 slagle: we already have a mechanism to deploy an updated version of the openstack-puppet-modules RPM though (via the normal yum upgrade)
14:39:04 slagle: regardless, I agree we can support the RPM (easily). Patches to do that will follow
14:39:16 Yeah, I guess the method is somewhat a matter of preference, but delorean w/RPMs over http just makes sense to me as it's aligned with what we do in CI
14:39:25 that mechanism doesn't preupdate opm
14:39:43 slagle: hmmm, it was intended to
14:40:00 slagle: if it doesn't then that is probably a missing piece of the update workflow
14:40:43 maybe, it definitely doesn't do that now
14:40:50 anyway, perhaps we can discuss this more on the spec
14:41:00 slagle: sounds like a bug to me
14:41:54 The other spec (composable roles) is what I'm super interested in
14:42:01 would love feedback on that one too
14:42:36 That one's going to be an upgrade nightmare, isn't it?
14:42:57 bnemec: so I've listed the upgrades job as a specific CI requirement in the spec
14:43:09 bnemec: but I don't think it would necessarily be a nightmare
14:43:20 i don't think it will be a nightmare if we don't allow changing roles on upgrade
14:43:37 dprince: Yeah, I'm just trying to figure out how it will even work. We can barely do upgrades without completely refactoring the templates today.
14:43:52 e.g., you can't go from an already deployed cloud to one with roles split out however you want
14:43:52 bnemec: having a more defined vendor/driver "interface" would actually make upgrades better in some cases
14:44:04 With such a refactoring it seems like a potential problem.
14:44:13 slagle: agree, our default enabled roles would probably match our defaults today
14:45:07 Anyways, this composable roles thing is super important IMO if we want to scale tripleo-heat-templates
14:45:15 our current arch isn't scaling so well
14:45:43 bnemec: i'm not saying there won't be issues, i'm sure there will be. i just think that we have to land it in such a way that upgrades work
14:45:48 specifically overcloud_controller.pp and the pacemaker version
14:45:52 bnemec: if they don't/won't, we can't land it
14:46:02 ya, it would be predicated on the upgrades CI passing
14:46:08 trown: yep
14:46:49 see line 256 in the spec: We should have the upgrades job in place to ensure we don't break the ability to heat stack-update from a previous stack.
14:47:07 Yeah, I guess I should leave upgrades talk to people who actually work on it. It's just my first thought when I hear "let's completely change the architecture of our heat templates". :-)
14:47:30 bnemec: this isn't as drastic as it sounds
14:47:46 bnemec: the underlying workflow is much the same (steps, through a puppet manifest)
14:47:50 It's not such a big issue when you're changing mostly what software is deployed, vs nodes and networking
14:47:55 bnemec: just creating a more defined interface
14:48:27 bnemec: at the end of the day it is basically the same hiera and puppet modules getting executed...
14:48:32 but I agree we need moar testing to prove any major rework
14:49:16 if we don't do this, in 3-6 months adding patches for new services and vendors is going to be even more painful IMO
14:49:54 Okay, this is probably not a discussion I should have started here. It probably belongs on the spec.
14:50:01 Probably. ;-)
14:50:12 We are going to run out of time too :)
14:50:22 agree, let's move to review priorities
14:50:34 #topic Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:51:21 any reviews that need highlighting
14:52:04 the string of nuage patches
14:52:30 dprince: I believe they are listed on the review priorities etherpad
14:52:37 Wow, now I know why my patches don't get reviews. I didn't realize _everything_ was going on the priority page. :-)
14:52:43 * bnemec adds his stuff
14:53:02 i was going to ask if we felt the etherpad was still useful :)
14:53:09 bnemec: probably not everything should go onto the priorities page, otherwise it isn't going to help us
14:53:31 I liked the tripleo inbox created by the gerrit dashboard
14:53:37 slagle: unclear to me whether it is still useful
14:53:44 gfidente: That's still available.
14:53:51 * dprince uses http://tripleo.org/reviews.html
14:54:00 yeah I think we can customize the queries as well if we want to define new 'priority' rules
14:54:00 I haven't actually looked at it before (obviously).
14:54:33 dprince: huh, cool. I didn't even know about that one
14:54:42 dprince: the midokura patches should be close now (tht, puppet-tripleo, tripleo-heat-templates), will update the etherpad (it's pointing to an older tht change)
14:54:56 marios: ack
14:54:57 dprince: yea, i don't use that one :) i find score rather meaningless
14:55:11 slagle: it's just a column, feel free to ignore
14:55:20 should new patches even care about the templates in os-apply-config?
14:55:36 slagle: tripleo.org loads a lot quicker than gerrit for me
14:56:08 thrash: i tend to just leave a comment to drop that file (any templates under the os-apply-config/ dir)
14:56:27 marios: ack
14:56:38 thrash: It sounds like we should rename that to deprecated or something.
14:56:40 thrash: depends on the patch, there are some os-apply-config remnants that are still important
14:56:45 we should move them to deprecated soon IMO
14:56:55 oh in tripleo-heat-templates
14:57:02 I'm not sure why we ever bothered to create a separate os-apply-config directory when realistically that stuff has all been deprecated for a long time.
14:57:03 dprince: yes. :)
14:57:05 yeah, those can be deprecated
14:57:15 and/or just purge all the old untested stuff
14:57:22 dprince: so net-new functionality should ignore them
14:57:30 yep
14:57:55 so we didn't get to one off agenda items or open discussion this week
14:57:57 epic fail on the one-off agenda items
14:57:59 sorry about that
14:58:12 can I ask a quick question?
14:58:14 shardy: did you want to bring something up quick :)
14:58:26 it sounds like people are okay with putting the api in tripleo-common and renaming tripleo-common to tripleo
14:58:30 one-off items should probably come earlier in the list I guess
14:58:32 * dprince fails to watch the clock this week
14:58:36 shardy: +1
14:58:37 * bnemec fondly remembers when this meeting was slagle talking to himself :-)
14:58:41 the leftover issue is what that means for tripleo-incubator, which is packaged as tripleo
14:58:45 #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/079543.html
14:58:57 bnemec: How do you remember that if it was just slagle? ;)
14:58:59 o/
14:59:01 dprince: Yeah, can folks please discuss parameters vs parameter_defaults there
14:59:10 shardy: ack, we will all reply to your thread :)
14:59:11 d0ugal: I lurked ;-)
14:59:17 I'd like to clarify/document when to use each so we can review more consistently
14:59:26 a good thread I think
14:59:34 +1 to documenting that.
15:00:18 okay, thanks everyone
15:00:23 Thanks!
15:00:27 #endmeeting
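[Editor's note: for readers catching up on the parameters vs parameter_defaults question shardy raises near the end: in a Heat environment file, `parameters` is matched only against the top-level template, while `parameter_defaults` provides defaults that also reach nested stacks. A minimal illustrative environment file (the parameter names here are just examples):]

```
# Hypothetical Heat environment file illustrating the two sections
parameters:
  # Applied only to parameters declared in the top-level template
  ControllerCount: 3

parameter_defaults:
  # Supplied as a default to *any* template in the tree that
  # declares a parameter with this name, nested stacks included
  NtpServer: pool.ntp.org
```

[This difference is why parameter_defaults became the preferred mechanism for deeply nested template trees like tripleo-heat-templates.]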