14:00:34 #startmeeting tripleo
14:00:35 Meeting started Tue Oct 20 14:00:34 2015 UTC and is due to finish in 60 minutes. The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:39 The meeting name has been set to 'tripleo'
14:00:46 Hello!
14:00:53 o/
14:00:55 yo
14:00:58 \o
14:01:04 o/
14:01:09 sup
14:01:29 hi everyone
14:01:41 #topic agenda
14:01:42 * bugs
14:01:42 * Projects releases or stable backports
14:01:42 * CI
14:01:42 * Specs
14:01:44 * Meeting every week?
14:01:46 * Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:01:48 o/
14:01:49 * one-off agenda items
14:01:52 * open discussion
14:02:13 o/
14:02:26 I slightly adjusted the agenda this week, adding a "Stable backports" section to the project releases item
14:02:33 o/
14:02:46 also, added a top-level item to continue the review priorities discussion (etherpad)
14:02:59 any comments on this before we get started?
14:03:28 o/
14:04:44 #topic bugs
14:05:08 any new/critical bugs that need highlighting this week?
14:05:46 Using Delorean trunk I hit this one yesterday: https://bugs.launchpad.net/tripleo/+bug/1507738
14:05:46 Launchpad bug 1507738 in tripleo "ipxe fails on Centos 7 (inc: command not found)" [Critical,Triaged]
14:06:25 CI is currently blocked by a puppet-heat regression: https://bugs.launchpad.net/tripleo/+bug/1507934
14:06:25 Launchpad bug 1507934 in puppet-heat "Could not find resource 'Anchor[heat::db::begin]'" [High,In progress] - Assigned to Clayton O'Neill (clayton-oneill)
14:06:44 I've submitted a revert and a pin; it doesn't look like the revert will land
14:07:21 derekh: can we update our manifests to support this change?
14:07:43 dprince: spredzy was looking into it
14:08:14 dprince: they are trying to fix the puppet module I think
14:08:18 okay, but in the meantime the pin keeps us running
14:08:33 dprince: it should, we'll know when CI is done
14:08:43 derekh: cool
14:08:55 any other bugs to be aware of?
14:09:21 #link https://bugs.launchpad.net/tripleo/+bug/1507934
14:09:21 Launchpad bug 1507934 in puppet-heat "Could not find resource 'Anchor[heat::db::begin]'" [High,In progress] - Assigned to Clayton O'Neill (clayton-oneill)
14:09:33 you'll notice the pin is a temp workaround that is now in the tripleo.sh repository; I think we should put them there in future (not tripleo-ci)
14:09:39 #link https://bugs.launchpad.net/tripleo/+bug/1507738
14:09:39 Launchpad bug 1507738 in tripleo "ipxe fails on Centos 7 (inc: command not found)" [Critical,Triaged]
14:10:03 derekh: yeah, I noticed you left a comment on my tripleo-ci review to that effect as well.
14:10:07 +1 to putting pins in tripleo.sh
14:10:16 derekh: I'd agree that tripleo.sh is a good place for these
14:11:03 dprince: ya, I was going to resubmit your patch for tripleo.sh but this change should make it redundant: https://review.openstack.org/#/c/229906/
14:12:09 derekh: okay, so that change would make it build openstack-heat then?
14:12:21 derekh: I will follow up on that afterwards perhaps...
14:12:43 dprince: ya, I'll explain in #tripleo afterwards
14:13:11 Sorry, for those not following: there was a breakage in CI this weekend due to puppet-ceph moving its git to /openstack
14:13:21 anyways, let's move along :)
14:13:45 #topic Projects releases or stable backports
14:14:00 shardy: would you like to give an update here?
14:14:14 dprince: sure
14:14:23 So, the release branch spec landed \o/
14:14:28 woot
14:14:31 :)
14:14:34 \o/
14:14:37 And I pushed a project-config patch which prepares for creating them:
14:14:45 #link https://review.openstack.org/#/c/237597/1
14:15:09 when that lands, I'll create the branches and start work on making tripleo.sh support them, then wire that into CI
14:15:32 So no action required from folks yet, but I hope that pretty soon after summit we should have the stable branches up and running
14:15:51 all sounds good to me
14:16:01 cool beans
14:16:04 which point will these branches be created from, the current HEAD?
14:16:19 shardy: any thoughts on which CI jobs we'll be running on stable?
14:16:45 dprince: If we get it stable enough I was thinking the HA job, but open to suggestions
14:16:52 dtantsur: I would think HEAD for now, yes
14:16:53 I was thinking initially pick one, then go from there
14:17:00 shardy: dtantsur, I would suggest we make the stable branches start at the commits used for RDO Liberty packaging, I can provide a list
14:17:12 it is more or less HEAD
14:17:13 dtantsur: yeah, we'll pick a known-good point, e.g. no pins in CI, and branch from there
14:17:20 shardy: HA probably makes most sense, I guess it would give us the most coverage
14:17:26 trown|mtg: Ok, that works, please do
14:17:40 shardy: we could maybe also add the other jobs in the experimental queue
14:17:43 we'll need some tripleo.sh patches as well to support the right repos in --repo-setup
14:17:43 are we going to do stable releases as well?
14:18:16 trown|mtg: For now, I was assuming we wouldn't tag stable releases, as AFAIK openstack generally is moving away from that
14:19:02 trown|mtg: if folks are happy with that, it seems simpler, e.g. we just have a rolling release with CI coverage
14:19:10 does that work wrt RDO plans?
14:19:11 hmm... that makes packaging a bit more of a pita, but if all of openstack is doing that
14:19:24 okay, anything else for stable branches, etc?
14:19:38 trown|mtg: I'll look into it, last I heard that was the direction, but we'll align with what other projects do
14:19:58 so it should be the same as other RDO packaging regardless
14:20:01 cool, +1 to following what other projects do
14:20:03 dprince: not from me, thanks!
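[Editor's note: for readers unfamiliar with how a CI job gets wired onto a stable branch, the sketch below shows, in the Zuul layout.yaml style openstack-infra used at the time, roughly what running the HA job on master plus the new branch could look like. The job and project names here are assumptions for illustration, not the contents of shardy's project-config patch.]

    jobs:
      # "branch" is a regex; the job only runs on refs that match it
      - name: gate-tripleo-ci-f22-ha
        branch: ^(master|stable/liberty)$

    projects:
      - name: openstack/tripleo-heat-templates
        check:
          - gate-tripleo-ci-f22-ha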
14:20:13 #topic CI
14:20:17 ironic team is still making releases, so not sure..
14:20:21 I'd like to see the BP which does automatic backports when a commit has "backport: $release" implemented, but this is probably a little out of scope now
14:20:27 derekh: any updates for CI?
14:20:57 gfidente: that didn't actually get implemented
14:21:03 dprince: not much happening, I've tested and submitted patches to update the jenkins nodes to F22, all seems ok there
14:21:10 so we'll probably have to work out how to implement it before we can auto-backport
14:21:14 shardy, yeah, read your comment :(
14:21:24 dprince: we had an outage of our cloud for 12 hours last week
14:21:38 my fault. move along
14:21:41 derekh: yes, and thanks to you for resolving it quickly
14:21:42 hey
14:22:24 nothing much else to report
14:22:44 rhallisey: hi, we are talking about CI now. any updates from your team?
14:23:11 dprince, we're going to start trying to integrate there
14:23:30 anyone I could work with to get this going?
14:23:40 we have patches up for a local registry and scripts to support containers
14:23:40 somebody merge this, it fixes a bug where you may see CI logs for a seed before you even started one up: https://review.openstack.org/#/c/230129/
14:23:42 just need to be pointed in the right direction
14:23:51 very confusing if you run across it
14:23:57 rhallisey: okay, to be clear, we are talking about integrating Docker jobs into CI, right?
14:24:04 dprince, yes
14:25:26 derekh: do we have sufficient capacity for at least 1 docker job? along with the new stable branch HA job as well?
14:25:28 rhallisey: give me a shout, and I'll see if I can help
14:25:53 We're going to need an upgrade job of some sort too...
14:26:02 dprince: I'll take a look at numbers after this meeting and
14:26:07 derekh: we may want to put in caching somewhere for docker images too
14:26:23 derekh, ok thanks.
14:26:40 dprince: Yup, I'm working on that for the other jobs but it requires we have a periodic job building images
14:26:55 bnemec: good point, I was thinking we'd tackle that after we have working stable branches in place
14:26:56 dprince: the patch for that is here: https://review.openstack.org/#/c/235321/
14:27:24 derekh, correct, we would need to build to keep up
14:27:44 before we even do that we could do with really basic update tests, e.g. prove that updating between t-h-t revisions on master doesn't destroy any resources unexpectedly
14:28:34 shardy, what do you mean by update tests?
14:28:52 rhallisey: updating a deployed overcloud
14:29:00 currently we only test initial deployment in CI
14:29:03 cool!
14:29:07 ok, gotcha
14:29:29 okay, any other CI updates?
14:29:37 nope
14:29:50 so with a periodic job to build containers, would we have a docker registry running to load them into?
14:29:59 oh, for those who missed it last week, our CI status page is now here
14:30:05 #link http://tripleo.org/cistatus.html
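[Editor's note: the periodic image-build job derekh refers to is review 235321; as a rough illustration of the idea (not that review's contents), a timed job in Jenkins Job Builder YAML looks roughly like the sketch below. The job name, node label, and script path are hypothetical.]

    - job:
        name: periodic-tripleo-ci-f22-build-images
        node: tripleo-f22
        triggers:
          # cron-style schedule: build images nightly so regular CI runs
          # can consume cached images instead of rebuilding them each time
          - timed: '0 2 * * *'
        builders:
          - shell: |
              # hypothetical entry point for the image build
              ./scripts/tripleo-build-images.sh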
14:30:22 #topic Specs
14:30:34 derekh, if you're around after the meeting, I'll ping you and hopefully get started on this
14:30:51 rhallisey: ack, I'll be here
14:30:53 any spec items need attention this week?
14:32:02 #topic Meeting every week?
14:32:12 bnemec, I updated the container spec. Thanks for the comments
14:32:20 +1 weekly
14:32:21 +1
14:32:23 +1
14:32:26 +1 weekly
14:32:40 +1 weekly
14:32:51 dprince: I have the review ready for the irc-meetings change fyi (was curious to see how that works)
14:32:57 dprince: I didn't submit it
14:32:57 on the spec side, I'd like some eyes on the external lb spec at https://review.openstack.org/233634
14:33:14 marios: could you submit it now and link it in?
14:33:21 sure, sec
14:33:34 the changes for the external lb spec are up for review already
14:33:43 marios: sounds like everyone pretty much agrees meeting once a week is good
14:34:24 https://review.openstack.org/#/c/237609/
14:34:44 #link https://review.openstack.org/#/c/237609/
14:34:44 #link https://review.openstack.org/#/c/237609/
14:34:49 sry :)
14:35:03 #agreed everyone wants a weekly TripleO meeting
14:35:11 marios: thanks!
14:35:16 belated +1 on weekly! ;)
14:35:21 gfidente: Can you link those into the spec? That might have answered some of my questions.
14:35:24 * shardy got distracted for a moment
14:35:26 spec-wise, if anyone has time to look at the exciting proposal for tuskar... v3, that'd be appreciated! https://review.openstack.org/#/c/230432/
14:35:29 +1
14:35:55 #topic Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:36:14 slagle: thanks for organizing this etherpad
14:36:20 #link https://etherpad.openstack.org/p/tripleo-review-priorities
14:36:38 tzumainn: how does that spec relate to https://review.openstack.org/#/c/227329/ ?
14:36:47 has this been helpful for getting eyes on important patches?
14:36:56 bnemec, https://blueprints.launchpad.net/tripleo/+spec/tripleo-mitaka-external-load-balancer
14:36:58 tzumainn: are they the same? or are these competing?
14:36:58 let's delete things instead of striking through; it should be as concise as possible
14:37:30 slagle, it's the same, rbrady is abandoning that patch at some point but we plan on adapting his work towards a tuskar v3 update
14:38:23 k, thx
14:38:59 derekh: fine by me, I started the strikethrough thing. was thinking of deleting the topic once all was struck through. I'm good either way
14:39:35 * dprince waits for any other specs updates
14:40:39 slagle: ahh ok, so that kinda makes sense, I'm good either way now as well. gonna delete the done topics so
14:41:43 slagle, derekh: so we agree deleting done (merged) review items on this etherpad is the way to go?
14:42:36 dprince: ya, no point in clutter hanging around
14:42:50 dprince: make it so!
14:43:30 +1 for delete
14:43:35 cool. One top patchset I'd like to highlight is an idea for more "composable roles" within t-h-t. I've linked it into the etherpad too, but here it is:
14:43:43 slagle: picard
14:43:45 https://review.openstack.org/#/c/236243/
14:44:48 The motivation here was to make integration w/ new services like Trove, Manila, Sahara easier. But I think if we buy into this approach we'd do all services the same way
14:45:21 I will reply to shardy's thread on the mailing list with regard to these ideas
14:45:39 dprince: not properly reviewed yet, but +1 on more granular role definition
14:45:48 ^Ditto
14:46:04 dprince: do you have anything in progress to show what the templates/manifests would look like for, say, a standalone Glance API server?
14:46:11 I think we need to eventually work towards a well-defined "role plugin" type interface, where all services are deployed in the exact same way
14:46:12 since that's what you started with :)
14:46:37 shardy: yep, agree
14:46:42 dprince, I think it's very clean and nice but wanted to point out
14:46:47 shardy: My initial example service was with glance
14:46:55 we have some issues when a resource type for a pre-existing resource changes
14:46:59 when doing upgrades
14:47:19 so I don't think we can avoid this if we want to make roles more granular, but it is worth looking at how we can survive
14:47:26 an upgrade
14:47:35 that is a good point, we'll have to think carefully about the upgrade impact of a major refactor, but that shouldn't prevent us attempting it
14:47:56 indeed, I don't think it is avoidable
14:48:07 gfidente: yes, there may be some rubs there perhaps
14:48:12 we'll change a resource type again in the future
14:48:41 dprince: it'd be helpful to review how this might work, if that were there as well. in fact I was thinking of trying it myself to understand this patch better
14:48:48 dprince: just didn't know if you had already tried it
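[Editor's note: to make the upgrade concern above concrete: granular roles would typically be wired in through the resource_registry, and when a stack update changes the type backing an existing resource, Heat replaces that resource rather than updating it in place, which is the hazard gfidente describes. A hypothetical before/after, with invented mapping and template names:]

    # environment as originally deployed (hypothetical names)
    resource_registry:
      OS::TripleO::ControllerConfig: puppet/controller-monolithic.yaml

    # after a composable-roles refactor: the same logical service now
    # comes from a different template, so resources inside the nested
    # stack can change type and be replaced on the next update
    resource_registry:
      OS::TripleO::ControllerConfig: puppet/roles/controller-role-chain.yaml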
14:48:48 gfidente: along with the role changes there is perhaps a more radical idea we could investigate, where we split our stack into 2 (or more)
14:49:08 I think the answer may end up being to fix heat abandon/adopt features
14:49:15 gfidente: stack 1 would create OS resources (servers), and the 2nd stack would only configure them (as external resources)
14:49:30 so it's possible to completely rework the heat template architecture without deleting any nodes
14:49:40 shardy, dprince: I take it this patch does not use the resource chaining spec proposed for Heat
14:49:47 any thoughts on how that might affect this?
14:49:52 there are quite a few unsolved issues before we can do that
14:50:05 if we took this approach we wouldn't need to delete servers, but we could drastically refactor the "configuration" resources as we see fit
14:50:07 slagle: Yeah, I was thinking the same thing, I think it will be complementary
14:50:27 e.g. you specify a list of types as a resource chain, and each one would be a "role plugin" defining a service
14:50:30 dprince, ack, I'd be happy to help there if I can
14:50:40 then you do the cluster-wide config of the groups of resource chains
14:50:43 slagle: no resource chaining yet. I'm not super clear on how that helps
14:51:14 dprince: It'd be a way to wire in a bunch of "plugin" type resources, e.g. which all have common interfaces
14:51:27 shardy: spec?
14:51:29 I guess I was thinking you'd have a roles resource, mapped to glance-api, keystone-api, etc
14:51:36 or whatever roles you wanted
14:51:48 slagle: this is a good example of the interface I've got
14:51:50 https://review.openstack.org/#/c/237370/1/overcloud-without-mergepy.yaml,cm
14:52:09 basically you define an input which is an array of roles you want on that server type
14:52:12 #link https://review.openstack.org/#/c/228615/
14:52:15 dprince: ^^
14:52:33 dprince: ah cool, that's what I was looking for :)
14:52:56 slagle: yeah, and for pacemaker I've got a pacemaker version of the roles which "extend" the defaults
14:53:09 so the duplicate puppet manifests would go away
14:53:11 dprince: perfect, so the list in ControllerServices could eventually just be an input to a ResourceChain resource
14:53:20 longer term, I'd like to make use of the composable roles to deploy the Undercloud as well, using the standalone Heat container idea
14:53:23 https://review.openstack.org/#/c/237370/1/puppet/roles/pacemaker/glance-api.yaml,cm
14:53:42 jistr: you might want to look closely at that and see if you buy it?
14:54:25 I'm happy that it keeps the network resources out of the role as well
14:54:40 rhallisey: for docker I think we'd add some output metadata to describe the docker compose, and perhaps the actual containers we'd need (for the "plugin") too
14:54:55 gfidente: exactly, network resources are tied to the servers themselves
14:55:03 gfidente: a different "layer"
14:55:11 yes, I think it's great this way
14:55:38 gfidente: it won't hurt if, say, glance::api::bind_host is set... but there is no glance API or registry running
14:55:40 dprince: yeah, will look at the composable roles
14:55:47 dprince: in general I like the idea a lot
14:56:17 dprince, ya, I think we should fit in pretty easily with composable roles. Metadata with a list of containers works
14:56:27 dprince: same for the 2 separate stacks for HW prep and config phase. I'll think it through a bit more.
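[Editor's note: pulling together dprince's "array of roles" input and slagle's ResourceChain observation, a minimal Heat template sketch of that interface might look like the following. OS::Heat::ResourceChain is what the linked resource-chaining spec (review 228615) proposes; it was not yet implemented at the time of this meeting, and the parameter name, defaults, and role type names below are illustrative, not copied from the patch under review.]

    heat_template_version: 2015-04-30

    parameters:
      ControllerServices:
        # the array-of-roles input: each entry is a provider resource
        # type backed by a role template (e.g. puppet/roles/glance-api.yaml)
        # via the resource_registry
        type: comma_delimited_list
        default:
          - OS::TripleO::Services::GlanceApi
          - OS::TripleO::Services::KeystoneApi

    resources:
      ControllerServiceChain:
        # instantiates each type in the list with the same properties,
        # so every service is deployed through one uniform "role plugin"
        # interface
        type: OS::Heat::ResourceChain
        properties:
          resources: {get_param: ControllerServices}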
14:57:19 jistr: yes, I've been chatting w/ the Kolla community about using TripleO as a "paving machine" (they have nothing that does baremetal provisioning). 2 stacks would help there
14:57:43 or perhaps a more configurable single stack
14:57:49 lots to think about
14:58:10 #open discussion
14:58:15 #topic open discussion
14:58:19 dprince, sec, the comment on bind_host
14:58:31 any other things to bring up quickly in the meeting this week?
14:58:37 dprince: IMO we shouldn't necessarily say that we couldn't just support two different heat templates
14:58:44 we can always pass that as a param, no?
14:59:02 I probably won't be around to run the meeting next week. Anyone want to run it?
14:59:12 Or perhaps we just cancel?
14:59:14 it's kind of unusual the way TripleO uses one template for all deployment topologies; if we needed a "noconfig" overcloud template, IMHO that'd be perfectly fine
14:59:34 shardy: that is the idea
14:59:35 dprince: most folks will be at summit, so I'd say +1 on cancel
14:59:41 see some of you in Tokyo!
14:59:47 Enjoy
14:59:50 I am jealous :)
14:59:50 ++
15:00:00 gfidente: let's chat bind_host in #tripleo following this
15:00:08 * bnemec will not be attending next week regardless :-)
15:00:12 #endmeeting
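[Editor's note: on the two-stack idea discussed above (one stack creates the servers, a second only configures them), the configuration stack would act on servers it did not create, e.g. by taking their IDs as a parameter, so the configuration templates could be reworked freely without Heat ever replacing a node. A minimal hypothetical sketch using resource types that existed in Liberty-era Heat; the parameter interface is invented:]

    heat_template_version: 2015-04-30

    parameters:
      controller_servers:
        # map of logical name -> Nova server UUID, produced by the
        # separate provisioning stack (hypothetical interface)
        type: json

    resources:
      controller_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/bash
            # placeholder for the real configuration payload
            echo "configuring controller services"

      controller_deployments:
        # applies the config to the externally-created servers, so this
        # stack never owns (and can never delete) the nodes themselves
        type: OS::Heat::SoftwareDeployments
        properties:
          config: {get_resource: controller_config}
          servers: {get_param: controller_servers}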