14:00:34 <dprince> #startmeeting tripleo
14:00:35 <openstack> Meeting started Tue Oct 20 14:00:34 2015 UTC and is due to finish in 60 minutes.  The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:39 <openstack> The meeting name has been set to 'tripleo'
14:00:46 <d0ugal> Hello!
14:00:53 <dtantsur> o/
14:00:55 <derekh> yo
14:00:58 <marios> \o
14:01:04 <jdob> o/
14:01:09 <slagle> sup
14:01:29 <dprince> hi everyone
14:01:41 <dprince> #topic agenda
14:01:42 <dprince> * bugs
14:01:42 <dprince> * Projects releases or stable backports
14:01:42 <dprince> * CI
14:01:42 <dprince> * Specs
14:01:44 <dprince> * Meeting every week?
14:01:46 <dprince> * Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:01:48 <trown> o/
14:01:49 <dprince> * one off agenda items
14:01:52 <dprince> * open discussion
14:02:13 <jrist> o/
14:02:26 <dprince> I slightly adjusted the agenda this week, adding a "Stable backports" section to the project releases item
14:02:33 <florianf> o/
14:02:46 <dprince> also, added a top level item to continue the review priorities discussion (etherpad)
14:02:59 <dprince> any comments on this before we get started?
14:03:28 <jtomasek> o/
14:04:44 <dprince> #topic bugs
14:05:08 <dprince> any new/critical bugs that need to be highlighted this week?
14:05:46 <dprince> Using Delorean trunk I hit this one yesterday: https://bugs.launchpad.net/tripleo/+bug/1507738
14:05:46 <openstack> Launchpad bug 1507738 in tripleo "ipxe fails on Centos 7 (inc: command not found)" [Critical,Triaged]
14:06:25 <derekh> CI currently blocked by a puppet-heat regression https://bugs.launchpad.net/tripleo/+bug/1507934
14:06:25 <openstack> Launchpad bug 1507934 in puppet-heat "Could not find resource 'Anchor[heat::db::begin]'" [High,In progress] - Assigned to Clayton O'Neill (clayton-oneill)
14:06:44 <derekh> I've submitted a revert and a pin, doesn't look like the revert will land
14:07:21 <dprince> derekh: can we update our manifests to support this change?
14:07:43 <derekh> dprince: spredzy was looking into it
14:08:14 <derekh> dprince: they are trying to fix the puppet module I think
14:08:18 <dprince> okay, but in the meantime the pin keeps us running
14:08:33 <derekh> dprince: it should, we'll know when CI is done
14:08:43 <dprince> derekh: cool
14:08:55 <dprince> any other bugs to be aware of?
14:09:21 <dprince> #link https://bugs.launchpad.net/tripleo/+bug/1507934
14:09:21 <openstack> Launchpad bug 1507934 in puppet-heat "Could not find resource 'Anchor[heat::db::begin]'" [High,In progress] - Assigned to Clayton O'Neill (clayton-oneill)
14:09:33 <derekh> you'll notice the pin is a temp workaround that is now in the tripleo.sh repository; I think in future we should put them there (not tripleo-ci)
14:09:39 <dprince> #link https://bugs.launchpad.net/tripleo/+bug/1507738
14:09:39 <openstack> Launchpad bug 1507738 in tripleo "ipxe fails on Centos 7 (inc: command not found)" [Critical,Triaged]
14:10:03 <dprince> derekh: yeah, I noticed you left a comment on my tripleo-ci review to that effect as well.
14:10:07 <trown|mtg> +1 to putting pins in tripleo.sh
14:10:16 <dprince> derekh: I'd agree that tripleo.sh is a good place for these
14:11:03 <derekh> dprince: ya, I was going to resubmit your patch for tripleo.sh but this change should render it redundant https://review.openstack.org/#/c/229906/
14:12:09 <dprince> derekh: okay, so that change would make it build openstack-heat then?
14:12:21 <dprince> derekh: I will follow up on that afterwards perhaps...
14:12:43 <derekh> dprince: ya, I'll explain in #tripleo afterwards
14:13:11 <dprince> Sorry, for those not following there was a breakage in CI this weekend due to puppet-ceph moving its git to /openstack
14:13:21 <dprince> anyways, let's move along :)
14:13:45 <dprince> #topic Projects releases or stable backports
14:14:00 <dprince> shardy: would you like to give an update here?
14:14:14 <shardy> dprince: sure
14:14:23 <shardy> So, the release branch spec landed \o/
14:14:28 <trown|mtg> woot
14:14:31 <dprince> :)
14:14:34 <dtantsur> \o/
14:14:37 <shardy> And I pushed a project-config patch which prepares for creating them:
14:14:45 <shardy> #link https://review.openstack.org/#/c/237597/1
14:15:09 <shardy> when that lands, I'll create the branches and start work on making tripleo.sh support them, then wire that into CI
14:15:32 <shardy> So no action required from folks yet, but I hope that pretty soon after summit we should have the stable branches up and running
14:15:51 <dprince> all sounds good to me
14:16:01 <derekh> cool beans
14:16:04 <dtantsur> which point will these branches be created from, the current HEAD?
14:16:19 <dprince> shardy: any thoughts on which CI jobs we'll be running on stable?
14:16:45 <shardy> dprince: If we get it stable enough I was thinking the HA job, but open to suggestions
14:16:52 <dprince> dtantsur: I would think HEAD for now, yes
14:16:53 <shardy> I was thinking initially pick one, then go from there
14:17:00 <trown|mtg> shardy: dtantsur, I would suggest we make the stable branches start at the commits used for RDO liberty packaging, I can provide a list
14:17:12 <trown|mtg> it is more or less HEAD
14:17:13 <shardy> dtantsur: yeah, we'll pick a known-good point, e.g no pins in CI and branch from there
14:17:20 <derekh> shardy: ha probably makes most sense, I guess it would give us most coverage
14:17:26 <shardy> trown|mtg: Ok, that works, please do
14:17:40 <derekh> shardy: we could maybe add the other jobs also in the experimental queue
14:17:43 <slagle> we'll need some tripleo.sh patches as well to support the right repos in --repo-setup
14:17:43 <trown|mtg> are we going to do stable releases as well?
14:18:16 <shardy> trown|mtg: For now, I was assuming we wouldn't tag stable releases, as AFAIK openstack generally is moving away from that
14:19:02 <shardy> trown|mtg: if folks are happy with that, it seems simpler, e.g we just have a rolling release with CI coverage
14:19:10 <shardy> does that work wrt RDO plans?
14:19:11 <trown|mtg> hmm... that makes packaging a bit more of a pita, but if all of openstack is doing that
14:19:24 <dprince> okay, anything else for stable branches, etc?
14:19:38 <shardy> trown|mtg: I'll look into it, last I heard that was the direction, but we'll align with what other projects do
14:19:58 <shardy> so should be the same as other RDO packaging regardless
14:20:01 <trown|mtg> cool +1 to following what other projects do
14:20:03 <shardy> dprince: not from me, thanks!
14:20:13 <dprince> #topic CI
14:20:17 <dtantsur> ironic team is still making releases, so not sure..
14:20:21 <gfidente> I'd like to see the BP which does automatic backports when a commit has backport: $release implemented, but this is probably a little out of scope now
14:20:27 <dprince> derekh: any updates for CI?
14:20:57 <shardy> gfidente: that didn't actually get implemented
14:21:03 <derekh> dprince: not much happening, I've tested and submitted patches to update the jenkins nodes to F22, all seems ok there
14:21:10 <shardy> so we'll probably have to work out how to implement it before we can auto-backport
14:21:14 <gfidente> shardy, yeah read your comment :(
14:21:24 <derekh> dprince: we had an outage of our cloud for 12 hours last week
14:21:38 <derekh> my fault. move along
14:21:41 <dprince> derekh: yes, and thanks to you for resolving it quickly
14:21:42 <rhallisey> hey
14:22:24 <derekh> nothing much else to report
14:22:44 <dprince> rhallisey: hi, we are talking about CI now. any updates from your team?
14:23:11 <rhallisey> dprince, we're going to start trying to integrate there
14:23:30 <rhallisey> anyone I could work with to get this going?
14:23:40 <Slower> we have patches up for local registry and scripts to support containers
14:23:40 <derekh> somebody merge this, it fixes a bug where you may see CI logs for a seed before you even started one up https://review.openstack.org/#/c/230129/
14:23:42 <rhallisey> just need to be pointed in the right direction
14:23:51 <derekh> very confusing if you run across it
14:23:57 <dprince> rhallisey: okay, to be clear we are talking about integrate Docker CI jobs into CI right?
14:24:04 <rhallisey> dprince, yes
14:25:26 <dprince> derekh: do we have sufficient capacity for at least 1 docker job? along with the new stable branch HA job as well?
14:25:28 <derekh> rhallisey: give me a shout, and I'll see if I can help
14:25:53 <bnemec> We're going to need an upgrade job of some sort too...
14:26:02 <derekh> dprince: I'll take a look at numbers after this meeting and
14:26:07 <dprince> derekh: we may want to put in caching somewhere for docker images too
14:26:23 <rhallisey> derekh, ok thanks.
14:26:40 <derekh> dprince: Yup, I'm working on that for the other jobs but it requires we have a periodic job building images
14:26:55 <shardy> bnemec: good point, I was thinking we'd tackle that after we have working stable branches in place
14:26:56 <derekh> dprince: patch for that is here https://review.openstack.org/#/c/235321/
14:27:24 <rhallisey> derekh, correct, we would need to build to keep up
14:27:44 <shardy> before we even do that we could do with really basic update tests, e.g prove that updating between t-h-t revisions on master doesn't destroy any resources unexpectedly
14:28:34 <rhallisey> shardy, what do you mean by update tests?
14:28:52 <shardy> rhallisey: updating a deployed overcloud
14:29:00 <shardy> currently we only test initial deployment in CI
14:29:03 <Slower> cool!
14:29:07 <rhallisey> ok gotcha
14:29:29 <dprince> okay, any other CI updates?
14:29:37 <derekh> nope
14:29:50 <Slower> so with a periodic job to build containers, would we have a docker registry running to load them into?
14:29:59 <dprince> oh, for those who missed it last week our CI status page is now here
14:30:05 <dprince> #link http://tripleo.org/cistatus.html
14:30:22 <dprince> #topic Specs
14:30:34 <rhallisey> derekh, if you're around after the meeting, I'll ping you and hopefully get started on this
14:30:51 <derekh> rhallisey: ack, I'll be here
14:30:53 <dprince> any spec items need attention this week?
14:32:02 <dprince> #topic Meeting every week?
14:32:12 <rhallisey> bnemec, I updated the container spec. Thanks for the comments
14:32:20 <marios> +1 weekly
14:32:21 <derekh> +1
14:32:23 <jistr> +1
14:32:26 <trown|mtg> +1 weekly
14:32:40 <gfidente> +1 weekly
14:32:51 <marios> dprince: i have the review ready for the irc-meetings change fyi - (was curious to see how that works)
14:32:57 <marios> dprince: i didn't submit it
14:32:57 <gfidente> on the spec, I'd like some eyes on the external lb spec at https://review.openstack.org/233634
14:33:14 <dprince> marios: could you submit it now and link it in?
14:33:21 <marios> sure sec
14:33:34 <gfidente> the changes for the external lb spec are up for review already
14:33:43 <dprince> marios: sounds like everyone pretty much agrees meeting once a week is good
14:34:24 <marios> https://review.openstack.org/#/c/237609/
14:34:44 <marios> #link https://review.openstack.org/#/c/237609/
14:34:44 <dprince> #link https://review.openstack.org/#/c/237609/
14:34:49 <marios> sry :)
14:35:03 <dprince> #agreed everyone wants a weekly TripleO meeting
14:35:11 <dprince> marios: thanks!
14:35:16 <shardy> belated +1 on weekly! ;)
14:35:21 <bnemec> gfidente: Can you link those into the spec?  That might have answered some of my questions.
14:35:24 * shardy got distracted for a moment
14:35:26 <tzumainn> spec wise, if anyone has time to look at the exciting proposal for tuskar.... v3, that'd be appreciated! https://review.openstack.org/#/c/230432/
14:35:29 <d0ugal> +1
14:35:55 <dprince> #topic Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:36:14 <dprince> slagle: thanks for organizing this etherpad
14:36:20 <dprince> #link https://etherpad.openstack.org/p/tripleo-review-priorities
14:36:38 <slagle> tzumainn: how does that spec relate to https://review.openstack.org/#/c/227329/ ?
14:36:47 <dprince> has this been helpful for getting eyes on important patches?
14:36:56 <gfidente> bnemec, https://blueprints.launchpad.net/tripleo/+spec/tripleo-mitaka-external-load-balancer
14:36:58 <slagle> tzumainn: are they the same? or are these competing?
14:36:58 <derekh> let's delete things instead of strikethrough, it should be as concise as possible
14:37:30 <tzumainn> slagle, it's the same, rbrady is abandoning that patch at some point but we plan on adapting his work towards a tuskar v3 update
14:38:23 <slagle> k, thx
14:38:59 <slagle> derekh: fine by me, i started the strikethrough thing. was thinking of deleting the topic once all was struck through. i'm good either way
14:39:35 * dprince waits for any other specs updates
14:40:39 <derekh> slagle: ahh ok, so that kinda makes sense, I'm good either way now as well. gonna delete the done topics then
14:41:43 <dprince> slagle, derekh: so we agree deleting done (merged) review items on this etherpad is the way to go?
14:42:36 <derekh> dprince: ya, no point in clutter hanging around
14:42:50 <slagle> dprince: make it so!
14:43:30 <jistr> +1 for delete
14:43:35 <dprince> cool. One top patchset I'd like to highlight is an idea for more "composable roles" within t-h-t. I've linked it into the etherpad too but here it is:
14:43:43 <jrist> slagle picard
14:43:45 <dprince> https://review.openstack.org/#/c/236243/
14:44:48 <dprince> The motivation here was to make integration w/ new services like Trove, Manila, Sahara easier. But I think if we bought this approach we'd do all services the same way
14:45:21 <dprince> I will reply to shardy's thread on the mailing list with regard to these ideas
14:45:39 <shardy> dprince: not properly reviewed yet, but +1 on more granular role definition
14:45:48 <bnemec> ^Ditto
14:46:04 <slagle> dprince: do you have anything in progress to show what the templates/manifests would look like for, say, a standalone Glance API server?
14:46:11 <shardy> I think we need to eventually work towards a well defined "role plugin" type interface, where all services are deployed in the exact same way
14:46:12 <slagle> since that's what you started with :)
14:46:37 <dprince> shardy: yep, agree
14:46:42 <gfidente> dprince, I think it's very clean and nice but wanted to point out
14:46:47 <dprince> shardy: My initial example service was with glance
14:46:55 <gfidente> we have some issues when a resource type for a pre-existing resource changes
14:46:59 <gfidente> when doing upgrades
14:47:19 <gfidente> so I don't think we can avoid this if we want to make roles more granular, but it is worth looking at how we can survive
14:47:26 <gfidente> an upgrade
14:47:35 <shardy> that is a good point, we'll have to think carefully about the upgrade impact of a major refactor, but that shouldn't prevent us attempting it
14:47:56 <gfidente> indeed, I don't think it is avoidable
14:48:07 <dprince> gfidente: yes, there may be some rubs there perhaps
14:48:12 <gfidente> we'll change a resource type again in the future
14:48:41 <slagle> dprince: it'd be helpful to review how this might work, if that were there as well. in fact i was thinking of trying it myself to understand this patch better
14:48:48 <slagle> dprince: just didn't know if you had already tried it
14:48:48 <dprince> gfidente: along with the role changes there is perhaps a more radical idea we could investigate where we split our stack into 2 (or more)
14:49:08 <shardy> I think the answer may end up being to fix heat abandon/adopt features
14:49:15 <dprince> gfidente: stack 1 would create OS resources (servers), and the 2nd stack would only configure them (as external resources)
14:49:30 <shardy> so it's possible to completely rework the heat template architecture, without deleting any nodes
14:49:40 <slagle> shardy: dprince : i take it this patch does not use the resource chaining spec proposed for Heat
14:49:47 <slagle> any thoughts on how that might affect this?
14:49:52 <shardy> there are quite a few unsolved issues before we can do that
14:50:05 <dprince> if we took this approach we wouldn't need to delete servers, but we could drastically refactor the "configuration" resources as we see fit
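A rough sketch of the "2nd stack" idea described above, assuming the provisioning stack simply exports Nova server UUIDs and a configure-only stack consumes them through OS::Heat::SoftwareDeployment; the parameter and resource names are illustrative, not taken from any proposed patch:

heat_template_version: 2015-10-15

# Hypothetical configure-only stack: the server is created by a separate
# provisioning stack and only its ID is passed in, so this stack can be
# refactored or replaced without Heat ever deleting a node.
parameters:
  controller_server_id:
    type: string
    description: Nova server UUID output by the provisioning stack

resources:
  ControllerConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "service configuration would run here"

  ControllerDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: ControllerConfig}
      server: {get_param: controller_server_id}

Because no OS::Nova::Server resources live in the second stack, the "configuration" resources can be reworked freely while the first stack keeps the servers alive.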
14:50:07 <shardy> slagle: Yeah, I was thinking the same thing, I think it will be complementary
14:50:27 <shardy> e.g you specify a list of types as a resource chain, and each one would be a "role plugin" defining a service
14:50:30 <gfidente> dprince, ack I'd be happy to help there if I can
14:50:40 <shardy> then you do the cluster wide config of the groups of resource chains
14:50:43 <dprince> slagle: no resource chaining yet. I'm not super clear on how that helps
14:51:14 <shardy> dprince: It'd be a way to wire in a bunch of "plugin" type resources, e.g which all have common interfaces
14:51:27 <dprince> shardy: spec?
14:51:29 <slagle> i guess i was thinking you'd have a roles resource, mapped to glance-api, keystone-api, etc
14:51:36 <slagle> or whatever roles you wanted
14:51:48 <dprince> slagle: this is a good example of the interface I've got
14:51:50 <dprince> https://review.openstack.org/#/c/237370/1/overcloud-without-mergepy.yaml,cm
14:52:09 <dprince> basically you define an input which is an array of roles you want on that server type
14:52:12 <shardy> #link https://review.openstack.org/#/c/228615/
14:52:15 <shardy> dprince: ^^
14:52:33 <slagle> dprince: ah cool, that's what i was looking for :)
14:52:56 <dprince> slagle: yeah, and for pacemaker I've got a pacemaker version of the roles which "extend" the defaults
14:53:09 <dprince> so the duplicate puppet manifests would go away
14:53:11 <shardy> dprince: perfect, so the list in ControllerServices could eventually just be an input to a ResourceChain resource
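A hypothetical sketch of what shardy suggests here, assuming the ControllerServices list is fed straight into an OS::Heat::ResourceChain; the OS::TripleO::Role::* aliases and the config_settings output are invented for illustration and are not from the patch under review:

heat_template_version: 2015-10-15

parameters:
  ControllerServices:
    type: comma_delimited_list
    description: List of service "role" resource types to apply to this node type
    default:
      - OS::TripleO::Role::GlanceApi
      - OS::TripleO::Role::GlanceRegistry

resources:
  # Each entry in the list is instantiated in turn; every role template is
  # expected to expose the same inputs/outputs (the "role plugin" interface).
  ControllerServiceChain:
    type: OS::Heat::ResourceChain
    properties:
      concurrent: true
      resources: {get_param: ControllerServices}

outputs:
  role_data:
    # Collects the (hypothetical) config_settings output from each role.
    value: {get_attr: [ControllerServiceChain, attributes, config_settings]}

Each alias would then be mapped to a per-service template in the resource_registry, which is what makes the set of services on a node type swappable without editing the main template.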
14:53:20 <slagle> longer term, i'd like to make use of the composable roles to deploy the Undercloud as well, using the standalone Heat container idea
14:53:23 <dprince> https://review.openstack.org/#/c/237370/1/puppet/roles/pacemaker/glance-api.yaml,cm
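One way the "extend the defaults" part could work, assuming it is done with a resource_registry override in a pacemaker environment file; the alias and path below are illustrative only:

# hypothetical environments/puppet-pacemaker.yaml
resource_registry:
  # Pointing the alias at the pacemaker-managed variant swaps the role's
  # implementation without duplicating the rest of the templates or carrying
  # separate HA/non-HA puppet manifests.
  OS::TripleO::Role::GlanceApi: ../puppet/roles/pacemaker/glance-api.yaml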
14:53:42 <dprince> jistr: you might want to look closely at that and see if you buy it?
14:54:25 <gfidente> I'm happy that it keeps the network resources out of the role as well
14:54:40 <dprince> rhallisey: for docker I think we'd add some output metadata to describe the docker compose, and perhaps the actual containers we'd need (for the "plugin") too
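If that idea were taken literally, each role template might simply expose an extra output listing the container images it needs; the output name and image names below are hypothetical:

outputs:
  docker_containers:
    description: Container images this role needs (could also drive CI image caching)
    value:
      - kollaglue/centos-binary-glance-api
      - kollaglue/centos-binary-glance-registry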
14:54:55 <dprince> gfidente: exactly, network resources are tied to the servers themselves
14:55:03 <dprince> gfidente: a different "layer"
14:55:11 <gfidente> yes I think it's great this way
14:55:38 <dprince> gfidente: it won't hurt if say glance::api::bind_host is set... but there is no glance API or registry running
14:55:40 <jistr> dprince: yeah will look at the composable roles
14:55:47 <jistr> dprince: in general i like the idea a lot
14:56:17 <rhallisey> dprince, ya I think we should fit in pretty easily with composable roles. Metadata with a list of containers works
14:56:27 <jistr> dprince: same for the 2 separate stacks for HW prep and config phase. I'll think it through a bit more.
14:57:19 <dprince> jistr: yes, I've been chatting w/ the Kolla community about using TripleO as a "paving machine" (they have nothing that does baremetal provisioning). 2 stacks would help there
14:57:43 <dprince> or perhaps a more configurable single stack
14:57:49 <dprince> lots to think about
14:58:10 <dprince> #open discussion
14:58:15 <dprince> #topic open discussion
14:58:19 <gfidente> dprince, sec, the comment on bind_host
14:58:31 <dprince> any other things to bring up quickly in the meeting this week?
14:58:37 <shardy> dprince: IMO we shouldn't necessarily say that we couldn't just support two different heat templates
14:58:44 <gfidente> we can always pass that as param no?
14:59:02 <dprince> I probably won't be around to run the meeting next week. Anyone want to run it?
14:59:12 <dprince> Or perhaps we just cancel?
14:59:14 <shardy> it's kind of unusual the way TripleO uses one template for all deployment topologies, if we needed a "noconfig" overcloud template, IMHO that'd be perfectly fine
14:59:34 <dprince> shardy: that is the idea
14:59:35 <shardy> dprince: most folks will be at summit, so I'd say +1 on cancel
14:59:41 <slagle> see some of you in tokyo!
14:59:47 <d0ugal> Enjoy
14:59:50 <d0ugal> I am jealous :)
14:59:50 <dtantsur> ++
15:00:00 <dprince> gfidente: lets chat bind_host in #tripleo following this
15:00:08 * bnemec will not be attending next week regardless :-)
15:00:12 <dprince> #endmeeting