14:00:42 <dprince> #startmeeting tripleo
14:00:43 <openstack> Meeting started Tue Mar 22 14:00:42 2016 UTC and is due to finish in 60 minutes.  The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:48 <openstack> The meeting name has been set to 'tripleo'
14:00:50 <EmilienM> o/
14:00:55 <tzumainn> hi!
14:00:56 <dprince> hi everyone
14:00:56 <derekh> \o/
14:01:01 <michchap_> o/
14:01:03 <shardy> o/
14:01:03 <jdob> o/
14:01:07 <qasims_> o/
14:01:15 <d0ugal> o/
14:01:16 <pradk> o/
14:01:19 <bandini> o/
14:01:22 <adarazs> \o/
14:01:58 <marios> o/
14:02:07 <matbu> \o
14:02:07 <trown> o/
14:02:12 <rbrady> o/
14:02:20 <gfidente> o/
14:02:31 <dprince> #topic agenda
14:02:31 <dprince> * bugs
14:02:31 <dprince> * Projects releases or stable backports
14:02:31 <dprince> * CI
14:02:31 <dprince> * Specs
14:02:33 <dprince> * one off agenda items
14:02:36 <dprince> * open discussion
14:03:03 <dprince> there are no one-off items to discuss this week. Anything else to add to our topics that doesn't fit into one of these categories?
14:03:05 <shardy> Hey, the review highlights section is a bit outdated in the wiki:
14:03:07 <shardy> https://wiki.openstack.org/wiki/Meetings/TripleO
14:03:34 <dprince> shardy: yeah, we've been skipping that in the weekly meetings too
14:03:41 <dprince> shardy: I think it is safe to remove it for now
14:03:48 <shardy> dprince: ack, I'll do it now
14:04:30 <beagles> o/ (a bit tardy)
14:04:38 <dprince> okay, lets get started
14:04:43 <dprince> #topic bugs
14:05:13 <dprince> I got bit by the os-collect-config breakage yesterday so it is worth mentioning I think https://bugs.launchpad.net/tripleo/+bug/1560257
14:05:15 <openstack> Launchpad bug 1560257 in tripleo "os-collect-config not covered in CI" [Critical,Triaged] - Assigned to James Slagle (james-slagle)
14:05:36 <dprince> basically, there was a switch to oslo.log in os-collect-config that broke us
14:05:54 <slagle> i'll try and look at it this week
14:05:58 <trown> is os-collect-config just not getting built in CI? and therefore pulled from current?
14:06:02 <dprince> it was reverted so now we are okay, but it pointed out that we aren't even covering it in our CI :/
14:06:05 <slagle> trown: yes, that's the issue
14:06:11 <slagle> see the bug
14:06:17 <trown> k, I can help dig in to that too
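For context, the suspicion above is that os-collect-config is not built from the change under test, so the image picks it up from the delorean current repo instead. A minimal way to confirm where the installed package came from, assuming a typical yum-based undercloud/overcloud image (the repo names shown will be whatever the image was built against):

    # show which repo the installed os-collect-config package came from
    yum list installed os-collect-config
    # and the exact version/release string for comparison against delorean current
    rpm -q --queryformat '%{VERSION}-%{RELEASE}\n' os-collect-config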
14:06:17 <sshnaidm> dprince, I'd like to add to the discussion: tempest tests for CI
14:06:24 <pabelanger> o/
14:06:29 <slagle> since we couldn't CI the fix either, i just approved the revert
14:06:42 <trown> makes sense
14:06:47 <dprince> any other bugs to mention this week?
14:06:50 <shardy> Was there a HA specific bug yesterday too?
14:07:07 <dprince> sshnaidm: ack, lets talk about it under the CI topic in a few minutes
14:07:08 <shardy> pradk has an issue with https://review.openstack.org/#/c/289435/ and I wondered if anyone already knew why
14:07:23 <slagle> what's the issue?
14:07:43 <derekh> slagle: we should be testing it (obviously) /me will take a look too
14:07:49 <pradk> the ci ha job seems to fail on mongo connection issues
14:07:54 <shardy> slagle: it's failing to connect to mongo for some reason
14:08:10 <pradk> which doesn't seem related to the aodh patch, so I wonder if it's something else that got in recently?
14:08:23 <EmilienM> maybe ipv6 stuff?
14:08:33 <slagle> other ha jobs are passing
14:08:35 <dprince> shardy: I saw the mongo error messages in my puppet logs recently too, I wasn't sure if there was a functional breakage there or what
14:08:39 <slagle> pradk: are any other ha jobs failing with that error?
14:08:57 <dprince> I saw mongo errors w/ non-ha jobs I think
14:09:16 <pradk> it passes locally, at least without net iso, when i tested
14:09:27 <dprince> actually, I think I saw them locally w/ non-ha testing
14:09:36 <EmilienM> "Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1)" is not an actual error
14:09:39 <shardy> It's not actually that clear if it's the connection to mongo or that it couldn't restart heat-engine
14:09:48 <EmilienM> I mean, it's not what makes the catalog failing
14:09:49 <shardy> the mongo connection looks broken, but that's actually only a warning
14:09:54 <EmilienM> looking at http://logs.openstack.org/35/289435/5/check-tripleo/gate-tripleo-ci-f22-ha/372d303/console.html#_2016-03-22_04_10_23_649
14:10:06 <EmilienM> the real error is:
14:10:08 <EmilienM> Error: /Stage[main]/Heat::Engine/Service[heat-engine]: Failed to call refresh: Could not restart Service[heat-engine]: Execution of '/usr/bin/systemctl restart openstack-heat-engine' r
14:10:43 <pradk> EmilienM, yea i think you're right
14:10:44 <EmilienM> I agree something is ERRORing with mongo but it's not blocking the deployment, at this step
14:10:47 <dprince> shardy: yep, that is what I thought as well. The mongo message had 'error' in it but it didn't cause an actual failure
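To make the triage concrete: the mongo line is only a warning, so the useful follow-up on the failing controller is why heat-engine would not restart. A small sketch, assuming systemd-managed services and the usual log locations in these jobs:

    # the real failure is the heat-engine restart, not the mongo warning
    sudo systemctl status openstack-heat-engine -l
    sudo journalctl -u openstack-heat-engine --no-pager | tail -n 50
    # heat's own log often has the underlying traceback
    sudo tail -n 100 /var/log/heat/heat-engine.log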
14:10:50 <shardy> Ok, we can pick this up in #tripleo, just wanted to avoid duplicate effort if someone already knew what the problem is
14:11:08 <EmilienM> I can try to debug why mongo does an ERROR sometimes, i'll file a bug.
14:11:09 <dprince> still, it would be good to file a Mongo bug with regards to this and get it fixed...
14:11:14 <dprince> EmilienM: ++
14:11:18 <pradk> EmilienM, thx!
14:11:30 <dprince> okay, any other bugs before we move on?
14:11:36 <EmilienM> #action EmilienM to file a bug about the random MongoDB error http://goo.gl/Z6IZM4
14:12:28 <dprince> #topic Projects releases or stable backports
14:12:55 <EmilienM> just an FYI: puppet modules are branching stable/mitaka this week
14:13:01 <dprince> shardy, slagle: want to give an update of when we are branching Mitaka, etc.?
14:13:11 <shardy> So I was out most of last week - slagle you were following the blocker list for mitaka right?
14:13:59 <shardy> https://etherpad.openstack.org/p/tripleo-mitaka-rc-blockers
14:13:59 <slagle> yea, more or less, i think the only thing left for mitaka is aodh
14:14:05 <pradk> just a quick note that we want to get the above patch (aodh) into mitaka .. so it's pretty much ready pending the ha ci issue that i need some help debugging
14:14:06 <derekh> This was the status logs at the last meeting
14:14:07 <derekh> * we're free to branch mitaka once the aodh and deprecated-message patches have been merged; then patches merged into mitaka should be fixes only (derekh, 14:26:29)
14:14:28 <dprince> derekh: thanks
14:14:32 <slagle> so we can branch anytime once that's merged, or we can branch before it's merged, and allow it to be backported
14:14:39 <shardy> Ok, so we should be good to go pretty soon after the puppet branch happens
14:14:43 <shardy> excellent news!
14:14:51 <dprince> yep, good news
14:14:56 <trown> ya nice one
14:15:07 <shardy> there's still a lot not struck-through in the etherpad, but I'm assuming we've invalidated that now based on the comments above?
14:15:17 <slagle> i had not specifically planned to do the branching :)
14:15:26 <slagle> but i could
14:15:52 <dprince> Yeah, let's decide who is going to take this
14:15:56 <shardy> slagle: I'm happy to do it, but you're also free to pick it up
14:16:08 <shardy> last time it was pretty easy once a list of SHAs to branch from were decided
14:16:26 <slagle> i'll double check the etherpad
14:16:29 <slagle> and will update it
14:16:59 <trown> I think we should make sure what we branch works with the rest of mitaka; we are testing that in the RDO promote job for mitaka delorean
14:17:01 <shardy> derekh: what's the status of the stable job - could we use that to get our list of SHAs to branch from?
14:17:14 <shardy> trown: ack that works too
14:17:28 <shardy> IIRC last time we worked backwards from RDO known-good commits
14:17:30 <dprince> shardy: stable periodic job is passing I think http://tripleo.org/cistatus-periodic.html
14:17:34 <shardy> which is kind of weird, but fine :)
14:17:39 <trown> we have a versions.csv file in the delorean dir now too, so getting the hashes is super easy
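As a rough sketch of how those hashes could feed the branching step (the repo URL and versions.csv column layout here are assumptions to be checked against the actual delorean output, and <pinned-sha> is a placeholder):

    # sketch only: find the known-good commit for a project in the delorean
    # versions.csv, then branch from it in a clone of that project
    curl -sO https://trunk.rdoproject.org/centos7-mitaka/current/versions.csv
    grep tripleo-heat-templates versions.csv   # note the source SHA column
    cd tripleo-heat-templates
    git branch stable/mitaka <pinned-sha>      # actual branch creation goes through the release tooling/ACLs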
14:18:12 <derekh> shardy: I'm not sure what you mean, we have no mitaka job
14:18:54 <shardy> derekh: yeah, but are we running a periodic job on master with additional tests now?
14:19:28 <trown> shardy: but master is converging to newton, so we don't really want to use that for deciding if we work with mitaka
14:19:38 <derekh> shardy: yup, we're running a periodic job on master, no extra tests
14:20:22 <derekh> have all the other projects branched at this stage, and are all their mitaka packages in the stable repo?
14:20:32 <dprince> okay, perhaps we can follow up on the SHAs in #tripleo or on the list
14:20:34 <slagle> everything from the etherpad is merged except aodh and "Upgrades CI job (Liberty -> Mitaka)"
14:20:46 <slagle> are we still going to block on that upgrades job?
14:20:47 <shardy> trown: sure, but we can't CI test against the stable/mitaka repos until we have a set of branched repos
14:20:59 <trown> ya sounds like we will wait for aodh anyways, so might as well assess the situation then
14:21:17 <derekh> just pick now() and then we fix,
14:21:20 <dprince> slagle: lets talk about the upgrades CI job under the next topic
14:21:38 <shardy> derekh: hehe :)
14:21:47 <dprince> slagle: I'm thinking we don't block on that, although it is very bad of us to work on upgrades without an upstream CI job to tell us if it actually works or not
14:22:17 <dprince> anything that breaks upgrades would be an implied backportable fix anyways, so we don't need to block on it
14:22:54 <slagle> ok, wfm
14:22:55 <dprince> #topic CI
14:23:32 <dprince> so re. the upgrades job I spoke w/ Wes Hayutin (from RH) about possibly running his downstream upgrades job as a 3rd party test on select patches
14:23:42 <dprince> or perhaps periodically too on upstream
14:23:50 <weshay> aye
14:23:56 <dprince> weshay: hi
14:24:12 <derekh> sounds good to me
14:24:24 <slagle> weshay: do you have liberty to mitaka jobs?
14:24:31 <weshay> apetrich, matbu will enable that soon, is there a time frame we're up against? Mitaka release?
14:24:34 <derekh> are we talking about not doing it in our upstream CI? gfidente was working on it
14:24:34 <dprince> weshay: how close is it, or how much work would it be, to run your upgrades testing setup on select patches, or periodically?
14:24:50 <dprince> derekh: I would like it in our CI too
14:24:57 <marios> slagle: dprince matbu is also setting up a upgrades job at https://rhos-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/RDO/view/ospd-upgrades/ (not sure if same as weshay is referring to)
14:25:04 <marios> (to liberty tho)
14:25:11 <dprince> derekh: but it has already taken so long that I'd like to have it covered from multiple angles I think
14:25:13 <slagle> right, so that's kilo -> liberty
14:25:18 <derekh> dprince: its passing https://review.openstack.org/#/c/260466/
14:25:21 <slagle> and upstream we can only do liberty -> mitaka
14:25:31 <slagle> not sure we are talking about the same thing
14:25:41 <weshay> matbu has not started yet, but we didn't think changing the repos used would take too long.
14:25:47 <derekh> dprince: although I think it's currently only updates, not yet upgrades - is that right gfidente?
14:25:49 <pabelanger> Nice to see some downstream CI running as 3rd party CI, exciting time
14:26:03 <gfidente> derekh ack it's only updating atm
14:26:17 <dprince> derekh: still good progress
14:26:18 <weshay> we can plug in both update and upgrade we think
14:26:21 <dprince> gfidente: thanks for this
14:26:23 <slagle> gfidente: does that update to the patch being tested? e.g., deploy w/o the patch, update to the patch
14:26:30 <derekh> Yup, a big step forward
14:26:52 <gfidente> slagle nope it doesn't
14:27:08 <slagle> ok, is that the next step?
14:27:34 <gfidente> yes, and then I think we need to figure out how to go from the stable to the master branch
14:27:36 <dprince> gfidente, derekh: don't we have an etherpad that outlines these steps (for the upgrades CI job upstream)?
14:28:02 <dprince> if not, could we create that so we can reference it next week to follow up on these things...
14:28:19 <derekh> dprince: none that I know of
14:28:27 <dprince> an upstream etherpad for the upgrades CI job progress?
14:29:26 <dprince> derekh: if there is no etherpad how about we organize here: https://etherpad.openstack.org/p/tripleo-ci-upgrades
14:30:11 <dprince> lets follow up on this in #tripleo once we've organized
14:30:12 <derekh> dprince: sounds good to me, it has mainly been gfidente working on it though
14:30:18 <dprince> gfidente: sound okay?
14:30:34 <gfidente> yes I'll edit the pad
14:30:35 <derekh> Our current CI under infra vs. 3rd party CI came up on the list - could we talk about that?
14:30:40 <dprince> sshnaidm: re, tempest what are your thoughts?
14:30:48 <sshnaidm> I've created a patch that allows running tempest with tripleo.sh: https://review.openstack.org/#/c/295844/ I'd like to know your opinions about adding the most relevant tests to the list and running tempest tests in the gate
14:30:55 <dprince> derekh: yes, that made me sad so I was avoiding it :)
14:31:20 <dprince> sshnaidm: all depends on how long it adds to the wall time
14:31:31 <dprince> sshnaidm: if it is reasonably fast I think it could be fine...
14:31:42 <derekh> dprince: ya, I'm sorry to see it happen too but I'm leaning in the direction of 3rd party CI, I think it leaves us a little more flexible, which in turn will make it easier to deal with our upcoming datacenter move
14:31:44 <sshnaidm> dprince, I'd suggest adding them one by one - the most relevant ones that don't take long
14:32:03 <derekh> dprince: was kind of thinking about it already before it came up
14:32:11 <dprince> derekh: I think 3rd party CI is really nice for focussed testing
14:32:25 <pabelanger> dprince: derekh: I'm happy to help fight to keep tripleO CI upstream, but I think it might be a hard fight
14:32:39 <dprince> derekh: in our case much of the pain is because we are not in the gate, and by stepping back to 3rd party CI we'll never have the opportunity to fix things in the gate
14:32:44 <sshnaidm> I think it could be adjusted to run tests no longer than the current pingtest
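To illustrate the kind of curated run being discussed (the test name, paths, and invocation below are examples only, not what the tripleo.sh patch actually wires up), a small whitelist kept close to pingtest wall time might look like:

    # illustrative only: run a short whitelist of tempest tests against the overcloud,
    # assuming a tempest checkout already configured to point at it
    source ~/overcloudrc
    cd ~/tempest
    testr run --parallel tempest.scenario.test_network_basic_ops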
14:33:40 <derekh> dprince: yup I couldn't agree more, the majority of our breakages would have been prevented if we were in the gate
14:33:41 <dprince> pabelanger: ack, thanks.
14:34:26 <pabelanger> I am not sure what you mean about not fixing things in the gate as 3rd party?  Is there a specific project you think you won't be able to -1 / -2 on?
14:34:27 <dprince> derekh: the vision, which hasn't been realized, was that we would have a tiered gating setup. Our pipeline could perhaps come after the devstack-gate pipeline... and if we found an issue it could trigger a fast-track revert or something
14:34:33 <derekh> dprince: pabelanger maybe we try and figure out if we'll have the HW required to even contemplate being in the gate any time soon
14:34:50 <dprince> pabelanger: 3rd party can't -2 I think
14:34:52 <trown> dprince: derekh, I think there are other barriers to being in the gate that we are not close to solving though
14:34:57 <shardy> One thing I've been trying is getting a TripleO heat stack deployed without ironic - that might open up testing a subset of our stuff (at least t-h-t and puppet pieces) on more generic CI resources
14:34:58 <dprince> pabelanger: and it typically only runs as check jobs too
14:35:00 <weshay> is it reasonable to focus the upstream tripleo-ci on being fast/reliable w/ a shorter scope of testing, and leave larger and longer tests to 3rd party?
14:35:29 <dprince> weshay: yes, that is reasonable
14:35:44 <pabelanger> dprince: I'd have to check, but that could change.  It is just an ACL. I think it is more a matter of having the discussion with the project to allow your 3rd party to -2
14:35:45 <bnemec> Yeah, but HA is our major stability/time sink and that's something we need to be testing.
14:36:28 <dprince> bnemec: most of our outages aren't cloud outages I think
14:36:51 <dprince> bnemec: and I don't think HA is that big of a deal for our CI cloud
14:36:55 <bnemec> dprince: Our HA job has stability issues all by itself.
14:37:05 <derekh> dprince: he isn't talking about our cloud
14:37:05 <bnemec> Stuff randomly fails that has nothing to do with breakage in any other project.
14:37:07 <pabelanger> I could see your 3rd party doing a gate queue on puppet-tripleo. Just a matter of making sure everybody is onboard with it.  The biggest issue I see is that if your 3rd party code is not public, it makes it hard for other projects to fix it.
14:37:25 <EmilienM> I've noticed lot of failures are happening randomly, like timeouts etc
14:37:30 <bnemec> There's no reason for our third-party code not to be public.
14:37:31 <pabelanger> So, you could have your 3rd party CI stuff still on review.o.o and use the depends-on flag too
14:37:52 <derekh> everything would be public except the HW
14:38:02 <EmilienM> ++
14:38:06 <shardy> EmilienM: most of those are because we were trying to run without enough ram on the nodes
14:38:13 <EmilienM> shardy: that's true
14:38:40 <bnemec> I still see bogus failures in the HA job after adding ram and swap.
14:38:49 <bnemec> Quite a lot of them, in fact.
14:38:53 <weshay> w/ a smaller set of jobs, you could provide HA w/ more memory
14:38:58 <derekh> not all our problems are down to ram
14:39:00 <trown> if 3rd-party can -2 in the gate (with permission from the project) I don't see what we lose with 3rd party
14:39:13 <dprince> if we go this route of making all our TripleO CI tests 3rd party gating, I'm inclined to bail on contributing to it entirely and focus on how to get our actual TripleO CI running on normal public cloud resources again. That can be gated on, etc.
14:39:17 <pabelanger> trown: Agreed
14:39:34 <dprince> split stack, etc. would move towards this I think
14:40:03 <shardy> dprince: as I said above, I think it's just a case of making the stack deploy on normal VMs
14:40:23 <shardy> that doesn't really need split stack, although I suppose that could enable more specific subsets of tests
14:40:24 <dprince> shardy: yep, that is becoming perhaps the most important thing for us I think
14:40:33 <derekh> dprince: so we'd just unplug the rack and use the nodepool groups?
14:40:42 <dprince> derekh: yep
14:40:43 <shardy> dprince: There are two things I ran into - we hard-code assumptions for baremetal in our image building
14:40:59 <shardy> and we need to work around neutron networking, to enable proper connectivity back to heat
14:41:07 <shardy> both of those are totally solveable
14:41:16 <pabelanger> I should also note, there are rumblings of baremetal jobs in openstack-infra too. With us moving forward with infracloud and zuulv3 + ansible, I think baremetal in the gate is closer than ever
14:41:22 <dprince> derekh: we may well have more advanced tests that require OVB. But I'd aim at eliminating as much of those as possible
14:41:25 <pabelanger> outside what tripleO-CI does today
14:42:08 <derekh> dprince: I think we'd need to go hybrid, at least one job I think needs to be closer to real life w/ ironic
14:42:32 <shardy> Yeah, we'd still need at least one job w/ironic, but we probably don't e.g need every t-h-t job to use it
14:42:59 <dprince> derekh: sure. That is fine I think
14:43:38 <dprince> okay, I'd like to table this for now I think to give some time for other topics too
14:43:58 <dprince> derekh: cool? you got some more ideas I think :)
14:44:09 <trown> ya seems like CI will be a hot topic in Austin... wish I could be there
14:44:27 <dprince> trown: hitch hike?
14:44:42 <trown> dprince: lol, my issues are entirely of the incoming baby variety
14:44:50 <dprince> #topic Specs
14:44:51 <derekh> dprince: yup, lets move on
14:44:51 <trown> hitch-hiking would not help :)
14:45:16 <jdob> trown: think of it as contributing to the team by breeding new developers
14:45:18 <dprince> I'd like to skip specs this week. One quick note is after seeing trown's quickstart in action I'm +2 on his spec
14:45:31 <trown> \o/
14:45:37 <weshay> nice!
14:45:46 <dprince> any other specs before we move to open discussion stuff?
14:46:06 <EmilienM> the one from michchap is also interesting
14:46:29 <EmilienM> #link refactor puppet spec https://review.openstack.org/#/c/286439/
14:46:31 <michchap_> I'll abandon the spec since there's already a blueprint
14:46:42 <EmilienM> ok
14:47:00 <EmilienM> michchap_: I'm working on a puppet-tripleo patch, rebased on your patch, to add Glance roles
14:47:09 <dprince> EmilienM: yeah, I was just going w/ a specless blueprint for that
14:47:14 <EmilienM> ack
14:47:16 <dprince> michchap_: happy to land the spec too if it helps
14:47:27 <michchap_> EmilienM: cool, I'm going to set up a local ha test to try to nail down why keystone ha isn't working with the profile
14:47:31 <dprince> #topic open discussion
14:47:38 <michchap_> dprince: I don't think it's necessary
14:47:49 <dprince> michchap_: actually I wanted to mention our progress on composable services this week
14:47:53 <EmilienM> michchap_: dprince is also going to update
14:48:04 <shardy> Related to specs/blueprints - I was going to suggest folks start raising blueprints of what they expect to work on in newton
14:48:06 <michchap_> dprince: yep
14:48:09 <dprince> michchap_: your work on the puppet-keystone patch has been helpful in refining the interface I think
14:48:18 <dprince> #link https://etherpad.openstack.org/p/tripleo-composable-services
14:48:28 <EmilienM> michchap_: yeah, that's excellent work
14:48:28 <shardy> we don't necessarily need a spec for everything, but having just a blueprint so we can derive a roadmap would be useful
14:48:51 <michchap_> dprince: I was originally hoping to make my patches totally disconnected from your role patches
14:49:02 <shardy> I'd like to see our process be pretty lightweight, but have an easy way to track what's in-progress for the cycle
14:49:19 <EmilienM> shardy: ++
14:49:23 <michchap_> dprince: I'm concerned we're going to drag this on for a very long time to write all the profiles.
14:49:29 <jdob> did tripleo officially adopt the spec-lite approach?
14:49:31 <trown> shardy: +1
14:49:38 <derekh> we also spoke about the lightweight spec and agreed to use it a while back
14:50:05 <derekh> jdob: I thought we agreed in a meeting to use spec-lite a while back
14:50:05 <dprince> michchap_: if we put the steps into puppet-tripleo first I think it would be fine that way
14:50:12 <shardy> derekh: Yeah, the spec-lite bug approach used by glance
14:50:18 <dprince> michchap_: that way ordering isn't as much of an issue
14:50:23 <michchap_> dprince: ok, that means I'll use the old numbering though, so your patch won't work.
14:50:23 <jdob> i remember it coming up, I just didn't remember that we said we'd use it
14:50:30 <jdob> but cool, I'm glad we are, I like it
14:50:44 <dprince> michchap_: however, one thing I would like to do first is fix the steps so they match what our heat deployments say
14:50:57 <shardy> that's fine too, but I'd say lets always raise a blueprint, which can then link to either a spec (big feature), spec-lite bug (smaller feature), or nothing (simple/obvious feature or refactoring)
14:51:04 <shardy> does that sounds sane/reasonable?
14:51:08 <EmilienM> dprince: we need to iterate service by service, I suggest we start patching puppet-tripleo first
14:51:20 <dprince> michchap_: right now we have a mismatched step sequence because we skip the ringbuilder.pp manifests. So lets do that first, then proceed with your puppet-tripleo patches I think
14:51:22 <EmilienM> with a THT patch + Depends-On that cleanup the code
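For clarity, the linkage EmilienM describes is the standard Gerrit/Zuul Depends-On footer in the tripleo-heat-templates cleanup commit, pointing at the puppet-tripleo change so CI exercises both together. A minimal sketch (the commit subject and Change-Id below are placeholders):

    # illustrative only: reference the puppet-tripleo change from the t-h-t cleanup commit
    git commit -m "Clean up controller manifests after profile move

    Depends-On: I0123456789abcdef0123456789abcdef01234567"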
14:51:43 <derekh> jdob: http://lists.openstack.org/pipermail/openstack-dev/2016-January/085126.html
14:51:51 <jdob> shardy: is a blueprint+spec-lite the glance approach? I haven't been making blueprints in heat, just the spec-lites
14:51:52 <dprince> EmilienM: yes, but with one exception which I listed above. The steps are wrong in t-h-t now. Let's fix that first
14:52:02 <EmilienM> dprince: ok, excellent
14:52:09 <jdob> derekh: awesome. has anyone actually done it yet?
14:52:15 <michchap_> dprince: can you submit that as a separate patch?
14:52:16 <EmilienM> dprince: which ling?
14:52:19 <EmilienM> link*
14:52:21 <michchap_> dprince: then I'll rebase on it
14:52:29 <dprince> michchap_: It is this patch for now https://review.openstack.org/#/c/236243/
14:52:30 <EmilienM> yeah that sounds like a good plan ^
14:52:32 <michchap_> dprince: that way your roles and my profiles are both working from the same base
14:52:34 <shardy> jdob: Ah, yeah they're just wishlist bugs tagged spec-lite
14:52:41 <dprince> michchap_: which will become much simpler when I refactor the interface
14:52:43 <shardy> that's fine I guess provided we're consistent
14:52:52 <shardy> http://docs.openstack.org/developer/glance/contributing/blueprints.html
14:53:06 <jdob> shardy: i'm not saying TripleO has to follow it to the letter; if you want to advocate for always having blueprints I'm not going to object by saying other teams don't do that
14:53:15 <jdob> it was more of me wondering "Shit, have I been doing it wrong?"  :)
14:53:31 <EmilienM> dprince: why is https://review.openstack.org/#/c/236243/ not passing CI?
14:53:35 <shardy> jdob: I'll look at what other projects do and propose a docs patch outlining that we do the same
14:53:41 <jdob> i suggest adding a section to the weekly agenda to review them too
14:53:49 <shardy> we can debate any details on the review - I just wanted to ensure we're clear on a plan :)
14:53:53 <jdob> so far on heat, it doesn't seem to happen regularly and i normally just tack mine on in the agenda items
14:53:55 <dprince> EmilienM: it did pass the non-ha job. Not sure about the HA
14:54:06 <michchap_> dprince: I meant if we just patch overcloud_controller and pacemaker to include the ringbuilder change you mentioned it might make it easier for both of us to get our changes in, since they won't depend on each other at all.
14:54:09 <jdob> but they should be small enough that a quick pass through in the weekly meeting isn't painful
14:54:22 <dprince> michchap_: that is what I will do as the rebase :)
14:54:30 <EmilienM> dprince: I don't get why we need your patch right now
14:54:41 <michchap_> EmilienM: it's the next step of the process, right?
14:54:43 <EmilienM> dprince: michchap_ and I were just moving out the code
14:54:46 <derekh> shardy: there was no objections here http://lists.openstack.org/pipermail/openstack-dev/2016-January/085126.html
14:54:47 <dprince> EmilienM: it is going to change
14:55:10 <michchap_> EmilienM: like what I'm saying is we just move the code, and at the same time dprince is doing the roles on the heat side.
14:55:12 <dprince> EmilienM: patch will become including ringbuilder in the manifests and incrementing the step logic accordingly
14:55:22 <EmilienM> ok
14:55:37 <shardy> derekh: ack, I was just mis-remembering the spec-lite part I think, thanks
14:55:39 <michchap_> EmilienM: then at the end we have something much better. but we have a bit of friction right now because the steps are wrong.
14:55:58 <dprince> michchap_: exactly
14:56:00 <EmilienM> dprince: so we first make 236243 pass CI, then rebase michchap_'s patch on top of it, and then continue with the other profiles
14:56:26 <dprince> yes, and then the puppet-tripleo work can continue as a separate entity
14:56:34 <EmilienM> ok
14:56:39 <michchap_> dprince: actually yeah, does that mean I don't need the t-h-t patches at all?
14:56:47 <dprince> and I can work on the Heat roles after each puppet-tripleo profile lands, etc.
14:56:58 <michchap_> dprince: although they are my only way to test at the moment...
14:57:11 <EmilienM> dprince: sounds like a good plan
14:57:15 <dprince> michchap_: two ways to go. We could cut over with my roles.
14:57:36 <dprince> michchap_: or you could have an interim patch which inlines your profiles into the legacy role manifests
14:57:53 <michchap_> dprince: I'll -2 my t-h-t patches, but keep them so I can easily test.
14:57:59 <dprince> michchap_: that would give you CI coverage without the heat service templates from me
14:58:02 <michchap_> dprince: and abandon them once I get the profiles working
14:58:19 <dprince> michchap_: sure, either way. It wouldn't hurt to land those patches I think
14:58:33 <michchap_> dprince: ok. I'll leave them up.
14:58:33 <EmilienM> 1 min left guys
14:58:55 * EmilienM moving to puppet meeting
14:58:57 <dprince> EmilienM: 2 minutes. your french time must be fast :)
14:59:10 <michchap_> btw there's a couple of neutron drivers trying to get included, so I want to move on to neutron profiles quickly before it gets even more messy.
14:59:11 <EmilienM> dprince: canadian you mean? :)
14:59:27 <dprince> michchap_: sounds good
14:59:47 <dprince> thanks everyone
15:00:03 <derekh> ttyl
15:00:07 <dprince> #endmeeting