14:00:25 <EmilienM> #startmeeting tripleo
14:00:25 <openstack> Meeting started Tue Apr  4 14:00:25 2017 UTC and is due to finish in 60 minutes.  The chair is EmilienM. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:29 <openstack> The meeting name has been set to 'tripleo'
14:00:30 <EmilienM> #topic agenda
14:00:38 <beagles> o/
14:00:44 <ccamacho> hey folks! (~˘▾˘)~
14:00:45 <EmilienM> * review past action items
14:00:46 <EmilienM> * one off agenda items
14:00:48 <EmilienM> * bugs
14:00:51 <EmilienM> * Projects releases or stable backports
14:00:52 <EmilienM> * CI
14:00:54 <EmilienM> * Specs
14:00:56 <EmilienM> * open discussion
14:00:58 <EmilienM> Anyone can use the #link, #action and #info commands, not just the moderator!
14:01:00 <EmilienM> Hi everyone! who is around today?
14:01:05 <sshnaidm> o/
14:01:07 <ianychoi> o/ (observer from I18n team)
14:01:08 <mwhahaha> o/
14:01:08 <cdearborn> \o
14:01:13 <marios> o/
14:01:19 <d0ugal> \o
14:01:20 <tzumainn> \o
14:01:28 <slagle> hi
14:01:31 <trown> o/
14:02:04 <jpich> o/
14:02:24 <EmilienM> ok let's start
14:02:27 <EmilienM> #topic review past action items
14:02:32 <EmilienM> * EmilienM to postpone pike-1 Triaged bugs to pike-2 milestone: not done yet, will do this week
14:02:36 <EmilienM> #action EmilienM to postpone pike-1 Triaged bugs to pike-2 milestone this week
14:02:55 <EmilienM> pike-1 is next week so i'll move the pike-1 bugs this week
14:03:02 <EmilienM> at least the ones that are not in Progress
14:03:12 <EmilienM> * shardy to run CI patch that remove t-i-e and incubator projects: still WIP
14:03:16 <shardy> o/
14:03:43 <EmilienM> * EmilienM to retire os-cloud-config: almost done. git repo is empty now. Need reviews on RDO packaging and we're done
14:03:56 <EmilienM> #link https://review.rdoproject.org/r/#/q/topic:os-cloud-config/retire
14:03:57 <shardy> I did that ref https://review.openstack.org/#/c/450809/
14:04:05 <shardy> it turns out we are still using some elements
14:04:17 <EmilienM> oops :)
14:04:22 <shardy> so I'll iterate on that until we can remove those unused, then we can see if/where we can move things to retire the repo
14:04:30 <jrist> /o
14:04:30 <EmilienM> shardy: nice, thanks!
14:04:33 <EmilienM> * team to review chem's patches from https://etherpad.openstack.org/p/tripleo-meeting-items about upgrades: still wip?
14:04:50 <EmilienM> chem is not here but if upgrade folks need more reviews, let us know
14:04:59 <EmilienM> * EmilienM to write an email with all issues we had in Ci recently: not done yet
14:05:07 <EmilienM> * container squad to investigate downloaded vs apparent image sizes
14:05:16 <EmilienM> * container squad to continue discussion with -infra re TripleO and kolla requirements for local/cached registry
14:05:37 <EmilienM> I think dprince has a topic for that a bit later
14:05:57 <EmilienM> chem: did you get all the reviews you needed on upgrade patches ? (still catching up on past action items)
14:06:24 <chem> yeap, thanks to everyone :)
14:06:36 <EmilienM> #topic one off agenda items
14:06:40 <EmilienM> #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:06:51 <EmilienM> dprince: go ahead
14:06:51 <marios> chem: EmilienM: we still have some pending things but can bring them later on bugs
14:07:07 <EmilienM> marios: ack
14:07:19 <dprince> EmilienM: ack.
14:07:35 <dprince> EmilienM: I would like to propose that we disable the nonha job
14:07:49 <dprince> EmilienM: in favor of adding back the containers job.
14:08:00 <dprince> noting that the containers job already did introspection
14:08:31 <dprince> We can make other aspects of the containers job match the previous nonha job I think fairly quickly
14:08:42 <shardy> dprince: considering we need to save time, would moving introspection to a different job be an option?
14:08:58 <shardy> I guess the HA job is already pretty close to the timeout...
14:09:05 <dprince> shardy: I'm not sure either the HA or updates jobs have extra time either
14:09:23 <EmilienM> shardy: same for ovb-updates
14:09:25 * dprince is starting to think introspection doesn't belong in our normal OVB queue jobs
14:09:26 <slagle> yea, that was my concern with adding more $stuff to the ha job
14:09:33 <shardy> yeah, hmm
14:09:37 <slagle> dprince: +1
14:09:45 <EmilienM> periodic?
14:09:46 <slagle> i'd rather test introspection in a periodic job altogether
14:09:47 <dprince> like what if introspection became a periodic job
14:09:51 <dprince> yeah
14:10:01 <shardy> Ok maybe we do introspection plus container deploy, and leave container upgrades to a multinode job as jistr is working on
14:10:08 <EmilienM> if we agree to bring attention to the periodic jobs, then ok
14:10:10 * dtantsur will not repair introspection again, if it again gets broken a week after it's out of CI...
14:10:22 <EmilienM> dtantsur: good point
14:10:37 <slagle> does tripleo frequently break introspection?
14:10:40 <EmilienM> shardy: yes, good option
14:10:51 <slagle> or is it the other way around?
14:10:54 <shardy> slagle: we removed it from CI once before and it immediately got broken, I don't recall why
14:11:09 <dprince> dtantsur: I'm happy to run introspection jobs in our CI. But I think perhaps only a subset of the patches are related to it. Perhaps not every single t-h-t patch for example
14:11:28 <dtantsur> dprince, yes, until we use THT to configure inspector
14:11:46 <dtantsur> currently we can only run it on tripleo-common, python-tripleoclient and instack-undercloud
14:12:12 <shardy> dtantsur: Ok, so we actually don't have to run it on t-h-t changes?
14:12:15 <dprince> dtantsur: I know, But even then we could provide "less expensive" coverage for the featureset
14:12:21 <shardy> that, I think, is the main bottleneck for containers
14:12:34 <dtantsur> shardy, well, right now - we don't
14:13:27 <dtantsur> to be fair, we should have at least one job that exercises the whole flow we recommend to customers
14:13:40 <dtantsur> putting aside the fact that I'm getting constant requests to enable cleaning by default ;)
14:14:28 <fultonj> o/
14:15:10 <dprince> dtantsur: we have to make some hard decisions I think here. I'm happy to fit introspection in wherever we can. But at this point we've got a large number of containers patches coming in.... and no overcloud CI jobs on them at all
14:15:27 * mwhahaha is in favor of having CI actually do what customers do
14:15:40 <dtantsur> I realize it. But now we have introspection by default, and we have to cover it.
14:15:50 <EmilienM> how long does it take?
14:16:07 <dtantsur> If we stop recommending it by default - we can reduce its coverage, or stop covering it at all.
14:16:15 <mwhahaha> why not just enable it for the nonha-updates job
14:16:22 <dprince> dtantsur: introspection is absolutely important to the overall workflow, but I think we can still guarantee it isn't broken and use much less resources
14:16:26 <EmilienM> mwhahaha: close to timeout already
14:16:29 <mwhahaha> or was the thought to also remove the nonha-updates job
14:16:37 <mwhahaha> what's the timeout on that one?
14:16:42 <mwhahaha> 90 mins?
14:16:52 <dtantsur> EmilienM, up to 5 minutes, usually. Not sure about TripleO CI though..
14:17:10 <EmilienM> mwhahaha: 180 afaik
14:17:15 <sshnaidm> 3-4 minutes
14:17:18 <dtantsur> and with this approach we'll never be able to enable cleaning, which is something that at least storage folks want to make mandatory..
14:17:24 <mwhahaha> EmilienM: last i saw the nonha-updates was taking 80 mins
14:17:26 <dprince> and fwiw introspection is an optional feature.
14:17:30 * dprince doesn't always use it
14:17:38 <EmilienM> dprince: you're not deploying in production
14:17:41 <dprince> we may recommend it... but it is sort of optional
14:17:41 <EmilienM> customers do
14:17:50 <dtantsur> EmilienM++
14:18:01 <dprince> EmilienM: I do deploy to baremetal, and arguably more "production" than most developers
14:18:04 <dtantsur> I also don't always run introspection, but it's run nearly always in production
14:18:28 <EmilienM> dprince: what mwhahaha said is right, we need to test real scenarios and introspection is one of them
14:18:29 <dtantsur> dprince, I'm not sure if you're the most valuable customer, to be honest :D
14:18:32 <dprince> would we always use containers for Pike. I think that is the goal
14:18:40 <EmilienM> and we're talking about 3-4 minutes here
14:18:43 <dprince> right now we have 0 overcloud CI on this....
14:18:47 <dprince> lets start there
14:19:08 <dprince> and make introspection a periodic job until we get things tuned to accommodate it
14:19:13 * dtantsur does not disagree with having a container job
14:19:16 <dprince> *this* is the hard decision
14:19:18 <EmilienM> can we just try to move introspection to ovb-updates? and revert if we see it times out too much?
14:19:24 <dtantsur> dprince, and this is a wrong decision
14:19:36 <dtantsur> because then nobody ever will care to move it back
14:19:44 <slagle> we should just start testing containers with multinode, then this would be a moot point :)
14:19:53 <EmilienM> slagle: yes that
14:19:54 <dprince> EmilienM: I got comments from slagle and bnemec with concerns about it causing timeouts on the HA or updates jobs I think
14:20:03 <dprince> slagle: we should do that too
14:20:07 <EmilienM> dprince: I'm aware of these concerns, but we can try
14:20:15 <dprince> slagle: but I think then we aren't testing a full story there either
14:20:19 <dprince> slagle: we need to do both
14:20:25 <dtantsur> what about, dunno, figuring out why running puppet a few times takes so much time?
14:20:38 <shardy> slagle: jistr is already working on that via oooq-extras
14:20:51 <slagle> well, i'm not really in favor of doing anything that increases the runtime of any CI jobs
14:20:51 <shardy> so hopefully soon we'll be able to do both
14:20:54 <jistr> though that's non-container -> container upgrade
14:21:02 <jistr> which is a bit different than container deploy
14:21:07 <slagle> all we're doing is kicking the can down the road
14:21:08 <dprince> I would point out that my initial proposal here is:
14:21:09 <shardy> IMO we need the multinode approach so we can get the exact same scenarios we use for not-containers now
14:21:12 <dprince> 1) disable nonha
14:21:20 <dprince> 2) enable containers job, with introspection
14:21:26 <dprince> everyone seems to be ignoring that...
14:21:39 <shardy> dprince: I think that's reasonable FWIW
14:21:39 <dtantsur> dprince, won't it hit the same timeouts?
14:21:41 <EmilienM> jistr: before upgrades, why not working on classic deployments?
14:21:49 <EmilienM> jistr: I would iterate on upgrades later, imho
14:21:52 <dprince> dtantsur: we have other ideas that will help there soon enough
14:22:00 <slagle> dprince: i would be fine with that, if the containers job also covers everything else nonha was doing
14:22:01 <jistr> EmilienM: we already had classic deployments ;)
14:22:03 <EmilienM> dprince: yes, it sounds good to me
14:22:08 <EmilienM> jistr: where? the ovb job?
14:22:10 <jistr> yes
14:22:12 <dprince> slagle: yes, that is my goal
14:22:27 <EmilienM> jistr: we think it would be better to have multinode jobs for that
14:22:34 <EmilienM> jistr: to have more coverage, like we do with scenarios
14:22:41 <slagle> dprince: that sounds good to me then
14:22:42 <dprince> I'm asking everyone to agree that we go all in on containers CI
14:22:54 <dprince> because we don't have resources to add *any* more OVB jobs ATM
14:22:56 <EmilienM> jistr: scenario001, 002, ... 004 with container deployments. I thought it was clear it was a priority
14:23:05 <dtantsur> just to clarify: containers in undercloud, in overcloud or both?
14:23:12 <shardy> EmilienM: we need deployments and upgrades tested via the multinode jobs IMO
14:23:23 <dprince> dtantsur: containers in the overcloud only I think is what we are talking about
14:23:25 <EmilienM> shardy: yes but deployments first. Upgrade right after
14:23:30 <dtantsur> k
14:23:34 <EmilienM> shardy: and it seems jistr is doing it the other way around. Upgrade first...
14:23:52 <jistr> so weshay has this featureset010
14:23:54 <EmilienM> dprince: anything else about ovb-nonha we need to move but introspection?
14:23:54 <jistr> https://review.openstack.org/#/c/446151
14:24:03 <dprince> dtantsur: the containerized 'undercloud' is experimental. Only myself really spends time on this and I think it would be a Queens feature at this point
14:24:08 <jistr> i hope that could be used as a base for multinode container job
14:24:24 <dtantsur> I see
14:24:43 <dprince> EmilienM: I think we need to make containers use SSL too. But I don't think that would be a major blocker
14:24:46 <dtantsur> dprince, so essentially you're suggesting to change the non-ha job to use containers, right?
14:24:51 <EmilienM> shardy: could we prioritize deployments first and then upgrades for container-multinode work please?
14:24:51 <dprince> dtantsur: yes
14:24:58 <jistr> though i agree with dprince that anything else, regardless of how far along we may look to be, is currently in the air
14:25:02 <shardy> EmilienM: yeah, but we need three things (1) deploy containers (2) deploy containers and upgrade/update (3) deploy baremetal upgrade to containers
14:25:03 <jistr> and in the meantime
14:25:04 <dtantsur> ack, no objections here
14:25:11 <jistr> our major Pike feature is getting no coverage
14:25:12 <EmilienM> dprince: yes, SSL is a major blocker to me, again it's what our customers use
14:25:34 <dprince> EmilienM: I will fit ssl in somewhere. SSL can go anywhere I think...
14:25:35 <EmilienM> shardy: and you just gave them in the right order
14:26:45 <EmilienM> and ovb-nonha has UNDERCLOUD_VALIDATIONS
14:26:48 <EmilienM> but we can figure that out later
14:26:57 <EmilienM> I just don't want to lose coverage with this removal
14:27:03 <bnemec> Features in the nonha job: http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/README.rst#n102
14:27:06 <shardy> EmilienM: yeah, but honestly the oooq transition has stalled progress here a little - lets work with weshay and jistr so we can iterate quickly to all the coverage I mentioned
14:27:37 <EmilienM> shardy: ok good
14:27:59 <EmilienM> bnemec: ok thx
14:28:38 <shardy> Ok so we need to move ssl coverage to another job I guess
14:28:38 <weshay> roger
14:28:39 <EmilienM> #action dprince moves introspection from non-ha to ovb-containers and removes the nonha job starting from pike (and keeps it for stable/ocata and stable/newton)
14:28:45 <EmilienM> dprince: ok ? ^
14:28:59 <EmilienM> dprince: once it's done, let's enable the ovb-containers again (if it pass)
14:29:00 <dprince> EmilienM: ack
14:29:13 <dprince> and shardy and jistr are working on a containers multinode job too
14:29:20 <EmilienM> dprince: please let us know on the mailing list the progress
14:29:25 <slagle> just to be clear, we want to move all features from nonha to containers, not just introspection
14:29:29 <slagle> before we remove nonha
14:29:30 <dprince> and in the future we'll have to refactor one of these to support containers upgrades
14:29:56 <EmilienM> #action jistr / shardy / weshay / EmilienM to synchronize about the work prioritization on container / multinode CI work
14:30:08 <dprince> slagle: yes, as many as we can. Anything that is a blocker in the short term could go into the HA job too (SSL for example)
14:30:12 <sshnaidm> slagle, some of them can go to ha job
14:30:13 <jistr> dprince: container deployment vs. container upgrades can't go into single job for pike
14:30:29 <jistr> dprince: as we're not interested in container -> container upgrade right now
14:30:40 <EmilienM> jistr: right, thanks for this detail
14:30:40 <dprince> jistr: we'll argue about that in a future meeting I think
14:30:41 <jistr> it's non-container -> container we need to test
14:30:47 <EmilienM> ++ on that
14:30:49 <slagle> i guess, i'm not thrilled with that rate of change given all the birds up in the air, but ok :)
14:30:52 <jistr> ok :)
14:31:01 <dprince> jistr: this job could become containers upgrades fwiw if we need that
14:31:02 <shardy> jistr: yeah although we'll need to test container->container updates at some point
14:31:34 <jistr> yea but that becomes important for production only with Queens release
14:31:35 <dprince> these are hard decisions. We may have to give a little in terms of what we'd like to test
14:31:54 <EmilienM> let's iterate on what we said today
14:32:03 <EmilienM> and keep the discussion going over the next weeks
14:32:04 <shardy> jistr: well, we need the architecture around updates proven before we release pike
14:32:19 <shardy> jistr: unlike major upgrades, there may be reasons to do updates very soon after the pike release
14:32:54 <jistr> shardy: right for container->container minor updates yea, that's a different story, much higher prio, i'd say
14:33:18 <shardy> jistr: yeah, sorry s/upgrades/updates
14:33:50 <EmilienM> shardy, jistr: not now because we're running out of time but later, can we work together on a document (blueprint or etherpad) with our list of CI jobs related to containers that we target for each cycle?
14:34:11 <EmilienM> so it's clear to everyone the priorities and what people should be working on
14:34:13 <shardy> EmilienM: sure I'll start one
14:34:18 <EmilienM> shardy: thank you
14:34:24 <EmilienM> dprince: can we move on?
14:34:32 <EmilienM> any question or last feedback before we go ahead?
14:35:12 <EmilienM> #topic bugs
14:35:18 <EmilienM> #link https://launchpad.net/tripleo/+milestone/pike-1
14:35:38 <EmilienM> this afternoon, I'll start moving "Triaged" bugs from pike-1 to pike-2
14:35:56 <EmilienM> the priority is to work on "In Progress" bugs for pike-1 now unless there are critical bugs
14:36:21 <EmilienM> if I move a bug to pike-2 and you're not happy about it, please let me know. I use a script to do that, so I might miss something critical
14:36:46 <EmilienM> do we have any outstanding bugs to discuss this week?
14:36:53 <EmilienM> marios: anything about upgrades we need to discuss?
14:37:02 <marios> EmilienM: couple upgrades outstanding things yeah, sec
14:37:19 <marios> EmilienM: https://bugs.launchpad.net/tripleo/+bug/1678101 https://review.openstack.org/#/c/448602/ and also sofer (chem) with https://bugs.launchpad.net/tripleo/+bug/1679486 with https://review.openstack.org/#/c/452828/ . Note that gfidente has a related/alternate review at https://review.openstack.org/#/c/452789/ for both of those things
14:37:19 <openstack> Launchpad bug 1678101 in tripleo "batch_upgrade_tasks not executed before upgrade_tasks" [High,In progress] - Assigned to Marios Andreou (marios-b)
14:37:20 <openstack> Launchpad bug 1679486 in tripleo "N->O Upgrade, ochestration is broken." [Critical,In progress] - Assigned to Marios Andreou (marios-b)
14:37:47 <marios> EmilienM: so there has already been discussion in irc/on the reviews... i think the last one from gfidente is gaining traction to fix both bugs
14:38:07 <chem> marios: long story, we should meet with gfidente
14:38:11 <marios> EmilienM: we need this asap as both are key to the ansible upgrades workflow so our goal is end of week
14:38:11 <shardy> EmilienM: https://etherpad.openstack.org/p/tripleo-container-ci
14:38:21 <EmilienM> shardy: thx
14:38:28 <shardy> everyone feel free to hack on it, I made a first pass
14:38:31 <matbu> marios: I -1'd the one from Giulio
14:38:54 <chem> we should meet matbu gfidente marios :)
14:38:55 <marios> matbu: ack, lets continue in chan, just mentioning it as relevant/important bugs right now for the ansible upgrades
14:39:09 <matbu> yep
14:39:23 <EmilienM> ok, please let us know if you need any help
14:39:29 <EmilienM> (including reviews)
14:39:52 <shardy> yeah likewise, let me know if I can help beyond the discussions we've already had
14:39:56 <marios> EmilienM: thanks, i pointed at the key reviews above so if anyone has review cycles comments appreciated
14:40:04 <marios> thanks shardy
14:40:24 <EmilienM> marios: ok thanks
14:40:38 <EmilienM> is there any other bug that requires discussion this week?
14:41:15 <EmilienM> #topic projects releases or stable backports
14:41:22 <EmilienM> so next week is pike-1
14:41:49 <EmilienM> * I'll propose tripleo pike-1 release by Thursday morning
14:42:24 <EmilienM> * After pike-1, we should verify that upgrade jobs are working on master (it will test ocata to pike for the first time)
14:42:48 <EmilienM> we probably have a bunch of things to cleanup in the service templates, that are related to newton to ocata upgrades
14:42:59 <EmilienM> marios: ^ fyi
14:43:28 <EmilienM> * Mitaka is EOL next week
14:43:29 <marios> EmilienM: we already removed stuff from the tripleo-heat-templates
14:43:36 <marios> EmilienM: do you mean in the ci repo?
14:43:47 <EmilienM> marios: I'm not sure what you're talking about
14:43:50 <EmilienM> which repo?
14:43:59 <shardy> marios: No, the upgrade job has been broken and non-voting since we branched pike
14:44:04 <EmilienM> I'm talking about upgrade_tasks that were specific to newton to ocata upgrade
14:44:10 <shardy> marios: ref my ML discussion around release tagging etc
14:44:18 <marios> EmilienM: oh i see, i thought you were referring to removal of old upgrades scripts. didn't realise you were referring to newton to ocata
14:44:19 <shardy> we'll need to get it green and voting after pike-1
14:44:28 <EmilienM> yes that ^
14:44:38 <marios> EmilienM:  because we cleared up older upgrade scripts already
14:44:39 <EmilienM> re: Mitaka EOL - it's official on 2017-04-10
14:44:47 <marios> EmilienM: ok thanks
14:44:56 <EmilienM> I'll poll the team to ask whether or not we want to keep the branches & CI jobs for Mitaka
14:45:11 <EmilienM> marios: yes, I reviewed the patch. cool :)
14:45:45 <EmilienM> do we have any question or feedback about stable branches & release management?
14:46:20 <EmilienM> #topic CI
14:46:38 <EmilienM> I have some updates :
14:47:10 <EmilienM> * pabelanger has been working on getting RDO AFS mirrors, so we can download packages faster
14:47:23 <leanderthal> pabelanger++
14:47:39 <leanderthal> that's awesome
14:47:41 <EmilienM> we switched puppet openstack ci to use them, it works pretty well (it was a pilot)
14:47:52 <EmilienM> now we're working to switch quickstart jobs to use the new mirrors: https://review.openstack.org/#/c/451938/
14:48:10 <EmilienM> once pabelanger gives us the "go" for tripleo, we'll start using it
14:48:16 <weshay> nice
14:48:44 <EmilienM> we need to be careful in this transition, because the mirrors are using rsync directly from rdo servers, so this thing is kind of experimental right now
14:48:57 <EmilienM> but we expect to improve the runtime of CI jobs
14:49:12 <EmilienM> pabelanger: anything you want to add on this topic?
14:49:45 <EmilienM> * pabelanger (again) has been working on a Docker registry : https://review.openstack.org/#/c/401003/
14:49:57 <EmilienM> and it follows up our discussion from last week
14:50:19 <EmilienM> where TripleO CI could use this registry (spread over AFS, so same as packaging: we would improve the container jobs' runtime)
14:50:41 <EmilienM> mandre, dprince, jistr ^ fyi just to make sure you can review it and maybe investigate how tripleo ci will use it
14:51:01 <jistr> ack thanks
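How a TripleO CI job would actually consume that registry isn't settled in the discussion above; one plausible pattern (purely a sketch — the hostname and port are illustrative, not -infra's real mirror names) is to configure it as a pull-through mirror in /etc/docker/daemon.json:

```json
{
    "registry-mirrors": ["http://mirror.regionone.example.org:5000"]
}
```

With `registry-mirrors` set, a plain `docker pull` tries the regional mirror first and only falls back to the upstream registry on a miss, which is what would shave time off the container jobs.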
14:51:13 <EmilienM> adarazs: do we have any blocker on quickstart transition? Anything that needs discussion?
14:51:25 <EmilienM> panda, sshnaidm: any blocker on CI promotions this week?
14:51:26 <adarazs> EmilienM: I don't think so.
14:51:39 <sshnaidm> EmilienM, nope
14:52:01 <EmilienM> #topic specs
14:52:04 <EmilienM> #link https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:52:13 <EmilienM> I sent an email last week about TripleO specs
14:52:32 <EmilienM> let me find the link
14:53:04 <EmilienM> #link http://lists.openstack.org/pipermail/openstack-dev/2017-March/114675.html
14:53:52 <EmilienM> I think one of the conclusions is: please don't wait for your spec to merge before starting a PoC of the feature
14:54:06 <EmilienM> because i've been told some people were doing it
14:54:32 <EmilienM> do we have any discussion about specs this week?
14:54:46 <arxcruz> EmilienM: yes, I would like some core reviewer to review my spec
14:54:54 <arxcruz> https://blueprints.launchpad.net/tripleo/+spec/send-mail-tool
14:55:14 <EmilienM> arxcruz: I encourage any review, core or not core
14:55:15 <arxcruz> also there's already a POC in https://review.openstack.org/#/c/423340/
14:55:40 <arxcruz> EmilienM: well, actually, you already gave me +2 on the spec at https://review.openstack.org/#/c/420878/ need one more core to review and hopefully merge
14:56:03 <EmilienM> arxcruz: my point is that when asking for reviews, we need to encourage anyone to review, even people not core yet
14:56:16 <EmilienM> I don't want the group to encourage only cores to make reviews
14:56:17 <arxcruz> EmilienM: I just saw your comments in https://review.openstack.org/#/c/423340/ and I'll work on that.
14:56:20 <arxcruz> EmilienM: gotcha :)
14:56:24 <EmilienM> arxcruz: cool
14:56:46 <arxcruz> but since pike-1 is next week, if someone can review I appreciate
14:56:55 <EmilienM> arxcruz: have you talked with infra and QA folks about whether they already have a mechanism to do this thing?
14:57:06 <arxcruz> EmilienM: yes, they don't
14:57:07 <EmilienM> arxcruz: I would talk with mtreinish and pabelanger and present the work you have done
14:57:23 <arxcruz> EmilienM: the idea is on zuul v3 have something similar, but it's for the future
14:57:34 <EmilienM> ok
14:57:46 <mtreinish> a mechanism to do what?
14:57:47 <arxcruz> it's not in zuul v3 agenda yet
14:58:07 <EmilienM> #topic open discussion
14:58:08 <arxcruz> mtreinish: remember we were talking about getting the tempest results and sending an email to key people when tempest tests fail?
14:58:25 <panda> I have a last minute request: oooq deep dive: I volunteered for it, but since I'm on PTO for the next two weeks, it can either be done the day after tomorrow (too near?), Thursday 27th of April (too far?), or someone else could volunteer and do it in one of the next two weeks
14:58:28 <arxcruz> a few weeks ago, you pointed me to check openstack-health
14:58:37 <mtreinish> arxcruz: right and I said if you put subunit streams in the log dir you can leverage openstack-health's rss feeds
14:58:55 <mtreinish> which does exactly what you want, its just rss not email
14:59:03 <trown> panda: I dont think the end of April or early May is too far...
14:59:07 <EmilienM> there are some tools that can send emails from RSS
14:59:19 <EmilienM> panda: yeah I think we can wait a little
14:59:25 <panda> ok
14:59:28 <EmilienM> unless trown wants to do it before :P
14:59:31 <panda> I'll propose the 27th then
14:59:33 <panda> :0
14:59:35 <panda> :)
14:59:36 <EmilienM> panda: sounds good
14:59:41 <mtreinish> arxcruz: for example: http://health.openstack.org/runs/key/build_queue/gate/recent/rss (which is an rss feed of all failed runs in the gate queue)
14:59:46 <trown> I could... but panda will do a better job :)
15:00:36 <EmilienM> ok thanks everyone
15:00:38 <EmilienM> #endmeeting