14:00:09 <EmilienM> #startmeeting tripleo
14:00:10 <openstack> Meeting started Tue Mar 28 14:00:09 2017 UTC and is due to finish in 60 minutes.  The chair is EmilienM. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:12 <EmilienM> #topic agenda
14:00:13 <openstack> The meeting name has been set to 'tripleo'
14:00:19 <EmilienM> * one off agenda items
14:00:20 <EmilienM> * bugs
14:00:22 <EmilienM> * Projects releases or stable backports
14:00:24 <EmilienM> * CI
14:00:26 <EmilienM> * Specs
14:00:28 <EmilienM> * Week roadmap
14:00:30 <EmilienM> * open discussion
14:00:32 <EmilienM> Anyone can use the #link, #action and #info commands, not just the moderator!
14:00:34 <EmilienM> Hi everyone! who is around today?
14:00:37 <thrash> o/
14:00:38 <adarazs> yo/
14:00:40 <ccamacho> hi folks o/
14:00:42 <mwhahaha> hi2u
14:00:44 <marios> o/
14:00:45 <jpich> o/
14:00:45 <florianf> o/
14:00:46 <cdearborn> o/
14:00:50 <weshay> o/
14:00:50 <beagles> o/
14:01:14 <EmilienM> look at that crowd
14:01:15 <panda> \o/ (o) oC /o\
14:01:20 <tzumainn> \o
14:01:23 <gfidente> o/
14:01:26 <matbu> o/
14:01:37 <chem> o/
14:01:42 <owalsh> o/
14:01:56 <bogdando> o/
14:02:03 <sshnaidm> o/
14:02:16 <EmilienM> ok, let's start
14:02:18 <EmilienM> #topic review past action items
14:02:26 <EmilienM> EmilienM to look more closely the thread about periodic jobs: done, topic is still under discussion now.
14:02:55 <EmilienM> sshnaidm: like we said in our recent discussions, I guess we'll make progress over the next few days and we need to continue the investigation with pabelanger
14:03:05 <EmilienM> sshnaidm: feel free to bring it during the CI topic later
14:03:12 <EmilienM> EmilienM to update https://review.openstack.org/#/c/445617/ to keep dib-utils part of TripleO for now and keep moving DIB out of TripleO: done, with positive votes from TC + DIB + TripleO folks.
14:03:22 <shardy> o/
14:03:22 <EmilienM> marios to investigate why https://review.openstack.org/#/c/446506/ is failing: done, patch is merged
14:03:29 <EmilienM> team to review https://launchpad.net/tripleo/+milestone/pike-1 and prioritize in progress work: still in progress. A lot of bugs are still untriaged, help is welcome
14:03:31 <marios> EmilienM: ack
14:03:52 <EmilienM> on my list for today ^: continue the triage on pike-1 bugs and blueprints
14:03:58 <EmilienM> team to postpone Triaged bugs to pike-2 next week: will do it this week.
14:04:07 <EmilienM> #action EmilienM to postpone pike-1 Triaged bugs to pike-2 milestone
14:04:12 <trown> o/
14:04:39 <EmilienM> flaper87 to file a bug in tripleo assigned to opstools for fluentd to read logs from containers: done
14:04:44 <EmilienM> and bogdando to followup on https://review.openstack.org/#/c/442603/ and update if needed: patch still WIP (not passing CI and no review)
14:04:53 <ansiwen> o/
14:05:06 <EmilienM> anything before we move on?
14:05:21 <EmilienM> #topic one off agenda items
14:05:27 <EmilienM> #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:05:32 <EmilienM> shardy: o/
14:05:57 <shardy> EmilienM: Hey, so my question is around tripleo-image-elements, and tripleo-incubator
14:06:04 <shardy> there is a lot of stuff there we no longer use
14:06:23 <shardy> and t-i-e has caused some confusion recently as we're mostly only updating tripleo-puppet-elements
14:06:25 <EmilienM> shardy: I added os-cloud-config
14:06:31 <pradk> o/
14:06:32 <shardy> e.g. even for non-puppet things like the packages in the image
14:06:49 <shardy> EmilienM: yeah, it'd be good to figure out how to retire these old repos I think
14:06:57 <EmilienM> I remember slagle moved a bunch of things from t-i-e into instack-undercloud
14:07:04 <shardy> in the case of tripleo-incubator, I think there are still a couple of ci dependencies
14:07:17 <EmilienM> we should do it step by step. Probably tripleo-incubator is the safest one to start with
14:07:20 <shardy> but IMO we should just move things, and retire those repos?
14:07:24 <slagle> shardy: i removed all those deps, or at least have patches that do
14:07:43 <slagle> i dont recall if they all merged yet or not
14:07:58 <shardy> slagle: nice, Ok will look for them
14:08:09 <slagle> it looks like they did
14:08:27 <shardy> Ok so we can probably just propose the rm -fr patches and see what CI does ;)
14:08:28 <slagle> so, we could test a ci patch that deletes the git checkout from /opt/stack and see if we pass
14:08:49 <EmilienM> shardy: that would be a good first step, and only t-i-e and incubator for now
14:09:08 <shardy> Ok, I'll propose the patches and we can iterate on any remaining pieces we actually need
14:09:09 <EmilienM> shardy: not o-c-c because the patch that removes the dep is still WIP
14:09:18 <shardy> EmilienM: ack, yep we can do that later
14:09:43 <shardy> I think the CI admins are still in -incubator?
14:09:44 <EmilienM> #action shardy to run CI patch that removes the t-i-e and incubator projects
14:10:06 <shardy> I guess we may need some changes to move them to tripleo-ci
14:10:33 <EmilienM> #action EmilienM to remove os-cloud-config as a dependency (WIP) and follow-up on what shardy is testing for the 2 other projects
14:10:37 <shardy> Anyway, we can work out the details later unless anyone is opposed
14:10:40 <shardy> thanks!
14:10:43 <EmilienM> shardy: thanks!
14:10:50 <EmilienM> let's clean up things :)
14:10:54 <EmilienM> bogdando: o/
14:11:08 <bogdando> EmilienM: hi
14:11:48 <EmilienM> bogdando: what's up?
14:12:22 <bogdando> I have a few topics to announce and ask for ideas
14:12:32 <EmilienM> yeah go for it please, floor is yours
14:12:49 <bogdando> #topic minimal custom undercloud for containers dev/qa WIP
14:13:01 <bogdando> Custom undercloud layouts with dev branches and containers WIP (follows on the Flavio's patch)
14:13:01 <bogdando> #link https://review.openstack.org/#/c/450792/
14:13:34 <bogdando> so the idea is to deploy only the component under dev, with as minimal things as possible
14:13:48 <bogdando> your ideas on implementation are welcome
14:14:16 <bogdando> #topic improved getthelogs (CI scripts) for CI logs parsing UX
14:14:19 <EmilienM> bogdando: cool, sounds like you're asking for some reviews on quickstart patches
14:14:28 <trown> I reviewed that
14:14:43 <trown> I would prefer if the added functionality in that patch be moved to a new patch
14:14:48 <bogdando> well, yes, but not only reviews; also ideas on whether this should be done another way
14:15:02 <trown> that patch has been there for some time and we don't want to make it more complicated
14:15:11 <bogdando> and the 2nd item, I ask for reviews only :)
14:15:13 <bogdando> Getthelogs rework https://review.openstack.org/#/c/449552/ , an example log parsing session https://github.com/bogdando/fuel-log-parse/blob/master/README.md#examples-for-tripleo-ci-openstack-infra-logs
14:15:13 <bogdando> #link https://review.openstack.org/#/c/449552/
14:15:37 <bogdando> and you can try to use that for daily CI tshooting as well and give me feedback
14:15:48 <bogdando> that's it
14:15:51 <bogdando> go on :)
14:16:10 <bogdando> trown: I did that, it's a follow up now
14:16:15 <EmilienM> bogdando: yeah, it's hard to give feedback on the first thing since you sent the patch 30 min ago
14:16:22 <EmilienM> I don't think anyone had time to look at it
14:16:25 <trown> bogdando: cool thanks!
14:16:49 <EmilienM> chem: go ahead, seems like you need reviews on upgrade stuff
14:17:02 <bogdando> EmilienM: np. The patch is fresh, but I strongly believe not the very idea of dev shortcomings
14:17:03 <chem> hi all
14:17:09 <EmilienM> not sure we want to paste all the links here though, it would have been nice to create a Gerrit topic
14:17:31 <chem> yes those are pending review and backport needed for N->O upgrade
14:17:45 <chem> EmilienM: hum ... willing to learn for next time
14:17:56 <EmilienM> chem: could you please create a Gerrit topic for all these patches
14:18:05 <EmilienM> so we can track them more easily
14:18:07 <chem> EmilienM: ah ... ack
14:18:10 <bogdando> (oops, s/shortcomings/shortcuts/g)
14:18:20 <chem> EmilienM: oki will do
14:18:30 <EmilienM> chem: thanks.
14:18:34 <chem> EmilienM: will put it in the etherpad when done
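For tracking a set of patches under one Gerrit topic, as suggested above, the topic can be attached when pushing with git review -t <topic-name>, and the grouped changes can then be listed via the Gerrit REST API. A minimal sketch against review.openstack.org, with a hypothetical topic name:

```python
import json

import requests

GERRIT = "https://review.openstack.org"
TOPIC = "tripleo-upgrade-fixes"  # hypothetical topic name

# Query all open changes sharing the topic. Gerrit prefixes its JSON
# responses with the anti-XSSI marker ")]}'" which must be stripped.
resp = requests.get(GERRIT + "/changes/",
                    params={"q": "topic:" + TOPIC + " status:open"})
changes = json.loads(resp.text.lstrip(")]}'\n"))

for change in changes:
    print("%s %s (%s)" % (change["_number"], change["subject"], change["project"]))
```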
14:18:41 <marios> chem: so for l3agents on  https://review.openstack.org/#/c/445494/ was going to bring it up in bugs for https://bugs.launchpad.net/tripleo/+bug/1671504
14:18:41 <openstack> Launchpad bug 1671504 in tripleo "l3 agent downtime can cause tenant VM outages during upgrade" [High,In progress] - Assigned to Marios Andreou (marios-b)
14:18:43 <EmilienM> #action team to review chem's patches from https://etherpad.openstack.org/p/tripleo-meeting-items about upgrades
14:19:15 <chem> thanks
14:19:27 <EmilienM> panda: you had something to say too? I noticed the last point
14:20:04 <panda> no
14:20:19 <EmilienM> someone posted: FYI more doc on ci migration re: toci scripts https://review.openstack.org/#/c/450281/
14:20:40 <EmilienM> but no name, so I don't know who wants to talk about it
14:20:49 <EmilienM> ok moving on
14:20:51 <EmilienM> #topic bugs
14:20:56 <EmilienM> #link https://launchpad.net/tripleo/+milestone/pike-1
14:21:19 <marios> EmilienM: i have two things please https://bugs.launchpad.net/tripleo/+bug/1669714 comments 9/10 in particular... long story short, we were told to remove the openvswitch upgrade workaround. Now we need to add it with an extra flag and those reviews are in progress, see comment #9. Q: can we use that Bug (attempt to minimize confusion) even though it is in fix-released? Or do we have to file
14:21:19 <openstack> Launchpad bug 1669714 in tripleo "Newton to Ocata - upgrade to ovs 2.5->2.6 with current workaround and lose connectivity" [High,Fix released] - Assigned to Marios Andreou (marios-b)
14:21:21 <EmilienM> do we have outstanding bugs to discuss this week?
14:21:24 <marios> new one?
14:22:18 <EmilienM> marios: re-open it and re-use it
14:22:27 <EmilienM> marios: it will avoid confusion. It's same topic anyway
14:22:28 <marios> EmilienM: (the reviews for this are in chem list fyi). ack will do
14:22:35 <marios> EmilienM: exactly for this reason
14:22:53 <marios> EmilienM: this one was a request we are trying to get into stable/ocata for the ansible steps upgrade... i spent some time looking last week...  https://bugs.launchpad.net/tripleo/+bug/1671504
14:22:53 <openstack> Launchpad bug 1671504 in tripleo "l3 agent downtime can cause tenant VM outages during upgrade" [High,In progress] - Assigned to Marios Andreou (marios-b)
14:23:09 <marios> the review at https://review.openstack.org/#/c/445494/ does what it's meant to, by only killing one neutron-l3-agent at a time, but there is an ongoing/outstanding issue with neutron-openvswitch-agent. (see comment #2 on the bug for more info). Q: can we land /#/c/445494/ so we can get it to stable/ocata, even though it won't work until the packaging bug is fixed?
14:23:42 <marios> shardy: grateful for input, even if you didn't have time to check it yet/doesn't have to be right now thanks
14:23:56 <EmilienM> marios: what is ETA for packaging fix?
14:24:15 <marios> EmilienM: still tbd i mean there is no reliable fix yet at least not one i've been able to test
14:24:27 <shardy> marios: If it's been proven to solve one part of the problem I'm +1 on landing it, will review
14:25:02 <EmilienM> yeah I had a first review and I need more time to review again and vote.
14:25:21 <marios> EmilienM: shardy ack thanks appreciate anyone's review time /me done on bugs
14:25:33 <EmilienM> ok thanks marios
14:26:00 <EmilienM> so about bugs, quick reminder: I'm going to move all bugs that are not In progress from pike-1 to pike-2
14:26:07 <EmilienM> except these which are critical
14:26:19 <EmilienM> any comment ^?
14:26:59 <jpich> sounds fine
14:27:08 <EmilienM> #topic projects releases or stable backports
14:27:18 <EmilienM> quick update on tripleo-validations: we have now stable/ocata
14:27:42 <EmilienM> could someone from validations investigate this problem? https://review.openstack.org/#/c/450178/
14:27:56 <EmilienM> the python jobs fail on stable/ocata
14:28:13 <EmilienM> jrist: ^ can you look this week please (or find someone)
14:28:29 <jpich> florianf is looking into it
14:28:33 <florianf> EmilienM: in both cases the failures seem to be related to some package metadata file.
14:28:33 <EmilienM> neat
14:28:57 <EmilienM> florianf: ok. so you on it?
14:29:06 <florianf> EmilienM: it looks like a race condition, since different gates fail after rechecks. yep, I'm on it.
14:29:13 <EmilienM> ok
14:29:15 <shardy> florianf: are we just missing a dependency on pyparsing or something?
14:29:26 <EmilienM> quick reminder about Pike schedule
14:29:28 <EmilienM> #link https://releases.openstack.org/pike/schedule.html
14:29:37 <EmilienM> Pike-1 milestone is on Apr 10 - Apr 14
14:29:49 <florianf> shardy: Good point, I'll check for that as well.
14:30:16 <EmilienM> I'll propose a first tag on tripleo projects on April 13 most probably
14:30:30 <shardy> florianf: Hmm, maybe not, I see what you mean as the py27 job sometimes works
14:30:52 <EmilienM> any question about releases & stable branches?
14:31:21 <EmilienM> #topic CI
14:31:26 <florianf> shardy: there's an open github issue for setuptools that looks a bit like what we're seeing here. apparently other projects have started pinning down setuptools versions...
14:31:44 <shardy> florianf: ack, thanks for looking into it :)
14:32:00 <EmilienM> it would take 1 hour to describe all the issues we had in CI recently but I plan to write a post-mortem email when the saga is over
14:32:27 <EmilienM> #action EmilienM to write an email with all issues we had in CI recently
14:32:48 <EmilienM> the current status is that things should be much more stable and we should hopefully get a promotion today
14:33:08 <EmilienM> we need https://review.openstack.org/#/c/450481/2 and https://review.openstack.org/#/c/450756/1
14:34:00 <EmilienM> do we have any update about CI work?
14:34:10 <jpich> Is CI for stable branches getting more stable as well, or is that separate?
14:34:22 <trown> we should have that now
14:34:25 <shardy> dprince, jistr: did you want to mention the work around optimizing the container jobs?
14:34:27 <trown> hopefully
14:34:40 <EmilienM> adarazs sent a CI squad weekly status email: http://lists.openstack.org/pipermail/openstack-dev/2017-March/114634.html
14:34:43 <weshay> FYI.. panda put together a readme re: the changes to the toci scripts https://review.openstack.org/#/c/450281/ if anyone is finding it confusing
14:34:47 <shardy> that seems like a high-priority as we can't test upgrades without a much shorter walltime
14:35:18 <panda> if something is not clear, ping me and I'll add more information
14:35:24 <jistr> right. dprince might know more but in case he's not around --
14:35:26 <marios> EmilienM: just mentioning an issue i saw before the meeting for docker related dependencies in the upgrade job see https://review.openstack.org/#/c/450607/
14:35:30 <jistr> we need to speed up containers CI
14:35:34 <weshay> ya
14:35:35 <jistr> both the normal one
14:35:36 <adarazs> jpich: stable branches should work, as much as the whole CI does. :)
14:35:40 <jistr> and the upgrades one
14:35:43 <trown> #info all bugs have been moved from launchpad/tripleo-quickstart to launchpad/tripleo with the quickstart tag
14:35:44 <jistr> (which is WIP)
14:35:54 <sshnaidm> dprince is working on setup of local docker registry
14:35:58 <EmilienM> trown: I noticed. Thanks for that
14:36:02 <jistr> either we could build images instead of downloading them from dockerhub
14:36:07 <weshay> not sure how we'll run an upgrade + containers in 170min
14:36:08 <jistr> which *might* be a bit faster
14:36:10 <jpich> Ok, thanks!
14:36:12 <shardy> So I had one question, the local registry is only for OVB, right?
14:36:15 <jistr> but better seems a local dockerhub mirror
14:36:23 <shardy> really we need to solve this for multinode, because those jobs are already much faster
14:36:27 <jistr> right yes, the one that dprince is setting up is in OVB cloud
14:36:34 <sshnaidm> shardy, not only
14:36:34 <weshay> matbu, may have to chat about his tool to help w/ that
14:36:58 <sshnaidm> shardy, if it has public ip, it could be used in multinode too
14:37:06 <EmilienM> shardy: faster and also because we have all the scenario things that we want
14:37:12 <shardy> sshnaidm: but then we're still downloading a ton of stuff over the internet
14:37:19 <jistr> yea...
14:37:20 <shardy> vs a local cache
14:37:25 <sshnaidm> shardy, no, it will be local machine
14:37:26 <jistr> depends where the bottleneck is i guess
14:37:41 <sshnaidm> shardy, just updating by cron from docker hub
14:38:05 <jistr> sshnaidm: right but i think the OVB and the OS-infra clouds aren't collocated
14:38:11 <shardy> sshnaidm: For multinode, I'm not clear how that works, unless we build all the images every CI run, or just download them as we already do
14:38:17 <sshnaidm> shardy, jistr oh, right
14:38:24 <shardy> yeah the infra clouds could be one of many
14:38:44 <jistr> so for this we'd probably need support of os-infra folks?
14:38:52 <shardy> but we get a big performance improvement because the time taken to deploy the nodes via ironic etc is removed, and not considered as part of the timeout AFAIK
14:38:55 <EmilienM> pabelanger: ^
14:39:03 <shardy> jistr: yeah, I think we should start that discussion
14:39:17 <shardy> I suspect it's something which would be really useful to kolla too?
14:39:19 <EmilienM> do we want some AFS mirrors?
14:39:53 <jistr> shardy: yea i think so re usefulness to Kolla
14:39:55 <sshnaidm> I wonder if infra already has available docker registries
14:40:00 <jistr> and maybe other projects as well?
14:40:03 <pabelanger> EmilienM: where should I be reading?
14:40:04 <shardy> EmilienM: I think we want an infra hosted local docker registry with a mirror of certain things on dockerhub
14:40:14 <shardy> for all clouds used by infra
14:40:28 <shardy> pabelanger: we're trying to speed up our container CI jobs
14:40:31 <jistr> the dockerhub pull-through cache doesn't have to (and probably shouldn't) be tripleo-specific
14:40:45 <pabelanger> yes, that is something we'd like to do
14:40:45 <shardy> pabelanger: the bottleneck (or one of them) is downloading a ton of stuff from dockerhub
14:40:48 <sshnaidm> pabelanger, do you have available docker registries in infra?
14:40:57 <pabelanger> but we need to back the docker registry with AFS
14:41:08 <pabelanger> so all images are the same across all regions
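For context on what the node-side half of that would look like: the docker daemon can be pointed at a pull-through cache via its registry-mirrors option. A minimal sketch, assuming a hypothetical per-region mirror URL; the cache itself could be a stock registry:2 container run in proxy mode (REGISTRY_PROXY_REMOTEURL pointing at Docker Hub), while backing it with AFS as pabelanger describes is a separate infra-side task:

```python
import json

# Hypothetical per-region mirror endpoint; a real setup would use whatever
# cache openstack-infra ends up hosting in each cloud region.
MIRROR_URL = "http://docker-mirror.regionone.example.org:5000"

# /etc/docker/daemon.json is read by the docker daemon on (re)start, so the
# daemon has to be restarted after writing this for the mirror to be used.
config = {"registry-mirrors": [MIRROR_URL]}
with open("/etc/docker/daemon.json", "w") as f:
    json.dump(config, f, indent=2)
```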
14:41:43 <shardy> pabelanger: do we have any timeline for that work at all yet?
14:42:04 <shardy> pabelanger: basically we really need this, and some other optimizations, to enable major upgrade testing
14:42:06 <pabelanger> no, except that we want to do it
14:42:15 <shardy> otherwise we'll just run out of walltime before the timeout
14:42:39 <pabelanger> how much HDD are the containers taking?
14:43:00 <pabelanger> what kolla is doing is just publishing to tarballs.o.o for now, then downloading and building the registry themselves atm
14:43:45 <EmilienM> that could be a first step for us
14:44:00 <jistr> pabelanger: just checked, about 10G
14:44:00 <EmilienM> since we have the registry on the undercloud, right?
14:44:03 <sshnaidm> shardy, building a registry on the undercloud will help, won't it?
14:44:33 <jistr> sshnaidm: we already have a registry on undercloud
14:44:33 <EmilienM> well, it means pulling 10 GB each time we deploy an undercloud?
14:44:36 <shardy> sshnaidm: yes, but we still have to download the images from somewhere, or build them, because we build a new undercloud for every CI job
14:44:44 <jistr> sshnaidm: yea what EmilienM said
14:45:21 <shardy> sounds like the tarballs approach used by kolla may be worth looking into as a first step
14:45:27 <EmilienM> I hope I'm wrong and we don't need to download 10 GB at each CI run
14:45:37 <pabelanger> the only issue is, we have only so much storage on tarballs.o.o
14:45:38 <jistr> :-x
14:45:51 <pabelanger> that isn't your issue, but something openstack-infra needs to fix
14:45:56 <pabelanger> I can deal with that
14:46:28 <EmilienM> am I correct in saying that we need to pull 10 GB of data when deploying the undercloud and creating a local registry?
14:46:38 <shardy> jistr: does that number consider the image layering?
14:46:41 <bogdando> note, docker save deduplicates base images from the resulting tarball, we could use that trick to move all of the images
14:46:42 <shardy> it sounds like a lot
14:47:12 <jistr> the 10G is my /var/lib/docker/devicemapper on undercloud
14:47:16 <bogdando> if we have 50 images which overlap like 90%, the resulting saved artifact will be very small
14:47:21 <jistr> i don't have any containers running there
14:47:24 <jistr> so it's just the images
14:47:36 <shardy> EmilienM: I think we need to look at the actual amount downloaded vs the apparent size of the images, but I know from testing this you have to download a lot of stuff
14:47:36 <jistr> not sure if that's exactly the amount downloaded, probably not
14:47:44 <bogdando> to move around*
14:47:47 <shardy> and it's really slow as a result (slower than a normal puppet deploy)
14:47:48 <EmilienM> ok
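A quick sketch of the docker save trick bogdando refers to: saving several images into one tarball stores each shared base layer only once, so a bundle of heavily overlapping images is far smaller than the sum of their individual sizes. The image names here are illustrative:

```python
import subprocess

# Illustrative image names; the real list would come from whatever images
# the containerized deployment actually pulls.
images = [
    "tripleoupstream/centos-binary-nova-api:latest",
    "tripleoupstream/centos-binary-nova-scheduler:latest",
]

# "docker save" writes all listed images into a single tarball, storing each
# shared layer only once, which is where the deduplication comes from.
subprocess.check_call(["docker", "save", "-o", "images.tar"] + images)

# The bundle is restored on the consuming node (e.g. the undercloud
# registry host) with "docker load":
# subprocess.check_call(["docker", "load", "-i", "images.tar"])
```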
14:47:53 <EmilienM> can we take some #action here?
14:47:56 <EmilienM> and move on
14:49:33 <EmilienM> shardy, jistr: some actions would help to track what we said
14:50:11 <shardy> #action container squad to investigate downloaded vs apparent image sizes
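One possible starting point for that action item, assuming the images are pulled from Docker Hub: the registry v2 manifest lists the compressed size of every layer, which is roughly what a cold pull downloads, and can be compared against the uncompressed size reported locally by docker images. The repository name below is illustrative:

```python
import requests

IMAGE = "tripleoupstream/centos-binary-nova-api"  # illustrative repository
TAG = "latest"

# Docker Hub hands out anonymous bearer tokens per repository.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io",
            "scope": "repository:%s:pull" % IMAGE},
).json()["token"]

# The schema2 manifest lists each layer's compressed size in bytes; their
# sum approximates the amount actually transferred on a cold pull.
manifest = requests.get(
    "https://registry-1.docker.io/v2/%s/manifests/%s" % (IMAGE, TAG),
    headers={"Authorization": "Bearer " + token,
             "Accept": "application/vnd.docker.distribution.manifest.v2+json"},
).json()

downloaded = sum(layer["size"] for layer in manifest["layers"])
print("~%.0f MiB downloaded for %s:%s" % (downloaded / 1024.0 ** 2, IMAGE, TAG))
# Compare against the uncompressed size shown by: docker images <IMAGE>
```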
14:50:26 <EmilienM> shardy: thanks.
14:50:28 <EmilienM> #topic specs
14:50:32 <shardy> #action container squad to continue discussion with -infra re TripleO and kolla requirements for local/cached registry
14:50:50 <EmilienM> sounds like a good plan for short term
14:51:03 <EmilienM> #link https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:51:41 <EmilienM> quick reminder, we want Pike specs merged by the Pike-1 milestone otherwise they'll get postponed to the Queens cycle
14:52:06 <EmilienM> a lot of them are ready for review, please take some time
14:52:24 <EmilienM> is there anyone here who wants to discuss a spec?
14:54:44 <EmilienM> ok, let's move on.
14:54:48 <EmilienM> #topic open discussion
14:55:06 <EmilienM> if you have a question or feedback about TripleO, or want to bring up another topic, now is the right time
14:55:49 <EmilienM> it sounds like we can close this meeting
14:55:57 <EmilienM> thanks everyone
14:56:00 <matbu> i just want to warn that the delay for reviews in tripleo-quickstart is really long
14:56:05 <d0ugal> thanks!
14:56:09 <shardy> thanks all!
14:56:11 <EmilienM> matbu: why?
14:56:17 <matbu> idk what we should do, but it's pretty boring
14:56:41 <EmilienM> matbu: it's a good info.
14:56:52 <EmilienM> matbu: do you think we spend enough time on reviewing oooq patches?
14:57:19 <matbu> EmilienM: i think tripleo cores don't spend enough time and btw there are not enough cores
14:57:28 <matbu> or oooq
14:57:33 <EmilienM> matbu: there is not enough cores on what?
14:57:35 <shardy> It's difficult because a lot of tripleo-cores aren't yet oooq experts
14:57:43 <shardy> we need to improve that I guess
14:57:43 <d0ugal> it would be interesting to compare different tripleo repos to see where the longest wait is
14:57:44 <matbu> shardy: yep i know
14:57:54 <matbu> d0ugal: +1
14:58:05 <sshnaidm> matbu, you can ping people, it works usually
14:58:19 <matbu> most of the time when i do a commit in tht, even if it's WIP i get feedback pretty quickly
14:58:19 <EmilienM> we have 30 TripleO cores, I don't think it's fair to say we don't have enough cores
14:58:42 <d0ugal> EmilienM: but how many of them feel like oooq cores - I don't :)
14:58:44 <matbu> for oooq even if i add people, i have no such luck
14:59:13 <shardy> EmilienM: how many of the 30 are experts in tripleo-quickstart though?
14:59:15 <matbu> EmilienM: hehe yep for tripleo, i wasn't aware of 30, it looks really big :)
14:59:16 <EmilienM> well, it's hard to push people to review oooq
14:59:21 <shardy> sounds like a topic for the ML
14:59:23 <EmilienM> shardy: not enough
14:59:26 <d0ugal> matbu: I find adding people to reviews doesn't work - I guess because we all get so much gerrit email spam it is easily missed
14:59:27 <shardy> maybe we could have a deep-dive
14:59:34 <panda> +1
14:59:36 <EmilienM> matbu: https://review.openstack.org/#/admin/groups/190,members
14:59:48 <EmilienM> I'm going to close the meeting
14:59:50 <d0ugal> shardy: +100 :)
14:59:55 <EmilienM> but sounds like matbu you could run the topic on the ML please
14:59:57 <adarazs> that could help.
15:00:06 <EmilienM> #endmeeting