19:03:08 <fungi> #startmeeting infra
19:03:09 <openstack> Meeting started Tue Dec 22 19:03:08 2015 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:12 <openstack> The meeting name has been set to 'infra'
19:03:16 <crinkle> o/
19:03:22 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:32 <fungi> #topic Announcements
19:03:39 <fungi> #info Tentative deadline for infra-cloud sprint registrations is Friday, January
19:03:47 <fungi> grr, stray newline
19:03:49 <fungi> #undo
19:03:50 <openstack> Removing item from minutes: <ircmeeting.items.Info object at 0xad58ed0>
19:04:02 <jeblair> january is short next year i guess
19:04:13 <fungi> january is a friday
19:04:17 * anteaya calls the stray newline catcher
19:04:18 <AJaeger> jeblair: just a week long
19:04:25 <pleia2> well, last day is a sunday, I set date for friday
19:04:27 <fungi> #info Tentative deadline for infra-cloud sprint registrations is Friday, January 29, 2016.
19:04:38 <pleia2> heh, there we go
19:04:39 <fungi> #info HPE is catering lunches at the infra-cloud sprint; please update your registration with any dietary restrictions.
19:04:51 <fungi> #link http://lists.openstack.org/pipermail/openstack-infra/2015-December/003602.html
19:05:28 <fungi> #link https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
19:05:43 <fungi> #topic Actions from last meeting
19:05:59 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-15-19.03.html
19:06:13 <fungi> there were none, all completed successfully
19:06:22 <fungi> #topic Specs approval
19:06:26 <nibalizer> o/
19:06:34 <fungi> none approved last week, none on the agenda as proposed this week
19:06:46 <fungi> #topic Priority Efforts: Gerrit 2.11 Upgrade
19:07:06 <fungi> #info The OpenStack Gerrit instance at review.openstack.org has been upgraded from 2.8 to 2.11
19:07:17 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/082492.html
19:07:27 <fungi> this went great, kudos to zaro for coordinating and all the others who helped prepare for and execute the maintenance work!
19:07:52 <fungi> so, are we ready to close out the standing agenda item for this priority effort and move the spec into the completed list?
19:07:55 <jeblair> we're still running it!
19:08:03 <clarkb> fungi: +1
19:08:08 <pleia2> yay, no rollback
19:08:13 <clarkb> direct all UI complaints upstream
19:08:16 <AJaeger> ;)
19:08:17 <anteaya> I'm fine with closing the agenda item
19:08:17 <fungi> indeed, haven't given up in disgust and downgraded, or rage-coded a gerrit replacement just yet
19:08:21 <dougwig> clarkb: haha.
19:08:36 <clarkb> maybe if it isn't just a few of those crazy infra people the message will get across
19:08:49 <fungi> #action fungi remove standing Priority Effort meeting agenda item for "Gerrit 2.11 Upgrade"
19:08:59 <fungi> who wants to write that change to openstack-infra/infra-specs?
19:09:09 <dougwig> clarkb: it'll just drive more users to gertty.
19:09:09 <fungi> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/gerrit-2.11.html
19:09:22 <AJaeger> fungi, I can...
19:09:37 <fungi> #action AJaeger move gerrit-2.11 spec to the implemented list
19:09:41 <fungi> thanks!
19:09:44 <nibalizer> i've set up a polygerrit again, this time pointing at review.o.o http://polygerrit.nibalizer.com/
19:10:02 <fungi> #link http://polygerrit.nibalizer.com/
19:10:03 <nibalizer> I think it would be valuable for infra to run a polygerrit sort of officially, what do you all think?
19:10:17 <anteaya> nibalizer: that is beautiful
19:10:19 <nibalizer> I think it could help motivate people to work on the gerrit UI
19:10:36 <fungi> as long as someone (hint: you?) wants to take the lead on maintaining it and keeping it freshly updated, i don't see a downside
19:10:38 <jeblair> isn't it a bit early?
19:10:50 <nibalizer> jeblair: its way too early for it to be usable
19:10:51 <clarkb> nibalizer: it requires another (unneeded) service right?
19:11:00 <fungi> but yeah, does seem like you're likely to get cut by the bleeding edge there
19:11:14 <anteaya> I think it would raise unreasonable expectations
19:11:26 <anteaya> and just put pressure on you
19:11:28 <nibalizer> someone might show up expecting it to work... then be disappointed
19:11:36 <anteaya> exactly
19:11:58 <anteaya> if folks are interested they can do what you did and stand one up themselves
19:12:00 <fungi> good point, we wouldn't want it to look at all official or supported if the goal is to provide it as a motivational preview
19:12:27 <anteaya> but it is beautiful
19:12:35 <nibalizer> what we found with puppetboard was that just having the public openstack one visible drew in users and contributors
19:12:43 <jeblair> i think if we want to do it, we should be really careful in how we announce it (possibly by not announcing it), and in some way make it really clear that it's *highly* experimental
19:12:51 <fungi> but perhaps some instructional documentation on running your own polygerrit for testing purposes is a reasonable compromise, if their documentation isn't tailored to make that a simple-ish task
19:13:25 <anteaya> I favour instructions for running your own polygerrit, then people can begin there
19:13:28 <fungi> rather, documentation on pointing your own polygerrit at our gerrit server
19:13:34 <nibalizer> ok
19:13:36 <anteaya> then reassess after that has been out a while
19:13:44 <fungi> clearly instructions just about running polygerrit belong upstream instead
19:13:48 <anteaya> fungi: yes, that
19:14:17 <jeblair> nibalizer: perhaps circulating a blog post like "here's how i set up a polygerrit to point at openstack and started hacking on it"
19:14:19 <anteaya> I can't see us getting bitten nearly as bad by telling folks how to run their own
19:14:26 <nibalizer> jeblair: oh thats a good idea
19:14:33 <fungi> could even just start out as an ml thread about how to test polygerrit with our gerrit and then get refined into a short howto later
19:14:44 <fungi> er, yeah, what jeblair said if you s/blog/ml/
19:14:52 * fungi forgets some people have blogs
19:14:52 <nibalizer> okay, I'll do that stuff
19:15:05 <jeblair> nibalizer: or a series of 500 instructional tweets ;)
19:15:18 * jeblair may not have the knack of social media yet
19:15:23 <clarkb> jeblair: thats how all the cool kids do it these days
19:15:26 <nibalizer> jeblair: I think you've got it
19:15:29 <jeblair> awesome
19:15:41 <nibalizer> steal this one https://twitter.com/corvus
19:15:45 <fungi> if you compress and then uuencode your tweets you might be able to get it down to 300
19:16:09 * fungi may be mistaking twitter for usenet
19:16:09 <jeblair> that may or may not be me, hard to tell
19:16:13 <pleia2> hehe
19:16:16 <nibalizer> ok, I will make some kind of blog post or ML post trying to get the polygerrit buzz going
19:16:48 <pleia2> I'll help spread the word, as I do
19:16:59 <anteaya> you do a good job at that
19:16:59 <fungi> yes, enthusiasm for polygerrit and community insertion during the design phase seems like a net win to me, so however we can do that without, as anteaya says, setting unreasonable expectations with our users
19:18:03 <nibalizer> cool
19:18:09 <fungi> okay, anything else in this, our hopefully final, installment of the gerrit upgrade priority effort agenda topic? (until 2.12!)
19:18:30 <jeblair> i wondered how close we are to being able to CD
19:18:35 <fungi> i guess it wouldn't hurt to prognosticate a little about gerrit release timelines
19:18:40 <fungi> yeah, cd might also be interesting
19:18:41 <jeblair> but probably no one here can answer that
19:19:17 <jeblair> i guess online reindex is required..
19:19:18 <anteaya> I can not answer that
19:19:24 <jeblair> so maybe it's update to 2.12 then talk about cd?
19:19:39 <anteaya> when is 2.12 scheduled to be out?
19:19:53 <clarkb> we probably also need to invest in better upstream tests to keep broken out of our pipeline
19:19:54 <jeblair> yesterday?
19:19:57 <fungi> also should we consider updating to the latest 2.11 point release (and dropping some of our patches if fixed there)?
19:20:08 <anteaya> oh did it?
19:20:11 * clarkb worries about another occurrence of a major jgit bug that no one acknowledges exists
19:20:18 <jeblair> don't know, but that was the plan
19:20:24 <Clint> anteaya: https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.12.html
19:20:40 <jeblair> 16 hours ago
19:20:44 <clarkb> I think they do have CI now, so probably just a matter of writing tests
19:21:01 <fungi> how late in the cycle do we feel is too late to do an upgrade to 2.12? are we past that point and should plan for an upgrade in early niborg?
19:21:01 <cody-somerville> \o_
19:21:24 <anteaya> Clint: thank you
19:21:27 <fungi> nihilism
19:21:34 <fungi> (whatever name you voted for)
19:21:40 <clarkb> newton
19:22:08 <fungi> negatory
19:22:18 <anteaya> fungi: I'm for reviewing the feature set and seeing if there is anything that can't wait until early niborg
19:22:28 <anteaya> but leaning toward early niborg
19:23:24 <fungi> yeah, we're getting into the busy timr for this cycle once the holidays wrap up, so i feel like we should plan for may-ish
19:23:33 <fungi> s/timr/time/
19:23:52 <jeblair> probably 1st week of jan is as far as i'd want to stretch it
19:24:32 <fungi> and that seems like a pretty short planning window, especially with most of us awol between now and then
19:25:00 <pabelanger> Clint: signed commits looks nice
19:25:26 <anteaya> yeah, I'm kind of tired from supporting this one, not that I did much, not sure how much energy I could scrape together for another upgrade the week of Jan 1
19:25:30 <fungi> you can push signed commits into gerrit now (and some people already do), but you can't easily enforce them yet
19:25:33 <anteaya> I vote for may-ish
19:26:17 <fungi> and i'm unconvinced signing every commit is worth the additional overhead, though that's a discussion for another time
19:26:21 <jeblair> dropping the merge queue is nice
19:26:27 <jeblair> as is online reindex
19:26:31 <clarkb> fungi: right and nothing prevents you from doing it now aiui
19:26:38 <clarkb> jeblair: can we not drop the merge queue with 2.11?
19:26:38 <fungi> yeah, online reindex is going to be great
19:26:47 <jeblair> that will make our downtimes, both for renames and the upgrade to 2.12, faster
19:26:54 <jeblair> er shorter :)
19:27:07 <nibalizer> signing every commit would be :( for me
19:27:15 <jeblair> clarkb: erm, i don't think they removed the merge queue until 2.12
19:27:24 <nibalizer> having our release pipeline verify sigs against a keyring would be cool (though unrelated to this)
19:27:33 <clarkb> jeblair: maybe I misunderstood, I meant the zuul merge pipeline
19:27:36 <clarkb> jeblair: we can remove that with 2.11
19:27:56 <jeblair> clarkb: yes, if someone adds mergeability as a pipeline requirement to gate
19:28:00 <fungi> nibalizer: yeah, you end up with your signing key perpetually hot in the gpg agent, if you're constantly committing stuff, or typing your passphrase continually
19:28:17 <clarkb> jeblair: right needs extra checks to implement but doable with 2.11
19:28:18 <fungi> neither of which is necessarily great for keeping the key secured
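A minimal sketch of what the workflow under discussion might look like: signing commits pushed to Gerrit and capping how long gpg-agent caches the passphrase. The key ID, remote name, and TTL values are placeholders, not anything from the meeting.

    # assumes a GPG key already exists; the key ID below is a placeholder
    git config user.signingkey 0xDEADBEEF
    git commit -S -m "make the bindep fallback script self-testing"
    git push gerrit HEAD:refs/for/master   # "gerrit" is the remote git-review typically configures

    # ~/.gnupg/gpg-agent.conf: cache the passphrase for minutes, not forever (values in seconds)
    default-cache-ttl 600
    max-cache-ttl 3600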
19:28:23 <krotscheck> o/
19:28:29 <jeblair> clarkb: the merge queue in gerrit is the thing that caused 'submitted, merge pending' to happen
19:28:35 <clarkb> jeblair: ah
19:28:42 <jeblair> clarkb: in 2.12 merges are instantaneous success/failure
19:29:05 <jesusaurus> does the gerrit mergeability report back quickly, or will the change keep a +1 until it is approved?
19:29:29 <fungi> nibalizer: and tag signature validation is a planned follow-on to the current artifact signing spec
19:29:31 <clarkb> jesusaurus: I believe it happens as part of the processing for a push
19:29:57 <jesusaurus> imo its useful to get the immediate feedback of a -1 as soon as the patch is in conflict
19:30:16 <clarkb> with the one shot background task as part of upgrading to fill in that data
19:30:42 <nibalizer> fungi: excellent
19:30:46 <jeblair> jesusaurus: i'm not sure about the time.  it is intended to be fast, however, it is a background task.  it is probably faster than zuul.
19:31:02 <jesusaurus> jeblair: clarkb awesome
19:31:20 <jeblair> clarkb: it still has to test all open changes, just like zuul does
19:31:34 <jeblair> clarkb: so it's implemented with a background process and queue
19:31:41 <clarkb> jeblair: that's only for conflicts with right?
19:31:52 <clarkb> oh wait I see what you mean
19:31:55 <clarkb> when master changes
19:31:58 <jeblair> yep
19:32:10 <jeblair> (well, when a branch changes)
19:32:28 <fungi> i wouldn't be surprised if gerrit background-testing for merge conflicts via jgit ends up being slower than zuul's battalion of cgit-based merger workers
19:32:48 <jeblair> fungi: it would be if it weren't for zuul's hundreds of thousands of refs :)
19:32:57 <fungi> true
19:33:09 <fungi> i keep forgetting about the cruft
19:33:40 <fungi> ref expiration might be a nice thing to look at down the road after zuul/nodepool v3
19:33:47 <jeblair> yeah
19:34:21 <fungi> okay, dead horse now thoroughly beaten? or anything else on the gerrit upgrade topic?
19:34:55 <fungi> we don't have anything else on the agenda besides open discussion, so can continue gerrity things there as well if anyone thinks of something late
19:35:10 <clarkb> I will be approving the nodepool image builders changes shortly
19:35:30 <clarkb> and restart nodepool to pick that up, then build and upload an image by hand
19:35:33 <clarkb> to make sure its all happy
19:35:39 <fungi> #agreed We should plan for our next Gerrit major upgrade to happen around May, 2016.
19:36:14 <fungi> #topic Open discussion
19:36:20 <fungi> anteaya: i really enjoyed the virtual holiday party last week, thanks for organizing!
19:36:24 <clarkb> the way that should work is nodepoold will run a gear server and image builder for us by default
19:36:29 <jeblair> me too!
19:36:31 <anteaya> thanks all for attending
19:36:34 <pleia2> yes, thanks anteaya!
19:36:39 <anteaya> it was a nice way to spend a few hours
19:36:44 <clarkb> so the internal interfaces for building will change but the config and all that should all remain the same in one place
19:36:47 <anteaya> glad it worked
19:37:32 <jeblair> the gerritforge folks have .deb packages of gerrit
19:37:51 <jeblair> might be worth looking into at some point
19:37:51 <anteaya> clarkb: do you have time next week to look at kibana4 upgrade at all?
19:38:12 <fungi> here's an interesting (at least to me) discussion item... working on a new and somewhat complex slave script in project-config, it would be nice if it were self-tested. in this case it gets used in a job which runs against project-config changes so that's not too hard to arrange as a conditional on ZUUL_PROJECT, but should we consider broader options for self-testing our slave scripts?
19:38:12 <clarkb> anteaya: the logstash 2.0 upgrade is still not done, needs reviews last I checked
19:38:31 <anteaya> clarkb: okay so logstash 2.0 first, thanks
19:38:47 <jeblair> fungi: have fewer slave scripts? :)
19:38:47 <clarkb> fungi: jjb updates are not safe to test like that
19:39:03 <fungi> jeblair: yeah, certainly an option
19:39:10 <clarkb> fungi: if we could somehow isolate just the script itself then we could do something like that, however having tests for the script is probably a better solution in that case
19:39:19 <anteaya> fungi: what options were you thinking of?
19:39:55 <clarkb> anteaya: logstash-worker20 is still running logstash2.0 just fine so basically just need reviews then I can deploy across the board
19:40:08 <fungi> anteaya: dunno, my (likely terrible) train of thought was to make the slave script invocations relative and then add different search path entries if the job was running against project-config instead of another repo
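A rough sketch of that train of thought, assuming Zuul v2's ZUUL_PROJECT environment variable and Jenkins' WORKSPACE; the paths and script name here are illustrative, not a definitive layout.

    # hypothetical job snippet: prefer the just-proposed copy of the slave script
    # when the change under test is project-config itself, otherwise use the copy
    # baked into the worker image
    SCRIPTS=/usr/local/jenkins/slave_scripts
    if [ "$ZUUL_PROJECT" = "openstack-infra/project-config" ]; then
        SCRIPTS="$WORKSPACE/jenkins/scripts"
    fi
    "$SCRIPTS/install-distro-packages.sh"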
19:40:12 <jesusaurus> fungi: thats something I've thought about as well, but haven't come up with a reasonable way to do it
19:40:17 <pabelanger> I could use some help with zuul and python-apscheduler 3.0+: https://review.openstack.org/#/c/260637/ Just making sure the patch is sane
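For anyone reviewing 260637, a rough illustration of the general APScheduler 2.x to 3.x API change involved; the callback and schedule are made up for the example and are not Zuul's actual code.

    import apscheduler.schedulers.background

    def on_timer():
        # illustrative callback; not what Zuul itself registers
        pass

    # APScheduler 2.x style, removed in 3.0:
    #   sched = apscheduler.scheduler.Scheduler()
    #   sched.add_cron_job(on_timer, hour=2, minute=0)
    # APScheduler 3.x style:
    sched = apscheduler.schedulers.background.BackgroundScheduler()
    sched.add_job(on_timer, 'cron', hour=2, minute=0)
    sched.start()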
19:40:29 <anteaya> clarkb: okey dokey, I'll try to review those today (hopefully I can get some others to pile on too)
19:40:41 <pabelanger> now that I'm a fedora packager I'm trying to help package all the infra stuff :)
19:41:05 <anteaya> pabelanger: I missed that you are now a fedora packager
19:41:06 <jesusaurus> clarkb: what makes jjb tests not safe to test like that?
19:41:16 <anteaya> pabelanger: congratulations and my condolences :)
19:41:23 <clarkb> jesusaurus: jenkins has access to secrets, jjb jobs by definition have access to said secrets
19:41:26 <jeblair> pabelanger: i think apsched is well tested in zuul so should be good
19:41:35 <jesusaurus> clarkb: oh true
19:41:36 <pabelanger> anteaya: thanks, fun times ahead
19:41:38 <clarkb> jesusaurus: this means you can't run arbitrary changes to jenkins jobs without review
19:41:44 <anteaya> pabelanger: true that
19:41:53 <clarkb> jesusaurus: this is particularly true of many of our scripts as well
19:41:59 <jeblair> pabelanger: maybe check with hashar and see if that makes life hard for him?
19:42:12 <hashar> hello
19:42:16 <pabelanger> jeblair: good point!
19:42:19 <pabelanger> hashar: ohai!
19:42:19 <jeblair> hashar: https://review.openstack.org/260637
19:42:40 <fungi> clarkb: ahh, yeah though proposed changes don't (at least currently) have any jobs that run on long-term workers which house credentials, right?
19:42:41 <clarkb> jesusaurus: fungi this is why having tests for the changes that don't touch our actual systems would be the best solution imo
19:42:42 <anteaya> fungi jesusaurus that train of thought may be terrible but I have nothing better to offer
19:42:47 <crinkle> should we select a time for wiki/paste/nodepool downtime for puppet-mysql upgrade?
19:43:06 <hashar> pabelanger: jeblair: I noticed the apscheduler version bump. Will comment on it right now. Thanks for the ping!
19:43:08 <clarkb> fungi: right, but the case this came up in does run on long term workers aiui
19:43:16 <clarkb> fungi: so to address the use case we have to consider this
19:43:34 <fungi> clarkb: oh, you're likely talking about a different case than i was considering
19:43:50 <clarkb> ah maybe
19:44:03 <pabelanger> hashar: thanks
19:44:19 <fungi> i was mostly wanting to make changes to the bindep fallback test job's slave script self-testing so that i don't have to wait for an image rebuild cycle to find out for sure that i don't need to roll back a modification
19:44:39 <anteaya> fungi: that is a good use case
19:44:45 <fungi> which would be easy since that job runs on changes proposed to the repo where the script resides
19:45:06 <anteaya> not having to wait on an image rebuild cycle to evaluate a change would be helpful
19:45:36 <jeblair> honestly, i don't think most of our slaves scripts need to be slave scripts anymore
19:45:54 * anteaya is interested in jeblair's point
19:45:56 <jeblair> that's a good pattern when the alternative is a text field in jenkins
19:46:08 <fungi> what do we do with them instead? e.g. for the bindep jobs you asked me to move the logic out of the job definitions and into slave scripts that could be preinstalled on the workers
19:46:30 <jeblair> fungi: maybe i don't understand which thing you're talking about
19:46:58 <jeblair> fungi: i thought you were talking about a script used in a single job the "bindep fallback test job's slave script"
19:47:17 <clarkb> the use case I had in mind was related to requirements updates and translation update type things which do run on long-lived slaves
19:48:06 <fungi> jeblair: well, the script is used by that job, but also used in a job macro to install dependencies of bindep-using jobs, so that the same logic can be exercised on the fallback rather than having a more rarefied test
19:48:51 <hashar> pabelanger: I +1ed the Zuul python-apscheduler version bump ( https://review.openstack.org/#/c/260637/ ).  In short Wikimedia uses dh_virtualenv to complete modules not provided by the distro.
19:49:03 <fungi> jeblair: example job templates are in the experimental-workers.yaml jjb config
19:49:38 <angdraug> mwhahaha talked to me about the problem with too many fuel plugin repos: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-12-22.log.html#t2015-12-22T15:20:07
19:49:43 <fungi> showing the same script used for a tox-based unit test job and for testing that changes to bindep and the fallback packages list don't break under the same script
19:50:05 <angdraug> personally I'd like to keep the plugins ecosystem a free-for-all instead of burdening with too much governance
19:50:10 <fungi> so having changes to that script also covered by the job would close a bit of a loophole
19:50:54 <angdraug> how much of a problem or nuisance is the current situation with proliferation of poorly maintained plugins?
19:51:24 <fungi> angdraug: we generally don't want to host anything which is poorly maintained
19:51:44 <angdraug> we can move most or even all of these repos to review.fuel-infra.org
19:51:54 <fungi> high quality assurance expectations are a big part of our community
19:52:19 <fungi> just sort of curious why anyone would want to host poorly-maintained software
19:52:41 <fungi> assuming your assertion is that the software in question is poorly-maintained
19:52:46 <angdraug> most of the poorly maintained plugins come from a one-off need to deploy a single cloud
19:52:47 <AJaeger> angdraug: we now have 82 repos and every week comes one more, that puts a lot of review burden on us. Especially if I have to ask the same questions with every review (is this part of fuel? Part of big tent?)
19:53:02 <AJaeger> angdraug: why not create *one* repo for this?
19:53:25 <angdraug> having one repo would be a mess from ownership and merge rights perspective
19:53:27 <AJaeger> one-offs do not belong as separate repos in our infrastructure IMHO
19:53:37 <anteaya> AJaeger: agreed
19:53:52 <angdraug> agreed, but they're still potentially useful to other people who would need to deploy a similar configuration later
19:53:56 <anteaya> the maintenance burden for them falls to project-config reviewers
19:54:01 <AJaeger> angdraug: most have mescanef as owner already ;)
19:54:02 <angdraug> so I don't want to keep them private
19:54:21 <angdraug> it won't be a problem for us to move it to f-i.o
19:54:23 <fungi> AJaeger: also curious where your 82 figure comes from. my count this morning was 88 repos matching a "fuel" substring
19:54:45 <fungi> just shy of 10% of our total repository count
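For anyone wanting to reproduce that count, a quick sketch using Gerrit's ssh CLI; substitute your own username, and note the loose substring match also picks up non-plugin repos.

    ssh -p 29418 USERNAME@review.openstack.org gerrit ls-projects | grep -ci fuel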
19:54:45 <angdraug> fungi: are you excluding non-plugin repos?
19:54:48 <AJaeger> fungi: I just quickly checked what I checked out - since I need to update my local repos again...
19:54:54 <hashar> if repos are very similar, you could use a zuul template that adds generic jobs to a project.  That eases configuration
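A minimal sketch of hashar's suggestion using Zuul v2 layout.yaml project-templates; the template, job, and repo names are placeholders.

    project-templates:
      - name: fuel-plugin-jobs
        check:
          - 'gate-{name}-pep8'
        gate:
          - 'gate-{name}-pep8'

    projects:
      - name: openstack/fuel-plugin-example
        template:
          - name: fuel-plugin-jobs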
19:55:03 <AJaeger> checked out locally I mean
19:55:16 <fungi> angdraug: just getting an upper bound on how many repositories the fuel ecosystem subset entails
19:55:23 <anteaya> AJaeger: ah so 6 more were created since you last updated locally
19:55:23 <clarkb> fwiw tripleo image elements collects a bunch of plugin-like things into a central area and that seems to work well enough
19:55:31 <AJaeger> exactly, anteaya
19:55:32 <jeblair> or contribute to reviewing the project-config repo
19:55:47 <angdraug> for the record I'm not saying all of them are poorly maintained, only that we currently don't have any kind of requirement on how well maintained it is
19:56:05 <anteaya> angdraug: perhaps you could begin
19:56:20 <anteaya> angdraug: rather than defering the burden to the project-config reviewers
19:56:49 <angdraug> anteaya: begin reviewing fuel-plugin-* repo creation requests?
19:57:04 <fungi> begin reviewing all repo creation requests
19:57:18 <anteaya> begin to have requirements for how well maintained a fuel repo is
19:57:23 <fungi> if people only reviewed their own code, nothing would ever merge
19:57:26 <dougwig> at the risk of being burned alive for suggesting a non-open tool, it really sounds like an open-source github org would better fit the needs you're trying to serve.
19:57:32 <angdraug> fungi: point taken
19:57:52 <AJaeger> angdraug: thanks for showing up and raising this, greatly appreciated!
19:57:59 <angdraug> dougwig: we'd rather put it on r.f-i.o because it has gerrit on it
19:58:45 <fungi> yeah, if these repos are going to have semi-regular contributions and the people managing those contributions want to peer review them, then they seem like a fit for our infrastructure
19:58:51 <jeblair> fungi: ++
19:59:04 <angdraug> I will keep pushing for more infra contribution from fuel folks, that's very high on my list of priorities, honest :)
19:59:22 <jeblair> (if one wants to dump some code onto the internet and abandon it, i fully endorse the use of github)
19:59:32 <fungi> i think there's just a hesitance on the part of the current primary project-config repo reviewers given the volume of new repos they see being requested by the fuel ecosystem, and confusion over their relationship to the fuel project-team
19:59:42 <anteaya> wooo a github endorsement from jeblair!
19:59:53 <angdraug> in the meanwhile, I'll carefully consider some objective maintenance-level criteria for fuel plugins
19:59:59 <jeblair> (bonus points for omitting a license)
20:00:06 <angdraug> and also propose internally to use f-i instead of o.o as the default dumping ground
20:00:07 <dougwig> jeblair: lol
20:00:10 <anteaya> angdraug: thank you, I appreciate that
20:00:21 <fungi> especially if an active member of the fuel project-team is proposing these new repos, but won't be the one maintaining them (and if they are, then why aren't they part of fuel?). i get that it's a complex situation
20:00:29 <angdraug> anteaya: yw, our mess to begin with )
20:00:32 <AJaeger> fungi: 18 of 88 fuel repos are part of the governance repo
20:00:34 <anteaya> angdraug: I find even asking the question, so what are your intentions? to be very enlightening
20:00:36 <fungi> oops, we're out of time
20:00:43 <anteaya> thanks fungi
20:00:44 <fungi> thanks all!
20:00:47 <fungi> #endmeeting