19:01:36 <fungi> #startmeeting infra
19:01:37 <openstack> Meeting started Tue Dec  1 19:01:36 2015 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:41 <openstack> The meeting name has been set to 'infra'
19:01:42 <ruagair> o/
19:01:46 <Shrews> ahoy
19:01:47 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:01:58 <fungi> #topic Announcements
19:02:14 <fungi> i'm not aware of any important announcements. anyone have anything pressing here?
19:02:35 <olaph> nope
19:02:40 <jesusaurus> log_processor has been split out into its own project
19:02:40 <fungi> #topic Actions from last meeting
19:02:47 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-11-24-19.01.html
19:02:51 <fungi> there were none
19:03:04 <zaro> o/
19:03:07 <fungi> #topic Specs approval
19:03:28 <fungi> also none for this week, though dhellmann has a release automation one which will likely be on the proposed list next week
19:03:48 <fungi> #topic Priority Efforts: Nodepool provider status (jeblair)
19:03:57 <jeblair> howdy
19:04:05 <yolanda> o/
19:04:06 <fungi> this is our specless/ongoing priority effort
19:04:09 <mmedvede> o/
19:04:13 <jeblair> so we have ovh providing 160 nodes now...
19:04:22 <jeblair> they're still hoping to add more as they are able
19:04:23 <asselin> o/
19:04:26 <fungi> that's an impressive addition
19:04:30 <fungi> thanks ovh!
19:04:48 <jeblair> from what i can see, runtimes are generally between rax and hpcloud, but occasionally worse than hpcloud
19:04:54 <fungi> #info OVH nodepool worker count is now up to 160
19:05:03 <jeblair> probably due to our oversubscription
19:05:28 <jeblair> so my questions for the group are: are we happy with that level of performance so far?  and do we want to consider them in production?
19:05:40 <fungi> signs point to yes
19:06:19 <clarkb> other clouds have variance as well so the occasional job being slower isn't abnormal
19:06:21 <fungi> i haven't heard any complaints about jobs which have run there, at least
19:06:35 <anteaya> +1 yes
19:06:36 <clarkb> the only complaints I have seen have been related to disk size iirc
19:06:47 <fungi> this is 160 nodes between a couple regions in ovh, yeah?
19:06:48 <nibalizer> jeblair: can you point us to a graphite/grafana graph showing this data?
19:06:48 <jeblair> clarkb: i think we got a larger disk along the way
19:06:49 <clarkb> as there is no large ephemeral drive for the jobs to rely on
19:06:52 <krotscheck> o/
19:06:53 <jeblair> we're up to 80 now
19:07:09 <clarkb> jeblair: I believe it was the ansible folks assuming that a large ephemeral drive was mountable
19:07:12 <fungi> yeah, i think we determined that was plenty for grenade at least
19:07:32 <jeblair> nibalizer: not yet, i'm working on asking our providers for permission to make such a thing public
19:07:53 <jeblair> nibalizer: so far that's looking promising
19:07:57 <jeblair> so i hope we'll have that soon
19:08:05 <jeblair> (having said that, the data *are* in graphite)
19:08:13 <pabelanger> jeblair: impressive
19:08:25 <nibalizer> jeblair: fair
19:08:29 <nibalizer> thanks
19:08:54 <fungi> so sounds like we're agreed ovh is successfully in production in nodepool for us?
19:09:08 <jeblair> cool, i'll let them know...
19:09:12 <jeblair> quick update on the others:
19:09:21 <fungi> #agreed OVH nodepool implementation is successfully in production
19:09:27 <anteaya> yay!
19:09:32 <pleia2> :)
19:09:56 <jeblair> we just got enough floating ips from bluebox to use the full cpu capacity; we might see runtimes drop there, but we're also expecting to get replacement hardware which is faster, so expect them to increase afterwards
19:10:14 <anteaya> again yay!
19:10:27 <fungi> what's the total number of ip addresses/anticipated capacity for that deployment?
19:10:30 <jeblair> that's 39 nodes; i'm also hoping that will increase, but that's at the "desire" stage rather than "implementation"
19:10:35 <fungi> got it
19:10:51 <clarkb> fungi: 316 vcpus/ 8 per VM
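(316 vCPUs at 8 vCPUs per test VM works out to the 39 nodes jeblair mentions.)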
19:11:07 <jeblair> and i think we're about ready to dip our toes into internap, waiting to be able to confirm my changes to nodepool work there
19:11:15 <jeblair> EOT
19:11:17 <nibalizer> awesome
19:11:24 <anteaya> yay
19:11:34 <nibalizer> thanks bluebox, ovh, and internap!
19:11:38 <fungi> #info Nodepool use of Bluebox is progressing
19:12:02 <fungi> #info Nodepool use of Internap resources is on the way soon
19:12:34 <fungi> #topic Priority Efforts: Gerrit 2.11 Upgrade (zaro, et al)
19:12:51 * notmorgan is here too.
19:13:08 <fungi> mostly want to make sure we hammer at this and don't let the reschedule slip any further than we can help
19:13:10 <zaro> #link https://etherpad.openstack.org/p/test-gerrit-2.11
19:13:15 <clarkb> notmorgan: you said you had an apache workaround for the openid thing?
19:13:23 <notmorgan> clarkb: for the double-slash thing
19:13:25 <anteaya> fungi: yes
19:13:34 <notmorgan> the redirections are not really fixable in apache
19:13:42 <clarkb> notmorgan: right
19:13:48 <clarkb> notmorgan: can you link it here?
19:13:50 <notmorgan> they seem to be missing important data from the query-string
19:13:56 <anteaya> two patches up for the double slash redirect issue are at the bottom of the etherpad zaro linked to
19:13:59 <notmorgan> #link https://review.openstack.org/#/c/249714/
19:14:02 <zaro> redirect seems pretty minor though. it's only when user is not logged in.
19:14:06 <notmorgan> that removes the double slashes
19:14:57 <fungi> yeah, while it's an annoying regression i'm of the opinion we could go forward and just let the dev community know it's a rough patch while upstream works through a fix
19:14:57 <notmorgan> the only option to restore redirect (afaict) is to use proxypass instead of mod_rewrite+mod_proxy
19:15:13 <clarkb> notmorgan: or fix gerrit
19:15:13 <notmorgan> and that would hurt a lot of offload we do
19:15:25 <notmorgan> clarkb: that was assuming not getting a fix from upstream :)
19:15:55 <fungi> or rather as an interim workaround while upstream gerrit continues the debate on how they want it fixed
19:15:55 <zaro> revert #link https://review.openstack.org/248411 would fix both
19:16:35 <clarkb> I think reverting the breaking change in our gerrit makes sense as long as we are confident that the issue (what was it by the way) the breaking change fixes won't affect us
19:17:10 <notmorgan> zaro: ++
19:17:27 <fungi> i agree it's always good to try to help fix these things, though at the moment the delay seemed (from the upstream bug) more one of deciding what was an acceptable solution rather than the actual writing it
19:17:34 <notmorgan> clarkb: agree. a more complete fix as long as it doesn't cause major issues is better.
19:17:42 <jeblair> if we think it's going to get fixed upstream, sure; but otherwise, we'll just have this conversation again in a year when we've forgotten everything.  :)
19:17:43 <anteaya> fungi: from what zaro tells me the redirect double // issue is the last work item; if you feel it isn't a blocker, perhaps we can move to discussing a time for the upgrade and continue to discuss the fix in the interim
19:17:51 <zaro> breaking change was supposed to fix dropped tokens.  it looks like it keeps the tokens in other situations but drops them for our situation.
19:17:59 <zaro> *situation/configuration
19:18:14 <zaro> I haven't noticed any bad side effects from the revert.
19:19:18 <fungi> running with backport fixes from newer upstream releases is one thing, running with our guess as to how upstream will fix an outstanding bug is a lot further down the road to running a fork again
19:19:30 <fungi> so just want to make sure we consider that situation carefully
19:19:41 <clarkb> I don't think we expect upstream to revert do we?
19:19:56 <zaro> clarkb: very unlikely
19:20:13 <notmorgan> clarkb: i wouldn't expect it, i would expect a further-down-the-road fix that takes a stab at addressing this new case
19:20:14 <anteaya> they have -2'd the revert patch, have they not?
19:20:45 <zaro> -1 from one of the cores (hugo) i think.
19:21:16 * anteaya tries to find
19:21:34 <zaro> attempted revert is at the bottom of the etherpad
19:21:44 <jeblair> basically, i really don't want to ask notmorgan to spend another holiday doing this again next year, so if we have a workaround that isn't the revert, i have a slight preference for that; unless we're really sure it's going to get fixed upstream for realz by the next time we upgrade.
19:22:33 <notmorgan> jeblair: i appreciate the sentiment :)
19:22:45 <zaro> i agree with jeblair, notmorgan workaround would be preferable atm.
19:22:46 <fungi> i too am inclined to take notmorgan's partial workaround and just make it clear in the upgrade announcement that redirects on login aren't quite how they're supposed to be
19:22:59 <clarkb> fungi: well its worse than that
19:23:02 <clarkb> fungi: they don't work at all
19:23:14 <clarkb> you will end up back at /
19:23:21 <clarkb> regardless of where you started
19:23:25 <notmorgan> clarkb: correct.
19:23:28 <jeblair> that's :(
19:23:37 <clarkb> which to me as a user of the web interface that logs in a bunch because gerrit likes to log me out will make me very unhappy
19:23:45 <notmorgan> i have an approach that could fix it, but it requires layering in another proxy point.
19:23:50 <clarkb> which is why my preference is the revert
19:23:52 <anteaya> #link of zaro's revert patch that is -1'd by hugo (the author of the breaking patch) https://gerrit-review.googlesource.com/#/c/72720/
19:23:52 <notmorgan> because it's the only way to address a QSA
19:24:05 <notmorgan> in mod_rewrite post proxy
19:24:08 <notmorgan> it's not pretty
19:24:19 <notmorgan> (or using something that can act on L7 like HAProxy)
19:24:31 <notmorgan> but i could add the correct token back in that way
19:24:40 <notmorgan> i would *prefer* to not go down that path
19:25:43 <zaro> anteaya: hugo was not the author
19:26:29 <anteaya> zaro: wasn't this the patch that created the problem? https://gerrit-review.googlesource.com/#/c/57800
19:26:39 <notmorgan> anteaya: that looks like the right number
19:27:05 <zaro> yes. owner is hugo, author is simon
19:27:07 <anteaya> sorry don't mean to create noise
19:27:12 <anteaya> zaro: ah sorry
19:27:36 <clarkb> so sounds like we should maybe push upstream a bit more on this and maybe offer our own fix or strategy for one that isn't a revert?
19:27:44 <fungi> i didn't consider that gerrit has a tendency to log out some users often. it tends to leave me logged in pretty consistently but my case may not be representative
19:27:57 <clarkb> fungi: I get logged out probably 20 times a day
19:28:00 <fungi> ouch
19:28:09 <ruagair> +1 clarkb
19:28:14 <notmorgan> fungi: i am regularly logged out
19:28:21 <anteaya> well no redirect will be a pain for me as I often have several patches open before I am logged in
19:28:33 <anteaya> but once logged in I'm logged in for the day
19:28:35 <notmorgan> if you want me to spin a patch that will re-add the token back in, i can do that on top of my current one
19:28:40 <notmorgan> just to show how it would work
19:28:47 <nibalizer> clarkb: ouch how?
19:28:52 <notmorgan> but i really don't think you'll like it
19:29:30 <notmorgan> there is a 3rd option
19:30:02 <clarkb> nibalizer: when you switch between tabs it decides the new tab needs to log in and the old tab is logged out
19:30:11 <notmorgan> we can layer in L7 routing and handle the offload there instead of directly in mod_rewrite (aka haproxy or another proxy tier like i have been describing) then proxypass to gerrit directly
19:30:44 <notmorgan> it seems proxypass handles the querystring correctly fwiw.
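For reference, the straight ProxyPass approach notmorgan describes would look roughly like the sketch below; the backend address and port are illustrative, not the actual review.openstack.org vhost:

    # Hand requests to the Gerrit backend directly; per notmorgan above, this
    # passes the query string through correctly, unlike the current rewrite
    # rules. (Backend address is made up for illustration.)
    ProxyPass        / http://127.0.0.1:8081/ nocanon
    ProxyPassReverse / http://127.0.0.1:8081/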
19:30:48 <fungi> clarkb: ahh, yeah i work around that by backing up to the original page again and doing a refresh before retrying to click something on it
19:31:08 <fungi> that seems to then notice it's logged in anew
19:31:24 <fungi> rather than logging me out
19:31:28 <notmorgan> fungi: interesting.
19:31:45 <anteaya> yes if I have one tab open and log in with that tab then all other tabs have me logged in
19:32:00 <anteaya> if I don't it logs me out all the time
19:32:00 * SotK is also rarely logged out automatically
19:32:15 <SotK> it tends to only happen if I log in on a different machine
19:32:16 <zaro> can i ask what's everybody's concern regarding the revert?
19:32:17 <pabelanger> clarkb: nibalizer: Yup, learned that the hard way long ago.  It stinks
19:32:54 <jeblair> zaro: it's technical debt that makes it harder for us to upgrade.  we can accept that, but someone needs to pay it down before the next time we upgrade.
19:32:56 <notmorgan> zaro: i think the biggest concern is "what is the real upstream fix going to be down the line? and will this revert make it significantly harder to follow future releases"
19:33:11 <notmorgan> zaro: jeblair said it better ;)
19:33:23 <fungi> zaro: concern being that we don't know how upstream will fix it (and have reason to expect it won't be this way, if ever), so we're carrying a divergence for an indeterminate/indefinite timeframe
19:33:35 <anteaya> fwiw the gerrit folks said they would like to help us but they can't reproduce the issue
19:33:39 <zaro> do you mean to the gerrit 2.12? or 2.11.x?
19:33:47 <anteaya> in the comments on https://gerrit-review.googlesource.com/#/c/57800
19:33:53 <clarkb> anteaya: I thought it was reproducible using our mod rewrite rules?
19:33:54 <zaro> if i had to guess it  should be fixed in 2.12
19:34:01 <anteaya> clarkb: they can't reproduce
19:34:08 <fungi> anteaya: reproduction was clarified in the bug
19:34:11 <fungi> #link https://code.google.com/p/gerrit/issues/detail?id=3365
19:34:17 <notmorgan> i was unable to reproduce complete broken-ness locally
19:34:29 <anteaya> according to the comments on https://gerrit-review.googlesource.com/#/c/57800 they can't reproduce, unless that is old information
19:34:45 <notmorgan> but i could repro the // issue, it just worked.
19:34:54 <zaro> fwiw, i'm going to work on it and it seems like hugo is willing to help
19:34:58 <fungi> at least david commented that he was able to reproduce
19:35:10 <fungi> (in the bug)
19:36:06 <anteaya> bug date is more recent than comments on 57800
19:36:12 <zaro> dimtruck is able to repro.  i'm working on setting up the same ENV
19:37:02 <zaro> from what i hear, hugo is on paternity leave for the next few weeks
19:37:27 <fungi> so anyway, i guess we need to arrive at a decision on how we're addressing the login redirect, or continue to defer the upgrade maintenance
19:38:45 <zaro> i'm +1 for notmorgan workaround, but sounds like ppl are unhappy with that so +1 for revert as well.
19:39:03 <fungi> we have leaving login redirects broken as option 1, carrying a fork with the broken change reverted as option 2, or switching to something with mod_proxy as option 3
19:39:03 <jeblair> i'm okay with the revert if zaro's going to continue to work on the real fix
19:39:51 <anteaya> I'm fine with whatever is decided
19:39:58 <jeblair> we didn't really discuss option 3 much, but i feel like adding in haproxy for this is maybe a bit heavyweight
19:40:03 <fungi> according to hugo's reply on the proposed revert there are other dependent commits which also need reverting?
19:40:49 <zaro> i think there was only 1 and i don't think it affects us.
19:41:11 <notmorgan> ping me if any further apache work is needed, happy to dig in on that front.
19:41:13 <zaro> is option 3 a no go either?
19:41:21 <fungi> i was more asking whether we need to rever that too, or if it'll still build sanely
19:41:25 <notmorgan> can we afford to lose the mod_rewrite offload?
19:41:30 <fungi> er, to revert
19:41:51 <notmorgan> if we can, then... the proxypass is the lowest impact change.
19:42:21 <zaro> fungi: still builds no problem, it's on review-dev.o.o now
19:42:32 <fungi> i guess the /p/ rewrite is the main thing we get there. /fakestore is really only relevant to review-dev and we could probably live without a robots.txt
19:42:52 <jeblair> the /p/ rewrite is heavily used
19:43:29 <fungi> by zuul?
19:43:31 <notmorgan> hm.
19:43:37 <pleia2> and people
19:43:56 * mordred waves - apologizes for blowing calendar timezones somehow
19:43:58 <fungi> we've tried really hard to get people to stop using that local mirror, if memory serves
19:44:12 <notmorgan> so maybe i can maintain the /p/ rewrite and still mod_proxy
19:44:20 <notmorgan> now that i think about it
19:44:23 <pleia2> fungi: I suppose breaking it may get them to stop for good :)
19:44:27 <fungi> also now it's unable to serve shallow clones because of git smart http on trusty
19:44:33 <jeblair> notmorgan: oh that would be neat
19:44:44 <notmorgan> jeblair: i'll circle up on that here shortly.
19:45:00 <notmorgan> it might be less pretty but it might be workable. i think a couple spare <location> blocks will suffice
19:45:20 <notmorgan> but let me 2x check, if i can't lets fall back to the revert or punt while zaro works on this.
19:45:34 <notmorgan> i'll have an answer on if i can do this by tomorrow-ish fwiw
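If the /p/ handling needs to survive a switch to ProxyPass, Apache's exclusion syntax is one way to keep it local, along the lines of the extra <Location> blocks notmorgan mentions above; this is a guess at the shape, not the real config:

    # Exclusions must precede the general ProxyPass.
    ProxyPass /p/ !
    # ... the existing local /p/ rewrite/alias handling would remain here ...
    ProxyPass        / http://127.0.0.1:8081/ nocanon
    ProxyPassReverse / http://127.0.0.1:8081/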
19:46:15 <jeblair> fungi: i'm not sure what's using it
19:46:25 <fungi> so it sounds like maybe the consensus is that we'd like to do option 3 (mod_proxy) if it can still support our needs, but that a fallback to option 2 (revert the breaking change and work upstream on a real fix) is still preferable to option 1 (leave it broken and work with upstream on a real fix)
19:46:45 <anteaya> fungi: I can live with that
19:47:11 <zaro> +1
19:47:14 <fungi> in that case i'd be inclined to try to pick a maintenance window and assume we'll do 3 if possible otherwise 2
19:47:23 <zaro> +1
19:47:31 <anteaya> I'm for that too
19:48:00 <jeblair> wfm
19:48:20 <anteaya> to stay away from a christmas rollback I think we should do something prior to the 19th
19:48:21 <fungi> assuming no other objections, when do people like for the window. another wednesday like the last one (so that we can have fairly immediate confirmation under load rather than waiting two days to find load breaks us)?
19:48:23 <notmorgan> ok i'll commit to having an answer/update tomorrow then
19:48:35 <notmorgan> so we have windows ASAP for the way forward
19:48:42 <anteaya> I'm available any day between now and the 19th
19:48:46 <anteaya> any day is fine with me
19:49:32 <jeblair> < dec 19 seems to work with the release schedule
19:49:43 <fungi> also, one thing i want to keep in mind is that we still have the openstack-attic/akanada typo to clean up. i have a feeling that needs to be fixed before our cruft repo cleanup step in the maintenance
19:49:43 <jeblair> we don't really have anything big there until mid-jan
19:49:44 <zaro> i'm available, except new years weekend.
19:49:46 <nibalizer> so december 16th?
19:49:55 <pleia2> I'm around all month
19:50:14 <anteaya> the 16th is two weeks from tomorrow
19:50:22 <jeblair> dec16 wfm
19:50:23 <anteaya> I'm fine with that
19:50:34 <notmorgan> zaro: is your proxypass review still up?
19:50:40 <fungi> 16th works for me. what was the start time for the previously scheduled maintenance? we could just shoot for that again
19:50:45 <notmorgan> zaro: so i don't need to go re-creating from scratch
19:50:46 <jeblair> mordred: how's dec16 for you?
19:50:51 <mordred> looking
19:51:04 <fungi> nibalizer: you want to have another go at the maintenance announcements for this?
19:51:10 <jeblair> mordred: (hopefully your rollback is self-sufficient at this point, but in case we need some of your brain, would be nice to have)
19:51:16 <nibalizer> fungi: i do
19:51:16 <mordred> jeblair: I may be driving/roadtrip - but I may also be available - or could make myself so
19:51:24 <nibalizer> what time are we gonna do it
19:51:28 <mordred> jeblair: oh. wait. you may want my BRAIN?
19:51:32 <fungi> #action nibalizer send announcement about rescheduled gerrit upgrade maintenance
19:51:33 <jeblair> mordred: i can volunteer to call you and tell you to pull over
19:51:33 <zaro> notmorgan: #link https://review.openstack.org/#/c/243879/
19:51:38 <notmorgan> zaro: thnx
19:51:44 <anteaya> 1700 utc: http://lists.openstack.org/pipermail/openstack-dev/2015-November/078113.html
19:52:05 <fungi> thanks anteaya
19:52:08 <anteaya> welcome
19:52:13 <fungi> nibalizer: so starting at 1700 utc on wednesday december 16th
19:52:18 <nibalizer> woot
19:52:23 <mordred> jeblair: sandy can drive - I can tether
19:52:29 <zaro> nibalizer: same time as last time
19:52:45 <fungi> #agreed Gerrit 2.11 upgrade rescheduled to 17:00 UTC Wednesday December 16th
19:52:50 <mordred> woot
19:52:58 <pleia2> \o/
19:53:17 <fungi> i think krotscheck/greghaynes have one topic on the agenda as well, if we can switch to that for the remaining few minutes
19:53:30 <anteaya> tc meeting is also canceled today
19:53:34 <krotscheck> I put it on the agenda without checking with greghaynes.
19:53:36 <greghaynes> I do?
19:53:38 <Clint> ha
19:53:40 <fungi> #topic Mirror efforts (krotscheck, greghaynes)
19:53:40 <greghaynes> Oh
19:53:41 <fungi> heh
19:53:49 <krotscheck> Surprise!
19:53:58 * ruagair has maniphest updates too.
19:54:08 <jeblair> greghaynes: okay what's this all about?
19:54:12 <jeblair> ;)
19:54:14 <krotscheck> We've got a bunch of mirror things in queue. First is greghaynes's wheel-mirror work.
19:54:18 <fungi> what's all this then?!?
19:54:19 <mordred> (the TC meeting isn't happening, so if we go over, we won't break anything)
19:54:23 <greghaynes> Hehe
19:54:43 <jeblair> (i quickly replaced the tc meeting with another meeting so will be half-here)
19:54:54 <fungi> i think i'm +2 on all the wheel mirror patches for whatever that's worth
19:55:00 <krotscheck> Is there any coordination that needs to be done to land that other than landing the patches, jobs, and spinning up the instances?
19:55:34 <greghaynes> The instances exist, hiera data needs to get added for ssh host keys though
19:55:37 <fungi> i can help spin up the mirror build instances for that, though more than happy for someone else to volunteer
19:55:43 <krotscheck> #link https://review.openstack.org/#/q/status:open+branch:master+topic:feature/wheel-mirror,n,z
19:55:49 <fungi> oh, we have the instances? even better
19:55:58 <greghaynes> I thought you made them ;)
19:56:09 <krotscheck> Yay meetings!
19:56:32 <fungi> er, heh
19:56:51 <krotscheck> greghaynes: The one patch I seem to be missing is one that actually starts using our wheel mirrors, is that somewhere?
19:56:51 <mordred> what are they called?
19:56:59 <fungi> greghaynes: if i made them, then they should respond to ping via whatever the dns names are
19:57:15 <krotscheck> mordred: /.*wheel-mirror-.*\.openstack\.org/
19:57:23 <mordred> no. they do not exist
19:57:32 <mordred> I can help make them
19:57:41 <greghaynes> fungi: ya, I might have misinterpreted something we chatted about when we were figuring out host keys
19:57:43 <fungi> krotscheck: i believe using the wheel mirrors is automagical, or seemed to be last time we tried to do this
19:58:02 * krotscheck likes magic.
19:58:04 <fungi> some magicsauce in pip that knows to check specific paths for possible wheels?
19:58:15 <krotscheck> dstufft would know
19:58:18 <fungi> wheelpeople presumably know more than i do about this, yeah
19:58:19 <greghaynes> krotscheck: there isn't one AFAIK
19:58:30 * nibalizer has to bounce right at noon, sorry
19:58:32 <greghaynes> extra-index-url
19:58:39 <greghaynes> Is what you want
19:58:53 <krotscheck> In the interest of coordination, can we schedule a date/time to babysit these patches through and make sure the world doesn't explode? I'm available before noon PDT most days.
19:59:00 <fungi> oh, so we do need an extra-index-url with the platform name encoded?
19:59:10 <greghaynes> fungi: yep
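For anyone following along, extra-index-url is ordinary pip configuration; a minimal sketch with a hypothetical per-platform wheel mirror URL (the real mirror paths aren't settled in this discussion):

    # /etc/pip.conf -- the mirror URL below is hypothetical
    [global]
    extra-index-url = https://wheel-mirror.example.org/ubuntu-14.04-amd64/

    # equivalent one-off form:
    #   pip install --extra-index-url https://wheel-mirror.example.org/ubuntu-14.04-amd64/ <package>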
19:59:27 <krotscheck> Unless someone has a major issue with them, that is.
19:59:35 * krotscheck spends the rest of the day sitting on actual babies.
19:59:46 <mordred> mmm. sitting
19:59:55 <fungi> seems like they're pretty much all in shape last i looked, though i and other reviewers could certainly have missed something
19:59:59 * mordred can help with the root portions of this - is also available in mornings
20:00:19 <krotscheck> mordred, greghaynes: How does tomorrow morning around 10AM PDT sound?
20:00:32 <fungi> do we have a volunteer to build the instances who isn't me? otherwise i'll try to do that after the meeting now that i don't have a tc meeting to lurk
20:00:51 * krotscheck is assuming mordred is that volunteer.
20:00:54 <fungi> and also sounds like we need to update hiera for ssh keys
20:00:56 <greghaynes> krotscheck: works for me
20:01:22 <mordred> fungi: yes. I will do that
20:01:43 <mordred> krotscheck: tomorrow morning I will be on an aeroplane
20:01:47 <mordred> krotscheck: can we do friday morning?
20:01:48 <fungi> thanks mordred. in that case i'll work on the stable maintenance electorate list stuff instead since that's also urgent
20:01:59 <anteaya> yes I want to vote
20:02:00 <greghaynes> Friday also works
20:02:06 <fungi> as it turns out we do have >1 candidate for ptl
20:02:09 <krotscheck> greghaynes, mordred: Ditto. 10AM PDT Friday
20:02:28 <mordred> fungi: also, I need electorate lists for N and O name elections while you're at it
20:02:40 <Clint> krotscheck: i think you mean PST
20:02:59 * mordred stabs daylight savings time in the face
20:03:02 <fungi> mordred: get up with me after this and remind me what i gave you last time (or whether you got it from the foundation site admins instead)
20:03:19 <krotscheck> Clint: You are correct.
20:03:23 <krotscheck> 10AM PST Friday
20:03:45 <krotscheck> 1800 UTC Friday
20:03:59 <krotscheck> I'll leave the remaining mirror things on the agenda for next week.
20:04:11 <krotscheck> I figure that we'll do a similar coordination thing there.
20:04:28 <fungi> sounds good
20:04:58 <mordred> krotscheck, fungi: do we have a doc of what server(s) we need?
20:05:12 * krotscheck defers to greghaynes
20:05:40 <fungi> mordred: theu
20:05:42 <fungi> grrr
20:05:54 <fungi> they're in one of the changes but yes get greghaynes to point you to the list
20:05:59 <mordred> kk
20:06:03 <fungi> i don't recall the exact one
20:06:09 <fungi> we're over time, but if people want to stick around ruagair had some maniphest things to mention since we have the room what with there being no tc meeting this week
20:06:12 <greghaynes> mordred: Not a doc, should be findable in the change but I need to mentally re-page in where
20:06:37 * anteaya is willing to stick around to listen to ruagair
20:06:43 <krotscheck> EOT for me.
20:06:50 <krotscheck> Go go maniphest things
20:06:52 <fungi> #topic Priority Efforts: maniphest migration (ruagair)
20:06:56 <ruagair> \o/
20:07:01 <ruagair> Lots of updates.
20:07:28 <ruagair> Phab + OpenID (login.ubuntu.com) works nicely using mod_auth_openid
20:07:37 <krotscheck> nice
20:07:59 <ruagair> Only snag is #link https://github.com/bmuller/mod_auth_openid mod_auth_openid is abandonware.
20:08:30 <ruagair> So we'd need to consider whether we want to adopt it.
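For context, mod_auth_openid hooks in as a normal Apache authentication module; a minimal sketch of protecting a login path, with the path purely illustrative (the actual Phabricator vhost isn't shown here):

    <Location "/auth/login/">
        # mod_auth_openid provides the OpenID auth type
        AuthType OpenID
        Require valid-user
    </Location>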
20:09:02 <ruagair> I'm currently working on the last piece of the migration process:
20:09:30 <ruagair> Scraping OpenIDs from launchpad to insert into Phab.
20:10:06 <fungi> it's too bad clarkb had to drop out, but sync up with him since i know he's looked at it too
20:10:06 <ruagair> Once I've completed that, I'm happy to open up the instance more broadly than I already have.
20:10:27 <fungi> on mod_auth_openid being abandonware that is
20:10:28 <ruagair> I think clarkb promised to adopt it for fun, fungi :-)
20:10:37 <fungi> heh
20:11:18 <yolanda> hi, wanted to show a change for glean, for infra-cloud efforts: https://review.openstack.org/#/c/252037/
20:11:20 <ruagair> I'm rewriting this #link https://review.openstack.org/#/c/240795/ into python to have a more complete process.
20:11:36 <yolanda> sorry ruagair, i interrupted :(
20:11:48 <ruagair> Some of it will get spun off into ansible and puppet, of course.
20:11:49 <fungi> yeah, no updates of substance for 18 months, and no real activity for over 2 years
20:11:54 <ruagair> No probs yolanda :-D
20:11:58 <anteaya> wasn't that one of the questions from last week about phabricator use?
20:12:08 <ruagair> There was, anteaya.
20:12:17 <anteaya> as in who is currently using it and what is their understanding for doing so?
20:12:33 <ruagair> Currently I have two users, yolanda and GheRivero who are using a stable instance I have up.
20:12:51 <yolanda> ruagair, not much activity on that in the last few weeks
20:12:57 <yolanda> i believe GheRivero was poking a bit more
20:13:09 <ruagair> :-)
20:13:33 <ruagair> sdague is rather keen to use Phab as soon as I think OpenID is integrated in a prod fashion.
20:13:45 <ruagair> Which is not far off.
20:14:38 <fungi> aside from the lack of activity or upstream maintenance on mod_auth_openid were there any other concerns with it? does it have any missing features/bugs that you found?
20:14:50 <ruagair> Not that I have found.
20:15:07 <ruagair> It worked rather trivially.
20:15:20 <fungi> if not, then we could just use it and revisit adopting it if it still lacks an upstream once we find something we need to improve with it
20:15:37 <ruagair> That's what my thoughts were.
20:15:58 <ruagair> It's time I put up an etherpad on this I think.
20:16:04 <ruagair> List off the status and issues.
20:16:22 <fungi> sounds good. anything else on this topic before i give yolanda the floor for infra-cloud needs?
20:16:34 <ruagair> EOT.
20:16:36 <jeblair> o/
20:16:48 <yolanda> fungi, i have fat fingers, the interruption was not intended... so now, infra-cloud: https://review.openstack.org/#/c/252037/
20:16:52 <jeblair> i just wanted to make sure we're on the same page about usage --
20:17:12 <jeblair> that we definitely want people to be able to poke at test instances and stuff
20:17:19 <fungi> yolanda: jeblair still had a point for the current topic
20:17:28 <fungi> i'll switch the topic once we're ready
20:17:29 <jeblair> but that we don't want to do ad-hoc hosting projects in phab
20:17:32 <yolanda> ok
20:17:39 <ruagair> I agree jeblair.
20:17:43 <jeblair> cool
20:17:50 <fungi> right, there was a question last week on that
20:18:13 <ruagair> Yes, I was fighting kangaroos at that time :-/
20:18:14 <fungi> something in an earlier status update about people already using maniphest in anger because of not wanting to continue on lp any longer
20:18:42 <jeblair> right, i don't think that was happening but we were jumping to conclusions based on lack of data :)
20:18:42 <fungi> crocodile wrestling is no longer the national pastime? it's roo fights now?
20:18:59 * fungi updates his notes
20:19:17 <fungi> #topic Priority Efforts: infra-cloud (yolanda)
20:19:24 <ruagair> No, that's not a reality, fungi. We have yolanda and GheRivero poking and that's it. sdague *wants* to use it seriously but is not at this point.
20:19:35 <fungi> #link https://review.openstack.org/252037
20:19:47 <yolanda> ok so we got some races on glean
20:20:02 <yolanda> related on the events where glean was executed
20:20:19 <yolanda> rcarrillocruz created that fix, we have tested that intensively on our environment, and we got rid of the races
20:20:49 <yolanda> i wanted to show that, and ask for reviews
20:21:16 <yolanda> greghaynes, can you add more details to that review?
20:21:27 <sdague> fungi: fwiw, my declared desire for maniphest is to get the kanban board for visualizing and tracking cross project efforts, which launchpad is not really suitable for. And our current approach is ghetto kanban in etherpad.
20:21:28 <fungi> okay, so that addresses a bug which is impeding infra-cloud deployment in one of our regions
20:21:44 <fungi> thanks for the info and the fix rcarrillocruz, yolanda
20:21:48 <rcarrillocruz> in other news, we consistently deploy 90 out of 93 machines in East
20:22:07 <greghaynes> yolanda: I dont remember all the details - I just put a review asking for them ;)
20:22:27 <lifeless> sdague: fwiw have you looked at the lp->kanban thing ?
20:22:33 <lifeless> sdague: which renders bugs as cards ?
20:22:58 <yolanda> greghaynes, so the issue is that glean was executed when a network interface was detected
20:23:20 <yolanda> in our deployments, it showed a constant race, causing the vlan information not to be created if that interface was not detected
20:23:42 <greghaynes> yolanda: ah, right, it wasnt a race - we just dont do the dependency detection
20:23:46 <rcarrillocruz> greghaynes: no, it's not about the vlan interfaces. The fix i pushed and got merged already created vlan interfaces attached to physical interfaces
20:23:53 <greghaynes> oh?
20:23:57 <greghaynes> then I am confused
20:23:58 <yolanda> switching that to the networking start event, and configuring all interfaces there at the same time, proved to solve our issue
20:24:03 <greghaynes> anyhow, we can chat out of meeting?
20:24:07 <rcarrillocruz> sure
20:24:10 <rcarrillocruz> i can stay a bit online
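A sketch of the shape of fix being described: run glean once before networking comes up and let it configure every interface (including vlans) in a single pass, rather than reacting to per-interface udev events. The unit name, path, and target below are illustrative, not necessarily what 252037 actually does:

    # /etc/systemd/system/glean.service (illustrative)
    [Unit]
    Description=Write config for all network interfaces from config-drive
    Before=network-pre.target
    Wants=network-pre.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/glean.sh

    [Install]
    WantedBy=multi-user.target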
20:24:38 <SotK> sdague: we've also merged a first pass of kanban board stuff in SB now, with patches to make it more usable in review, that you could check out if you like
20:25:01 <fungi> okay, so seems like we've made good use of our extra meeting time. anything else we need to cover this week before i #endmeeting?
20:25:07 <SotK> (it's not very discoverable yet though, since it's not entirely merged)
20:25:56 <anteaya> thanks fungi
20:26:02 <fungi> thanks everybody!
20:26:08 <fungi> #endmeeting