19:00:13 <clarkb> #startmeeting infra
19:00:14 <openstack> Meeting started Tue Oct 17 19:00:13 2017 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:18 <openstack> The meeting name has been set to 'infra'
19:00:20 <jlvillal> o/
19:00:22 <ianw> o/
19:00:43 <clarkb> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:00:53 <pabelanger> hello
19:01:17 <clarkb> #topic Announcements
19:02:07 <clarkb> TC election is happening now. Don't forget to vote
19:02:23 <clarkb> And late Sunday we put zuulv3 back into production (where it has been since)
19:02:44 <pabelanger> \o/
19:02:54 <AJaeger> and running nicely!
19:02:57 <dmsimard> It went better than the first time :D
19:03:01 <dmsimard> Props to all the hard work
19:03:22 <fungi> yes, it's running amazingly!
19:03:24 <dmsimard> Thanks for beta testing zuul v3 so we don't run into that kind of problem when we roll it out for RDO :D
19:04:00 <clarkb> yup much much happier this time around
19:05:53 <clarkb> there weren't any actions from last meeting
19:06:02 <clarkb> and I don't think there are any specs to talk about so skipping ahead
19:06:10 <clarkb> #topic Priority Efforts
19:06:15 <clarkb> #topic Zuul v3
19:06:31 <clarkb> there were a couple items we wanted to talk about post production.
19:06:44 <clarkb> The first is whether or not we should be backing up secret keys for v3 secrets
19:07:23 <clarkb> jeblair: ^ you added this one to the agenda
19:07:32 <clarkb> basically people are wondering what the bus factor of their secrets in zuul is
19:07:47 <pabelanger> seems like a good idea
19:07:53 <clarkb> infra roots have the ability to decrypt these secrets so that helps bus factor but if we lose the private keys we won't be able to decrypt the secrets anymore
19:08:12 <pabelanger> would that be just adding it to bup?
19:08:16 <clarkb> pabelanger: ya I'm leaning towards backing them up simply because it will help avoid functionality outages if we lose them for some reason (everyone will have to rekey)
19:08:24 <tobiash> I experimented a bit with deriving them from a master key (because I want to avoid a backup)
19:08:55 <jeblair> sorry i'm late
19:09:45 <jeblair> yeah.  we could tell folks that if we lose the server, they'll need to update their secrets.
19:09:51 <ianw> it is an additional "data at rest" exposure ... but there's already interesting stuff "at rest" in there anyway?
19:10:07 <jeblair> however, i think that looks kind of bad, so i lean toward us backing them up
19:10:07 <persia> Do I understand correctly that the data that is being considered for backup has already been posted to gerrit reviews when added (in encrypted form)?  If so, I would argue backing it up is no less secure.
19:10:10 <jeblair> ianw: that's true
19:10:15 <fungi> i suppose that puts infra in the (unfortunate?) position of being the keeper of everyone else's keys, in case they forget them?
19:10:32 <clarkb> persia: its the secret keys needed to decrypt the data that has already been put in git
19:10:35 <SamYaple> persia: no this would be the secret keys to decrypt that data
19:10:35 <jeblair> tobiash: i really like that idea -- if it's just one key, that opens up more options (offline storage, etc)
19:10:46 <clarkb> fungi: its already that way, zuul generates the keys not the user aiui
19:10:54 <jeblair> fungi: yes; we sort of already are though.  because of what clarkb says
19:11:16 <pabelanger> persia: no, this data is just stored on HDD of zuulv3.o.o currently
19:11:33 <jeblair> persia: so it's an additional exposure.  currently the secret keys are only on the zuul server; they would additionally be in our backups
19:11:46 <tobiash> I have a script which takes a private key and main.yaml and generates the keys for every repo
19:11:54 <fungi> clarkb: right, i mean you have a team responsible for access to some service, they give us an encrypted copy of that, then they have a lot of team turn-over and one day they discover they need to make some adjustments to the service but no longer know the (plaintext) password/whatever
19:12:04 <persia> Given that architecture, I think it should be backed up, as one HDD is an unreliable store.  Ideally, it is encrypted for backup, with the decryption keys more widely known (maybe infra-root private keys stored on private infra?)
19:12:06 <pabelanger> what about moving zuul keys in to hiera, and having ansible manage them? Like any other secret we add to a server
19:12:07 <tobiash> I can share it tomorrow if there is interest
19:12:30 <jeblair> pabelanger: we need a new key for every project.
19:12:32 <fungi> they'll end up asking the infra team to decrypt our copy and provide it to someone because they didn't keep track of it themselves well enough
19:12:39 <clarkb> fungi: thats a good point
19:13:15 <clarkb> it might be worth backing up the keys just to avoid zuul "outages" if we lose them forcing everyone to reencrypt but at the same time ask teams to have some other method of data retention for their own purposes?
19:13:16 <fungi> i don't know that's necessarily something we can (or want to?) avoid
19:13:21 <jeblair> tobiash: yeah, if you could post that, that'd be great; let's take a look at it and see if we should redesign the key generation in zuul for that.
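[Context for the log: tobiash's script itself isn't included here. As a rough, hedged illustration of the idea under discussion (deriving every project's key material from one master secret so only that secret needs offline backup), a minimal Python sketch follows; the HMAC-based scheme, the function names, and the main.yaml layout it assumes are illustrative assumptions, not zuul's actual key-generation code.]

    # Hedged sketch only: derive deterministic per-project key material from a
    # single master secret, so only that secret needs to be backed up offline.
    # This is NOT zuul's implementation or tobiash's script; the scheme and the
    # assumed main.yaml layout are illustrative.
    import hashlib
    import hmac

    import yaml  # assumes PyYAML is available


    def derive_project_seed(master_key: bytes, project_name: str) -> bytes:
        """Return 32 bytes of deterministic key material for one project."""
        return hmac.new(master_key, project_name.encode("utf-8"),
                        hashlib.sha256).digest()


    def seeds_for_all_projects(master_key: bytes, main_yaml_path: str) -> dict:
        """Walk a tenant config (layout assumed) and derive one seed per project.

        The seed would then feed a deterministic RSA key generator; that part
        is the hard bit and is deliberately not shown here.
        """
        with open(main_yaml_path) as f:
            tenants = yaml.safe_load(f)
        seeds = {}
        for entry in tenants:
            sources = entry.get("tenant", {}).get("source", {})
            for connection in sources.values():
                for kind in ("config-projects", "untrusted-projects"):
                    for project in connection.get(kind, []):
                        # entries may be plain strings or one-key dicts
                        name = project if isinstance(project, str) else next(iter(project))
                        seeds[name] = derive_project_seed(master_key, name)
        return seeds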
19:13:37 <pabelanger> jeblair: yah, i can also see that being a bottleneck for adding new projects
19:13:40 <dmsimard> fungi mentions turn over which is an interesting bit -- if we revoke an infra-root access, do we have to rotate the key used to decrypt secrets ? How does that work ?
19:13:55 <fungi> but we may still want to remind people very plainly in documentation that we're not taking on the responsibility of being the long-term storage for their account credentials in case they lose them
19:14:07 <jeblair> clarkb, fungi: yes, as a matter of policy, i don't think we should decrypt anything for folks.  we don't really have a good trust path.  i think we should mostly focus on how this affects our service.  i mostly don't want egg on our faces.  :)
19:14:20 <persia> For mass outages, decrypting from backup is nice.  For individual outages, probably best to advise the project to rekey (unless they have a really good reason not to do so), to reduce the number of times infra folk have to see project secret keys.
19:14:31 <jeblair> dmsimard: only if we decide we don't trust the person any more.  that hasn't happened yet.
19:14:37 <fungi> we just happen to have copies so their jobs can use them
19:14:46 <jeblair> dmsimard: however, we *do* want to be able to do key rotation eventually.  zuulv3 doesn't support that yet.
19:14:49 <pabelanger> dmsimard: true, but we have that problem today with our cloud credentials too
19:14:50 <dmsimard> jeblair: fair enough
19:14:54 <ianw> dmsimard: i don't think we've rotated all the cloud keys etc on removal, but i don't think anyone has left in circumstances they're considered compromised
19:15:13 <clarkb> ok how about we make the assertion in documentation that fungi suggests. Then also backup the keys so that we don't break people's jobs if we derp on our end
19:15:36 <ianw> ++ and investigate the single key thing from tobiash
19:15:38 <jeblair> clarkb: ++.  then let's look at tobiash's idea and see if we want to change how we do this later.
19:15:44 <fungi> persia: yes, i agree. mostly wanting to avoid the situation where they've locked themselves out of some account and want us to help them recover from that because we have a copy of the credentials they've forgotten
19:15:56 <pabelanger> ++
19:16:05 <persia> fungi: I fully support your recommendations for documentation :)
19:16:06 <SamYaple> i only brought it up for the hit-by-a-bus scenario
19:16:32 <clarkb> #agreed Document that we are not serving as backups for $project secrets. Backup the keys so that breakage on infra end doesn't force everyone to reencrypt their secrets.
19:16:45 <clarkb> if anyone disagrees with ^ let me know and I can undo
19:16:47 <persia> If nothing else, I suspect infra-root has no reliable way to identify whether a key requestor represents an identity that should be permitted the key.
19:17:13 <jeblair> persia: yep, that's a substantial blocker
19:17:19 <ianw> persia: if they can't keep the key, i bet they can't keep a gpg key or something to authenticate anyway :)
19:17:27 <jeblair> clarkb: ++
19:17:50 <persia> Yes, hence the documentation should assert that infra-root *will not* provide secret keys to others, rather than just that this isn't the purpose of the backup.
19:17:56 <fungi> SamYaple: yes, if your project lead is hit by a bus and is the only one who knew the password, that's bad (and not something you should expect us to protect you from)
19:18:48 <SamYaple> fungi: no argument here. it was a question of if we could rely on infra for that. the answer is "no". perfectly acceptable :)
19:18:54 <persia> If a project is concerned about bus-factor, presumably the project lead can (securely) transmit the secret to a delegate.  Worst case, project rekeys.
19:19:31 <pabelanger> not to derail, but which password are we talking about? I don't think our keys support that today, right?
19:19:52 <jeblair> pabelanger: the "password" that a user has encrypted as a secret
19:19:59 <fungi> i'll leave myself a todo to draft up a blurb for the openstack-specific job documentation we have
19:20:07 <pabelanger> jeblair: Ah, thank you
19:20:19 <clarkb> fungi: thanks
19:20:45 <ianw> i can look at the backups if nobody else wants too
19:20:47 <clarkb> the other topic that fungi brought up was how safe is it to modify scripts that v2 was using so that they function with v3
19:20:51 <fungi> pabelanger: basically discouraging people from abusing zuul as an encrypted safe for their service credentials in case they forget them later (because we can't safely be that)
19:20:57 <jeblair> ianw: thanks
19:20:57 <clarkb> ianw: thanks, I think that would be helpful
19:21:08 <ianw> ok, will do
19:21:13 <pabelanger> fungi: yah, I personally don't want to know what that data is :)
19:21:14 <clarkb> #action fungi write up docs for secret backup policy
19:21:21 <fungi> thanks clarkb
19:21:24 <clarkb> #action ianw look into backing up zuul secrets
19:21:38 <jeblair> i think there's 2 related questions:
19:21:48 <jeblair> 1) when do we demolish the v2 servers
19:21:53 <jeblair> 2) when do we demolish the v2 config
19:22:11 <fungi> i think 2 happens before 1
19:22:32 <fungi> because 2 is easier to undo (it's "just" a matter of git reverts)
19:22:46 <fungi> once we 1, going back is _much_ harder
19:23:00 <clarkb> if we want to be ultra conservative we could say after the feedback session in sydney just in case there is some really important thing brought up
19:23:00 <fungi> (it's "just" a matter of rebuilding a bunch of servers)
19:23:10 <clarkb> but ya worst case we just rebuild servers and git revert
19:23:35 <jeblair> i think the configs have diverged so much that we can't really entertain the idea of supporting both any more
19:23:43 <ianw> i was finding the v2 config helpful as a comparison during initial transition, but not so much now
19:23:54 <clarkb> ianw: ya that was useful for the thing fungi was debugging today too
19:24:01 <clarkb> but git log -p is a hammer to get that data back
19:24:12 <pabelanger> we also have new servers to build, like migrating nodepool-builders to python3
19:24:21 <jeblair> so honestly, i lean toward saying keep the servers up until next week, but probably within a few days, declare the config dead and remove it.  (it's in the git history for reference)
19:24:23 <fungi> do we really expect that if we're still on v3 by the time the forum rolls around (seems likely this time), anyone is likely to say anything there to cause us to go back to v2?
19:24:37 <clarkb> fungi: no
19:24:43 <pabelanger> jeblair: ++
19:24:47 <jeblair> i personally think we passed that point yesterday :)
19:24:49 <fungi> clarkb: yes, i can easily git checkout an older commit ;)
19:24:54 <AJaeger> ;)
19:24:58 <fungi> if a similar question arises
19:25:03 <jeblair> but am willing to entertain the idea that something catastrophic could still happen this week that would warrant us going back
19:25:06 <pabelanger> Yah, I'd be shocked if we decided to rollback now
19:25:09 <dmsimard> fungi: unless someone force pushes :P
19:25:28 <ianw> jeblair's plan ++
19:25:29 <pabelanger> like we had a large gap in testing
19:25:37 <fungi> dmsimard: yeah, let's not (is my answer to that)
19:26:06 <clarkb> jeblair: ya that is reasonable. We should probably send an email with those details to the dev list with a strong "speak up now" message if someone believes we need to revert
19:26:14 <pabelanger> next week also works well for me, as I am traveling to conference for next 3 days
19:26:41 <fungi> oh, so here's one potential impact... is jjb still testing against our project-config data?
19:26:49 <jeblair> fungi: you okay cutting off our retreat (or at least making it really difficult) next week?
19:26:56 <clarkb> mostly concerned that there seemed to be more fuming than communication the last time around so want to be careful to give everyone a chance to speak up
19:27:03 <AJaeger> fungi: no, not anymore
19:27:03 <dmsimard> The only thing I could see warranting a rollback would be a security concern of sorts (like a nodepool or executor exploit), but it'd be hard to believe we couldn't mitigate it quickly or shut things down temporarily while we address the issue
19:27:03 <fungi> jeblair: yes, i'm full steam ahead
19:27:14 <AJaeger> fungi: jjb testing was not enabled with v3.
19:27:32 <AJaeger> fungi: I can make the freeze test from non-voting to voting
19:27:33 <jeblair> AJaeger: shouldn't there be a legacy job?
19:27:38 <fungi> AJaeger: oh, they're not running jobs on jjb changes any longer?
19:27:56 <AJaeger> AFAIK those don't run currently - but we should be able to do that...
19:28:06 <pabelanger> clarkb: yah, I haven't heard of any fuming this time around
19:28:25 <AJaeger> I can patch - either freeze them or gate on jjb. What's the preference?
19:28:58 <fungi> anyway, the jjb team may want to copy a representative sample of configuration from a pre-deletion point in the project-config history, for use in their testing (assuming they still rely on it)
19:29:06 <jeblair> AJaeger: we're talking about jobs that run on the jjb repo, right?
19:29:49 <AJaeger> jeblair: no, I talk about jobs running on project-config/jenkins/
19:30:08 * fungi is specifically talking about jobs running against changes to the openstack-infra/jenkins-job-builder repo
19:30:18 <AJaeger> then ignore me...
19:30:32 <jeblair> fungi: so to the original question, i think maybe avoiding changing the jenkins scripts (at least in an incompatible way) until next week would be good?
19:31:04 <jeblair> fungi: though of course, those jobs really just need to be replaced anyway... so maybe the v3 fix should be to copy the script into a role and iterate from there or something...
19:31:06 <AJaeger> just checked, jenkins-job-builder has jobs running - python-jobs basically
19:31:06 <fungi> they did, at least at one point, test against a copy of our configuration to avoid breaking us, which if we're past the point of no return on v3 is no longer necessary for us but deleting those files could disrupt their jobs if they're still doing it that way
19:31:38 <fungi> but sounds like maybe they stopped after we pinned to an old release?
19:32:37 <clarkb> jeblair: the copy idea makes sense to me
19:32:48 <clarkb> jeblair: as that sounds like a prereq to deleting everything anyways
19:32:57 <AJaeger> fungi: I don't see such a job currently for the jjb repo
19:32:59 <fungi> yeah, i don't see the old validation jobs running against jjb changes in the v3 config
19:33:07 <fungi> so this is probably a moot point
19:33:53 <jeblair> clarkb: indeed
19:34:03 <pabelanger> yah, we could start moving jenkins scripts directly into playbooks, that worked well when we started migrating them to ansible
19:34:41 <clarkb> So keep old servers and config around for another week. Migrate scripts used from jenkins/scripts into playbooks. Send email now telling people we plan to completely delete v2 (as much as you can with git) next week and to speak up now with concerns?
19:35:05 <jeblair> ++
19:35:09 <fungi> #link https://review.openstack.org/378046
19:35:17 <fungi> that's when they were removed, looks like
19:35:51 <pabelanger> so, would we be moving them in to ozj? And if so, it would make iterating on them much easier
19:35:54 <clarkb> #agreed Keep old servers and config around for another week. Migrate scripts used from jenkins/scripts into playbooks. Send email now telling people we plan to completely delete v2 (as much as you can with git) next week and to speak up now with concerns.
19:36:02 <pabelanger> as long as they are not needed in trusted context
19:36:05 <clarkb> again let me know if you don't agree with ^ :)
19:36:30 <dmsimard> clarkb: could we make a tag before deletion ?
19:36:36 <dmsimard> clarkb: like jenkins-eol or something
19:36:44 <clarkb> jeblair: do you want to send the its done/dead Jim email?
19:36:45 <dmsimard> to make things easier if we want to look for it.
19:36:52 <jeblair> heh, have we ever tagged project-config?
19:37:07 <clarkb> jeblair: I don't think so, but I suppose we can, we can also tag it at any point in the future too
19:37:31 <jeblair> clarkb: sure i'll send it, but i won't be able to say "It's dead, Jim."  :(
19:37:36 <dmsimard> a tag is just a friendly label on a commit sha1
19:37:40 <dmsimard> and easier to find :)
19:38:26 <jeblair> i'm okay having our first-even tag on project-config
19:38:30 <clarkb> ya makes diffing easier to have a friendly name
19:38:33 <jeblair> first ever even
19:38:39 <clarkb> git diff HEAD..jenkins-eol
19:38:45 * dmsimard nods
19:39:29 <dmsimard> Switching from laptop to phone for a bit, will follow along.
19:39:31 <fungi> farewell-jenkins
19:39:48 <pabelanger> +1
19:40:19 <jeblair> like 'farewell sydney monorail'
19:40:31 <fungi> heh
19:40:38 <clarkb> ok sounds like we have a plan. Any other v3 related items?
19:40:48 <clarkb> we are running out of time and there are some other things on the agenda
19:41:03 <jeblair> ++
19:41:19 <clarkb> ok moving on
19:41:26 <clarkb> #topic General Topics
19:41:34 <clarkb> #topic Ansible swap setup
19:41:43 <clarkb> #link https://review.openstack.org/#/c/499467/
19:41:51 <clarkb> ianw: ^ this is yours, are you just looking for reviews?
19:42:03 <ianw> well i was thinking, why does only d-g want swap?
19:42:14 <ianw> is it something we should move to the base job, behind a variable flag?
19:42:18 <ianw> is that a thing we are doing?
19:42:42 <ianw> we sort of have one flag in the "generate ara on failure" atm?
19:42:54 <jeblair> i think we should make it a role in one of the zuul*-jobs repos, but not put it in the base job
19:43:04 <pabelanger> we'll, I've wanted to use it for launch-node.py too
19:43:17 <jeblair> ianw: that flag isn't meant to be user-serviceable
19:43:48 <jeblair> i don't think our base jobs should end up being driven by variables; i think we should structure playbooks to use appropriate roles
19:44:06 <pabelanger> +1
19:44:15 <jeblair> (in other words, try to keep things more like ansible and less like zuul magic)
19:44:45 * fungi agrees
19:44:52 <ianw> ok, the only thing was i didn't want people missing it, and causing skew in jobs running with and without it
19:45:01 <dmsimard> I kind of want to put it in the base job
19:45:22 <dmsimard> Because it otherwise means that the config is not the same for all jobs
19:45:26 <jeblair> i can see how swap is certainly in a grey area.
19:45:29 <dmsimard> Depending on the provider
19:45:30 <fungi> i think people should only add it to jobs that need it
19:45:41 <clarkb> fungi: ya for swap I think ideal is jobs never swap
19:45:46 <fungi> since swap setup will add some time and i/o
19:45:47 <clarkb> so we only opt jobs into it if they need it
19:46:09 <dmsimard> Fair, it can be opt in
19:46:37 <ianw> so move it out of d-g into o-z-j seems like the plan ... and then ensure that d-g is turning it on
19:46:43 <clarkb> ianw: yup
19:46:44 <fungi> yes
19:46:51 <fungi> sgtm
19:46:58 <pabelanger> +1
19:47:01 <ianw> alright, will do /eot
19:47:07 <clarkb> #topic Unbound setup
19:47:08 <jeblair> well
19:47:13 <clarkb> oh do I need to undo?
19:47:16 <jeblair> pls
19:47:18 <clarkb> #undo
19:47:19 <openstack> Removing item from minutes: #topic Unbound setup
19:47:32 <jeblair> there's a change to move the v3-native devstack job into the devstack repo
19:47:33 <clarkb> apparently that doesn't reset the channel topic
19:47:42 <jeblair> it probably has a fork of the swap role
19:47:46 <clarkb> ah right
19:47:54 <jeblair> or maybe not, i'm not sure
19:48:04 <ianw> jeblair: it's not committed yet, and i don't think it was copied in when i looked
19:48:17 <clarkb> ok so also need to check if it needs to be in devstack or update devstack
19:48:23 <jeblair> at any rate, i think the thing to do is to put that role into ozj, use it in v3-native devstack job in devstack repo, and ignore devstack-gate.
19:48:44 <jeblair> basically, don't spend any more time polishing ansible in devstack-gate; focus on v3-native devstack instead
19:49:02 <clarkb> wfm
19:49:04 <ianw> ok, that sounds like a plan
19:49:07 <fungi> good plan
19:49:08 <clarkb> ok next topic then
19:49:13 <clarkb> #topic Unbound setup
19:49:21 <ianw> just another quick one from me
19:49:23 <clarkb> #link https://review.openstack.org/#/c/512153/
19:49:28 <ianw> this *was* in base ... caused problems
19:49:42 <ianw> do we want it back?  i mean, we're running without it and is it causing issues?
19:49:53 <ianw> i know there was a lot of pain figuring it out in the first place
19:50:05 <fungi> unbound was added to deal with intermittent issues
19:50:06 <jeblair> i think we needed it for... rax?  because of their dns blacklist?
19:50:07 <clarkb> we have had dns resolution failures in some clouds
19:50:13 <clarkb> jeblair: yes that was the origination of it
19:50:24 <AJaeger> sorry, late to the party: https://review.openstack.org/#/c/508906/ is merged and is the change that moves devstack job into devstack
19:50:28 <clarkb> I think we likely do want to make sure unbound is present on test nodes and caching
19:50:48 <ianw> so this is the dynamic setup bit, the ipv6/ipv4 resolvers
19:50:58 <fungi> and i think we're not yet back to a point where we have similar levels of openstack-health and elastic-recheck history on v3 to be able to say it's unneeded
19:51:06 <clarkb> ianw: ya that was to address problems with osic where we resolved via v4 through nat and that was flaky
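[Context for the log: the unbound role being discussed picks resolvers based on whether the node has working IPv6, so lookups don't have to go through a flaky v4 NAT when a v6 path exists. The sketch below is only a hedged Python illustration of that idea; the resolver addresses, the connectivity probe, and the config path are assumptions, not the actual project-config role.]

    # Hedged sketch only: choose unbound forwarders based on whether the node
    # has working IPv6, so DNS doesn't depend on a flaky IPv4 NAT path when a
    # v6 path exists. Addresses, the probe, and the path are assumptions; the
    # real setup lives in a project-config role, not this script.
    import socket

    V6_RESOLVERS = ["2001:4860:4860::8888", "2001:4860:4860::8844"]
    V4_RESOLVERS = ["8.8.8.8", "8.8.4.4"]


    def have_ipv6(probe_host="2001:4860:4860::8888", port=53, timeout=3):
        """Best effort: can we open a TCP connection to an IPv6 resolver?"""
        try:
            with socket.create_connection((probe_host, port), timeout=timeout):
                return True
        except OSError:
            return False


    def write_forwarders(path="/etc/unbound/forwarding.conf"):
        """Write an unbound forward-zone stanza pointing at the chosen resolvers."""
        resolvers = V6_RESOLVERS if have_ipv6() else V4_RESOLVERS
        lines = ["forward-zone:", '  name: "."']
        lines += ["  forward-addr: %s" % addr for addr in resolvers]
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")


    if __name__ == "__main__":
        write_forwarders()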
19:51:21 <jeblair> i don't think i've heard anything about that problem being fixed... in fact, last i heard was still "yep, that's a problem".  so we may want to assume it's still (when we least expect it) necessary.
19:51:22 <clarkb> I'm not aware of any clouds with that setup today but we may have that setup in the future ?
19:51:50 <ianw> ok, do we want to merge it then?  i'm just wary of adding things to base jobs in this transition period
19:52:14 <clarkb> I'm in favor of it since that was a really annoying dns problem to debug and fix. I think if we have a fix we should keep applying it to avoid the problem in the future
19:52:15 <ianw> i've put in testing notes, if people believe them
19:53:00 <fungi> thanks ianw!
19:53:06 <jeblair> +2 from me
19:53:08 <clarkb> I'll review that today if I have time before the dentist
19:53:19 <clarkb> #topic Nominated new config cores
19:53:39 <ianw> i've unworkflowed it since i just wanted to discuss it, review at leisure then, thanks
19:53:56 <clarkb> Just a heads up I nominated frickler jlk dmsimard and mnaser to be project-config and ozj and zuul-jobs cores
19:54:11 <AJaeger> welcome new cores. There're quite a few open reviews waiting for you ;)
19:54:11 <clarkb> I'd like to give them the +2 power sooner than later so if you want please respond to that thread
19:54:14 <jeblair> huzzah!
19:54:19 <fungi> welcome and thanks to our new job config core reviewers!
19:54:39 <clarkb> I have not yet added them to groups but response here seems to indicate I should just get that over with :)
19:54:40 <dmsimard> thread is here: http://lists.openstack.org/pipermail/openstack-infra/2017-October/005613.html
19:54:41 <jeblair> oh that's me.  /me sends email now
19:54:42 <AJaeger> clarkb: really more time? Go for it ;)
19:54:54 <fungi> clarkb: you're being so public about additions--i just added them in the past and announced it ;)
19:55:00 <clarkb> fungi: ha
19:55:07 <clarkb> ok I'll try to sneak that in before the dentist too
19:55:17 <clarkb> #topic Project Renames
19:55:21 <AJaeger> clarkb: do you want to cleanup the list as well? Some people are "retired"...
19:55:31 <clarkb> AJaeger: I think fungi was working on that, I will sync up with him on that
19:55:53 <fungi> ahh, yeah. it's still on my to do list
19:55:54 <clarkb> ok last week we tentatively said let's aim for 10/20 for project renames and see where we are at post zuulv3 deployment
19:56:15 <fungi> i'm still good with 10/20
19:56:33 <clarkb> Even though the deployment went well I'm feeling swamped with general pre summit stuff, v3 job fixes and other fixes. I worry that I won't have the ability to get prep work done.
19:56:38 <clarkb> if others are able to take that on that would be great
19:56:57 <ianw> if we're stopping gerrit, i might try that swap-out of infra-specs repo too
19:57:04 <clarkb> (also I'm told no TC meeting today so I think we'll go a few minutes long to cover all the things)
19:57:06 <fungi> nova-specs?
19:57:13 <ianw> the corrupt one
19:57:15 <dmsimard> Oh wow Sydney is Nov 6th, I didn't realize it was so soon
19:57:19 <ianw> what time is it scheduled?
19:57:21 <jeblair> something-specs
19:57:45 <clarkb> ianw: last meeting we said plan for 1600UTC but we can move that to accomodate you
19:58:31 <pabelanger> 10/20 should work for me
19:58:48 <jeblair> i don't have bandwidth to drive the rename but can support it.
19:59:05 <clarkb> jeblair: same here. I can be around on friday and help push buttons but worried I won't have time to help with prep
19:59:24 <clarkb> do we have volunteers to do the prep work and be ready for friday to do it?
19:59:41 <clarkb> sounds like maybe fungi ianw and pabelanger ?
20:00:04 * pabelanger checks calendar
20:00:05 <ianw> i can help with prep, but it's like 3am for me, so pabelanger can drive?
20:00:16 <fungi> yeah, i can pick it up
20:00:19 <clarkb> ianw: well I think we can shift it to later in the day too (also it is your saturday)
20:00:35 <clarkb> fungi: cool thanks
20:00:42 <pabelanger> I don't think I'll be able to start prep until 20:00 UTC, due to flight
20:01:31 <clarkb> I guess last question then is what time should we do it?
20:01:52 <clarkb> ianw: would you be around for fixing nova-specs if it was later in the day (no pressure since it's your saturday)
20:02:20 <ianw> if it's before about 9am my time that's ok (after that i have to play taxi for kid stuff)
20:02:27 <jlvillal> If anyone is bored and wants to review: https://review.openstack.org/#/c/509670/  This allows seeing the CRITICAL log level messages, if someone selects a log level to view. Has one +2 :)
20:02:50 <clarkb> ianw: what is that in UTC? 20:00?
20:02:56 * clarkb is bad at timezone math
20:03:47 <ianw> yeah we just switched to daylight saving which throws me out
20:03:55 <ianw> before 22:00UTC
20:04:34 <dmsimard> Are we 4 minutes overtime ?
20:04:45 <clarkb> dmsimard: we are, but no TC meeting so hoping to go over a few minutes and get this done
20:04:51 <dmsimard> ack
20:04:58 <clarkb> fungi: pabelanger ianw should we say 2000UTC then?
20:05:05 <clarkb> on Friday 10/20?
20:05:09 <fungi> that wfm
20:05:14 <dmsimard> making sure there's no one looking through the window of the meeting room with menacing eyes :)
20:05:15 <ianw> ok
20:05:17 <pabelanger> yah
20:05:29 <jeblair> wfm
20:05:33 <clarkb> #agreed project renaming to commence at 2200UTC Friday 10/20
20:05:43 <clarkb> #topic open discussion
20:05:51 <clarkb> ianw had an item for this
20:05:52 <ianw> clarkb: 2000 right?
20:06:01 <fungi> ianw: yes
20:06:02 <clarkb> oh yes
20:06:07 <clarkb> can I double undo?
20:06:10 <clarkb> #undo
20:06:10 <openstack> Removing item from minutes: #topic open discussion
20:06:12 <clarkb> #undo
20:06:13 <openstack> Removing item from minutes: #agreed project renaming to commence at 2200UTC Friday 10/20
20:06:22 <clarkb> #agreed project renaming to commence at 2000UTC Friday 10/20
20:06:27 <clarkb> #topic open discussion
20:06:38 <ianw> it's not important, but if we're not getting kicked out
20:06:54 <fungi> there's nobody waiting in the hallway
20:07:01 <ianw> conference schedule seems full, but is there interest in an infra-specific evening?
20:07:19 <clarkb> I think it would be nice to get out for dinner/drinks one evening
20:07:19 <pabelanger> always
20:07:35 <dmsimard> I'll take a sec to thank you for core nomination, means a lot to me :D
20:07:40 <clarkb> we can put up an etherpad and people can mark down availability
20:08:01 <clarkb> ianw: is that something you want to help organize being local?
20:08:12 * fungi trusts ianw to recommend australian cuisine
20:08:19 <ianw> yeah, i was thinking maybe something more on the move if people want
20:08:31 <ianw> catch a ferry, walk across the bridge, hit a pub in the rocks type thing?
20:08:38 <ianw> or, i can just find a sit-down option too
20:08:41 <fungi> yes please
20:08:45 <fungi> to either
20:08:46 <clarkb> ianw: I trust your judgement, that sounds awesome
20:09:10 <dmsimard> nothing with venomous animals or critters :P
20:09:13 <fungi> i'm good with the "hit a pub" option especially, but happy to do whatever
20:09:21 <jeblair> dmsimard: let's not overly limit our options!
20:09:35 <clarkb> why don't we put up an etherpad to list availability and ianw can list options there?
20:09:39 <ianw> alright, i'll do an etherpad and send something
20:09:42 <fungi> dmsimard: that may eliminate much of that country
20:09:44 <clarkb> awesome thanks!
20:09:47 <dmsimard> I unfortunately won't be at the summit, but you guys have fun :)
20:09:54 <clarkb> dmsimard: we'll miss you
20:10:00 <clarkb> and with that our meeting is 10 minutes over and I think done
20:10:04 <clarkb> thanks everyone
20:10:07 <fungi> thanks clarkb!
20:10:07 <clarkb> #endmeeting