19:01:11 <clarkb> #startmeeting infra
19:01:11 <openstack> Meeting started Tue Apr  2 19:01:11 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:14 <openstack> The meeting name has been set to 'infra'
19:01:23 <clarkb> #link http://lists.openstack.org/pipermail/openstack-infra/2019-April/006299.html
19:01:33 <jroll> \o
19:01:35 <clarkb> #topic Announcements
19:01:56 <clarkb> I don't have any announcements. Anyone else have announcements?
19:03:04 <clarkb> #topic Actions from last meeting
19:03:07 <corvus> clarkb: i think you just made an announcement :)
19:03:29 <clarkb> #link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-03-26-19.01.txt minutes from last meeting
19:03:41 <clarkb> Thank you ianw for running the meeting last week
19:03:56 <clarkb> frickler: had an action to look into disk monitoring of nodepool builders
19:03:59 <clarkb> frickler: ^ any update on that?
19:04:25 <frickler> https://review.openstack.org/648365
19:04:32 <corvus> #link http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=66463&rra_id=all
19:04:38 <frickler> that seems to be solved
19:05:05 <frickler> should work for optional volumes on all of our servers
19:05:05 <corvus> ++ thanks!
19:05:16 <clarkb> cool
19:05:33 <clarkb> and I know shrews has a change up to fix the underlying issue with the builders
19:05:43 <clarkb> we should be well covered on that item going forward
19:06:21 <frickler> the usage looked pretty stable in the graphs
19:06:42 <Shrews> yep: https://review.openstack.org/647599 and parent
19:08:00 <clarkb> thank you to everyone who has helped make that better
19:08:08 <clarkb> #topic Priority Efforts
19:08:23 <clarkb> I'm skipping specs approvals as I don't think we have any to look at and we have a fairly full meeting otherwise
19:08:31 <clarkb> #topic Update Config Management
19:08:46 <clarkb> Before I took off on vacation I got a bunch more hosts upgraded to puppet-4
19:08:53 <clarkb> thank you to cmurphy for continuing to help and push on that
19:09:03 <cmurphy> o7
19:09:06 <corvus> hear hear
19:09:35 <clarkb> There are still more hosts to be updated under topic:puppet-4 if you have time to review those. I'm happy to approve as I can watch them and fix issues that arise
19:11:00 <clarkb> Anything new on the docker side of things?
19:11:25 <clarkb> Seems like we've got a pretty solid foundation for both and now it's mostly a matter of pushing the conversions? Not surprising those are moving slowly as we also tackle opendev things
19:11:56 <corvus> yeah, nothing new from me on that, and probably not until summit...
19:12:05 <corvus> but i'm looking forward to working on that at ptg
19:12:42 <clarkb> Considering ^ lets talk OpenDev
19:12:46 <clarkb> #topic OpenDev
19:13:20 <clarkb> How are we looking for the transition on the 19th?
19:13:55 <clarkb> are we down to building a script/tool/playbook to do the actual transition and collecting lists of renames?
19:14:15 <corvus> #link https://storyboard.openstack.org/#!/story/2004627
19:14:42 <corvus> fungi: how's the redirect stuff coming?
19:15:03 <corvus> (though i haven't seen fungi in this meeting yet)
19:15:32 <fungi> i started playing around with the hard-coded list of repositories idea for the non-openstack.org sites and it seems to work; i'll get something up that others can test later today, i hope
19:15:46 * fungi has been lurking and hacking on changes at the same time
19:16:25 <fungi> the rabbit hole on the script to do the repository changes is continuing to deepen though
19:16:41 <fungi> another scenario came to mind which i don't know if we need to be concerned with
19:17:18 <fungi> as we're also doing project renames, i expect rather a lot of namespace changes to embedded checkout file paths in zuul job definitions
19:17:55 <fungi> should this task also grow to attempt altering more than just the domain name in those?
19:18:21 <corvus> i'm inclined to say 'no' because the potential for error is high
19:18:26 <fungi> and is there anywhere else we should be similarly concerned about namespace changes within repo content?
19:19:20 <corvus> we can make sure that devstack, grenade, etc are fixed up quickly after the switch; that should solve a lot of problems
19:19:56 <ianw> fungi: is that different to what we mentioned a few meetings ago, things like the requirements files checkouts?
19:20:05 <clarkb> ya and the unittest type jobs should use the implicit $thisproject checkout
19:20:22 <fungi> ianw: yes, insofar as it's the namespaces, not just the domains
19:20:27 <corvus> but in general, trying to figure out if 'openstack/nova' is an active string constant reference or just some text is a lot of work
19:21:40 <fungi> so for example if a job definition refers to git.openstack.org/openstack-infra/zuul-jobs, there's a difference between rewriting that to opendev.org/openstack-infra/zuul-jobs vs opendev.org/zuul/zuul-jobs
19:22:07 <fungi> the former will be broken and need replacing with the latter after the maintenance
19:22:10 <ianw> fungi: ahh, so you're meaning (as corvus above says) places where 'openstack/nova' is found, but without a leading "openstack.org"?
19:22:28 <fungi> also possible yes, without the domain present at all
19:22:51 <corvus> we could probably safely include 'git.openstack.org/openstack/foo' in the translation...
19:23:18 <fungi> yes, however that means incorporating the list of namespace changes for all repositories into the routine if we do
19:23:28 <fungi> which is why i raise it as a point of additional complexity
19:23:30 <ianw> oh, i thought we were going to do that full rewrite in the jobs
19:23:42 <corvus> we need that in order to write .gitreview anyway
19:24:36 <corvus> so, yeah, i think regardless the script that generates our force-merge patches should perform the full rewrite (domain + org)
19:24:39 <fungi> er, that wasn't what i was expecting, no. i thought this script was replacing review.openstack.org with review.opendev.org in .gitreview files and then the normal project renaming playbook was taking care of the namespace portion
19:25:04 <fungi> since it would do that anyway
19:25:10 <corvus> the normal project renaming playbook makes .gitreview changes?
19:25:42 <fungi> oh, you're right, it doesn't. just moves files on disk and alters database rows
19:25:51 <fungi> okay, i'll plan to include that too
19:25:57 <corvus> kk
19:26:34 <jroll> side question: for the openstack namespaces, do you folks want those changes in the ethercalc sheet, or would something computer-readable (and writeable!) be easier?
19:26:35 <fungi> so if the hostname portion is present in a string in playbooks/roles we can assume we should also alter the namespace appearing within that string
19:27:02 <clarkb> #info Rename script should modify .gitreview and zuul job content to update hostnames and repo names
19:27:15 <clarkb> fungi: ya as the other hostname won't work (particularly in the case of gitreview and zuul jobs)
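[editor's note: for illustration, a minimal sketch of the combined rewrite captured in the #info above -- the rename map, the file selection, and the hostnames are assumptions, not the actual transition script]

    # illustrative sketch only: rewrite hostname + namespace together,
    # and only touch namespaces that are anchored by a hostname
    import pathlib

    # old org/name -> new org/name, e.g. merged from the ethercalc export
    RENAME_MAP = {
        "openstack-infra/zuul-jobs": "zuul/zuul-jobs",
    }

    def rewrite(text):
        # per the discussion, bare "openstack/nova" strings are too
        # risky to touch; require the git hostname as an anchor
        for old, new in RENAME_MAP.items():
            text = text.replace("git.openstack.org/" + old,
                                "opendev.org/" + new)
        return text.replace("review.openstack.org", "review.opendev.org")

    for path in pathlib.Path(".").rglob("*"):
        if path.is_file() and (path.name == ".gitreview"
                               or path.suffix in (".yaml", ".yml")):
            path.write_text(rewrite(path.read_text()))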
19:27:40 <clarkb> jroll: I believe we can export the ethercalc to csv which should be simple enough to then convert to something more structured if necessary
19:27:43 <fungi> jroll: the ethercalc is more for one-off/small groups of namespace changes. for large adjustments based on structured data it's probably easier to just let us know what you want done and we can generate that set ourselves
19:27:52 <corvus> jroll: i yield to fungi on that since it's likely to be input to his script, but i see no reason to use the spreadsheet if you want to arrange something else
19:28:08 <jroll> fungi: corvus: excellent, thanks :)
19:28:17 <jroll> fungi: I'll sync with you once our governance change merges
19:28:54 <corvus> and yeah, i expect we'll just export ethercalc to csv and translate/merge with the other data to input to the script
19:28:59 <fungi> thanks. less likely to end up with typos if i script up the list of openstack->whatever-catch-all namespace edits based on the governance projects.yaml
19:29:38 <jroll> yep
19:29:53 <fungi> given it's probably something like 1k renames
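[editor's note: a hedged sketch of generating that rename list from the governance projects.yaml as fungi suggests -- the "x/" catch-all namespace and the source of the full repo list are assumptions]

    # repos listed in governance keep their openstack namespace;
    # anything else under openstack*/ moves to a catch-all namespace
    import yaml

    with open("reference/projects.yaml") as f:
        governance = yaml.safe_load(f)

    official = set()
    for team in governance.values():
        for deliverable in team.get("deliverables", {}).values():
            official.update(deliverable.get("repos", []))

    all_repos = []  # e.g. from gerrit's full project list
    renames = {}
    for repo in all_repos:
        org, _, name = repo.partition("/")
        if org.startswith("openstack") and repo not in official:
            renames[repo] = "x/" + name  # "x/" target is an assumption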
19:31:16 <clarkb> Alright anything else on the opendev subject?
19:31:35 <ianw> just for reference
19:31:37 <ianw> #link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-03-26-19.01.log.html#l-81
19:31:44 <ianw> was the last meeting where we talked about that rename ^ too
19:32:30 <clarkb> #topic Storyboard
19:32:46 <clarkb> fungi diablo_rojo_phon SotK how are storyboard things?
19:33:09 <fungi> trove just relocated to storyboard.o.o
19:33:37 <fungi> (roughly two months after we stopped using trove for storyboard.o.o, the irony)
19:34:32 <fungi> i've been sort of buried under other things for the past week so not really sure what else is going on... i think the outreachy flood may have died down now that the deadline has passed
19:36:37 <clarkb> Ok sounds like we should move on
19:36:41 <fungi> yup
19:36:46 <clarkb> #topic General Topics
19:36:57 <fungi> i think diablo_rojo_phon is sucked into conference travel anyway
19:36:57 <clarkb> First up: PTG Planning
19:37:03 <clarkb> #link https://www.openstack.org/ptg#tab_schedule Schedule has us Thursday and Friday
19:37:21 <clarkb> It is my understanding that the schedule above is the "final" schedule and shows us thursday and friday
19:37:32 <clarkb> corvus: that means you don't have an idle day and we avoid the TC conflict for fungi
19:37:42 <clarkb> #link https://etherpad.openstack.org/2019-denver-ptg-infra-planning Agenda brainstorming
19:38:00 <clarkb> As we start to really dig into the opendev stuff feel free to add items ^ there for things we should followup on after the transition
19:39:42 <clarkb> next up is Letsencrypt progress
19:39:48 <clarkb> #link https://review.openstack.org/#/c/636759/ Ready for review
19:40:13 <clarkb> ianw says ^ is ready for review. That will end up being a big piece of transitioning further services over to opendev, so reviews are much appreciated
19:40:15 <ianw> yes please, and a few minor changes below that
19:41:16 <clarkb> Next is trusty server upgrade status
19:41:22 <clarkb> #link https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup
19:41:28 <clarkb> #link https://etherpad.openstack.org/p/lists.o.o-trusty-to-xenial lists.openstack.org in place upgrade notes
19:41:38 <clarkb> I've been picking up the work to test an in place upgrade for lists.openstack.org
19:41:58 <clarkb> The do-release-upgrade takes about 45 minutes, which should be a short enough outage for smtp (sending servers will retry delivery)
19:42:18 <clarkb> The new xenial server seems to not break our vhosting of mailman 2
19:42:30 <clarkb> I've created a test mailing list and sent mail through it successfully
19:43:13 <clarkb> The one big issue we've found: nothing I try gets mailman to preserve the original body content; instead it gets base64 encoded, because the default charset is utf8 and the python email lib converts utf8 email bodies to base64
19:43:30 <clarkb> This is only really an issue if considered in the context of dkim/dmarc
19:43:50 <clarkb> because dkim signatures won't verify if mailman converts the body content to base64, and the dmarc policy checks the body
19:44:27 <corvus> it's also.. well.. weird
19:44:38 <clarkb> I have confirmed that overriding /usr/lib/mailman/Mailman/Defaults.py to force the en language to ascii instead of utf8 fixes this behavior
19:44:42 <corvus> i mean, even aside from dmarc, it makes messages larger than necessary
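[editor's note: a quick way to see the stdlib behavior clarkb describes -- nothing mailman-specific, just python's email package with the two charsets]

    from email.mime.text import MIMEText

    # with utf-8 (ubuntu's patched mailman default) the body is
    # base64-encoded even though it is pure ascii; us-ascii stays 7bit
    utf8_msg = MIMEText("hello world", _charset="utf-8")
    ascii_msg = MIMEText("hello world", _charset="us-ascii")
    print(utf8_msg["Content-Transfer-Encoding"])   # base64
    print(ascii_msg["Content-Transfer-Encoding"])  # 7bit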
19:45:59 <corvus> whether that should be a blocker for us... <shrug>?
19:46:26 <clarkb> ya I'm sort of running out of ideas at this point short of actually learning how the deep internals of mailman work
19:46:53 <clarkb> we could force the list back to ascii which has downsides too
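[editor's note: the ascii override clarkb confirmed would look roughly like this; in mailman 2 it would normally live in mm_cfg.py rather than editing Defaults.py in place, and the exact call is an assumption]

    # mm_cfg.py does `from Defaults import *`, so add_language is in
    # scope; this re-registers 'en' with us-ascii instead of ubuntu's
    # patched utf-8 default
    add_language('en', 'English (USA)', 'us-ascii', 'ltr')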
19:47:31 <clarkb> other options could be to use mailman's new email munging features in the mailman on xenial to deal with dmarc though we've avoided those previously
19:48:22 <clarkb> At this point I think it would be good if someone else looked at it without my direct help (just to make sure I've not missed anything with pebkac)
19:48:49 <fungi> yeah, we at least got a glimpse of how those work in reality since kata-dev turned it on to deal with dmarc-signed messages (particularly from microsoft employees, if memory serves)
19:49:24 <clarkb> and if we don't turn up any good fixes after a second person looks at it then we should probably consider alternatives to dmarc handling
19:49:48 <clarkb> Let me know if you can help or have ideas
19:49:53 <corvus> i think deciding on an answer about whether the b64 encoding is expected and normal and okay regardless of dmarc is something we should do
19:50:02 <fungi> i think the method they enabled is the one which rewrites the from address on messages from domains which claim to require dkim enforcement, and wipes the dkim signatures from the headers
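[editor's note: for reference, a hedged sketch of flipping that from-munging setting on a list with bin/withlist -- the option value is from memory and should be checked against the installed Defaults.py]

    # save as set_dmarc.py next to bin/withlist and run:
    #   bin/withlist -l -r set_dmarc kata-dev
    def set_dmarc(mlist):
        # 1 = "Munge From" in mailman 2.1.18+; 2 wraps the message instead
        mlist.dmarc_moderation_action = 1
        mlist.Save()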
19:50:25 <corvus> perhaps by spinning up a new MM instance (without any of our config mgmt) and examining its behavior
19:50:38 <clarkb> corvus: the impression I got from mailman and python was that this is at least normal and expected from their side
19:50:40 <fungi> but yes, i agree, the base64 reencoding is also a concern on its own, dkim aside
19:51:16 <clarkb> note you'll need to use ubuntu for that, as they patch the default charset to utf8
19:51:29 <clarkb> so I guess mailman as an upstream still avoids this when lang is set to en
19:51:35 <corvus> clarkb: well, i mean... we don't know what changed?
19:52:02 <clarkb> corvus: correct I still haven't narrowed down why openstack-discuss for example doesn't do this
19:52:21 <clarkb> spinning up an unmodified mailman seems like a good next step
19:52:23 <clarkb> to confirm the behavior
19:52:40 <fungi> i do have a mailman3 instance up and running which wasn't built with our configuration management, but it's not mailman2 obviously and also hackily upgraded to bionic rather than xenial, so likely not useful for this exercise
19:53:10 <clarkb> fungi: maybe you can help with the mailman2 equivalent of that? I'm assuming it isn't as simple as apt-get install mailman on xenial? or maybe it is?
19:53:21 <clarkb> (I can probably drive that, will just have lots of questions)
19:54:09 <fungi> it basically is as simple as that, yes
19:54:17 <fungi> #link https://etherpad.openstack.org/p/mm3poc
19:54:19 <clarkb> cool, I'll take a look at that after lunch then
19:54:30 <fungi> that's mostly what i did for mailman 3 (used distro packages)
19:55:04 <fungi> it's also basically what our puppet-mailman module does too, it just also happens to do a lot of configuring
19:55:04 <clarkb> corvus: on figuring out what changed the two versions (ignoring ubuntu patches) are 2.1.16 on trusty and 2.1.20 on xenial
19:55:37 <clarkb> I did bzr clone and diff between those versions, which produces a fairly large diff, but maybe I should do more than a quick skim
19:55:48 <clarkb> also do a review of the ubuntu patchset I suppose
19:56:06 <clarkb> likely to not be a fast process unless I happen to get lucky
19:56:22 <clarkb> in any case that gives us two things to try
19:56:27 <clarkb> #topic Open Discussion
19:57:13 <clarkb> Wanted to quickly note that the i18n team seems to think we don't need to take urgent action re zanata. Would be good if we could talk to fedora about their plans though
19:57:29 <clarkb> anyone know current fedora people? (Maybe robyn can point us in the right direction)
19:57:45 <ianw> i don't, but i can see what i can find out
19:58:15 <ianw> maybe give me an action item so i don't forget :)
19:58:47 <clarkb> #action ianw check in with fedora on post zanata plans
19:59:04 <fungi> it would be swell to find out if fedora plans to continue translating things, and if so, how
19:59:25 <clarkb> And we are at time. Thank you everyone
19:59:40 <fungi> thanks clarkb!
19:59:43 <clarkb> #endmeeting