19:01:08 <clarkb> #startmeeting infra
19:01:09 <openstack> Meeting started Tue May 14 19:01:08 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:12 <openstack> The meeting name has been set to 'infra'
19:01:18 <clarkb> #link http://lists.openstack.org/pipermail/openstack-infra/2019-May/006369.html
19:02:04 <clarkb> #topic Announcements
19:02:41 <clarkb> This didn't end up on the agenda, but I think people are still in the travel/life spin cycle, so don't be surprised if things are a bit slow for a while
19:02:47 <clarkb> Today is my first real day back
19:03:28 <clarkb> Good news is the side of my house is back on and I haven't observed any new leaking \o/
19:03:44 <clarkb> #topic Actions from last meeting
19:03:53 <clarkb> #link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-05-07-19.01.txt minutes from last meeting
19:04:03 <clarkb> I had an action to send out a PTG/summit/forum recap
19:04:10 <clarkb> #link http://lists.openstack.org/pipermail/openstack-infra/2019-May/006367.html Denver 2019 Summit/Forum/PTG Recap
19:04:26 <clarkb> As mentioned, feel free to follow up with any questions or requests for clarification that arise out of that
19:04:44 <clarkb> As expected it was an exhausting trip
19:05:04 <clarkb> I think cross project efforts like infra and qa etc suffered under the newer ptg format
19:05:22 <clarkb> (there were far fewer people in our room(s) than in the more siloed efforts)
19:05:28 <clarkb> but it was still fairly productive
19:05:57 <clarkb> #topic Priority Efforts
19:06:06 <clarkb> Let's dive right in to the fun stuff
19:06:13 <clarkb> #topic Update Config Management
19:06:39 <clarkb> We are super close to finishing the puppet 4 upgrades. After this meeting I want to approve the zuul puppet-4 upgrade change
19:06:47 <cmurphy> \o/
19:06:48 <clarkb> then we can rebase the lists.o.o change without needing to rebase the zuul change
19:07:01 <clarkb> and depending on how the zuul change goes we can likely get the lists.o.o upgrade in today as well
19:07:55 <fungi> sounds good to me
19:07:58 <clarkb> ianw: do you want to talk about the ansibling of the mirrors, and how that will enable us to do TLS (which should improve docker), now or when we get to its official spot on the agenda?
19:08:06 <fungi> i'll be around to help fix if it goes sideways
19:08:26 <clarkb> (I think that effort is fairly tightly coupled to the config management updates at this point as we are looking at converting those nodes to ansible and improving docker test job success rates)
19:08:37 <ianw> i don't mind :)
19:09:23 <clarkb> let's talk about that now then as I think it is useful to have in this context
19:09:38 <clarkb> yesterday you mentioned the change isn't quite ready for review yet. This morning I noticed that the puppet jobs are unhappy with it
19:09:51 <clarkb> Anything we can do to help or will you send up the signal flare when we should look at it?
19:10:51 <ianw> ok; so to catch people up, we only support letsencrypt for .opendev.org domain names, because we use domain authentication and openstack.org is hosted in rax which needs a different path
19:11:15 <clarkb> #link https://review.opendev.org/#/c/658281/ Ansible deployed mirrors so they can be LetsEncrypted more easily
19:11:15 <ianw> so to serve the mirrors over https we need .opendev.org names for them
19:11:47 <ianw> my first thought was that we could do something like point records to the existing mirrors, and deploy certs
19:12:11 <ianw> but that turned out not to be KISS; it meant more puppet hacking and going what felt like the wrong way
19:12:43 <ianw> so i'd propose we just start a new mirror server in each region and manage it fully via ansible
19:13:21 <ianw> which is ultimately 658281 above
19:13:30 <ianw> there's some intermediate steps below that
19:13:47 <ianw> #link https://review.opendev.org/658930 -- signs fake letsencrypt cert with a fake CA so apache can start unmodified in test environment
19:13:59 <ianw> #link https://review.opendev.org/652801 -- wires in handlers to be called when letsencrypt renews certs
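For context, a minimal sketch of the handler wiring being discussed; the script path and handler name here are hypothetical, not the actual contents of 652801. The idea is that the renewal task notifies a handler, so apache reloads only when a cert actually changes:

```yaml
# Hedged sketch only; /usr/local/bin/acme-renew and the handler name
# are invented for illustration, not taken from the real role.
- hosts: mirror
  tasks:
    - name: Renew letsencrypt certificate
      command: /usr/local/bin/acme-renew {{ mirror_fqdn }}
      register: renew
      changed_when: "'renewed' in renew.stdout"
      notify: mirror cert updated
  handlers:
    - name: mirror cert updated
      service:
        name: apache2
        state: reloaded
```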
19:14:08 <clarkb> the plan seems reasonable. Then we can have the new mirrors running with TLS enabled and update the site var for zuul to talk to the opendev mirrors instead
19:14:13 <ianw> fungi: i would particularly like your eyes on 658930
19:14:38 <clarkb> oh there is a new ps on the handlers change (I'll re-review it)
19:14:39 <fungi> starred, thanks
19:14:56 <ianw> yep, and as mentioned in the CL, it should be ok to CNAME the existing mirrors to the new mirrors; it's set up for both http and https
19:15:24 <clarkb> cool that gives us extra belts and suspenders
19:15:27 <ianw> i wasn't quite sure about the other ports; would they be switched to ssl, or do we need to have a similar thing where ssl is a separate port?
19:16:01 <clarkb> ianw: I think we can safely update the docker ports to be ssl because docker doesn't take urls
19:16:08 <clarkb> instead you just give it a flag if it is ok to not do ssl
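To make that concrete: docker's daemon.json takes mirror entries with the scheme included, and hosts allowed to skip TLS go in a separate insecure-registries list rather than in a URL. A sketch with made-up hostname and port:

```yaml
# Illustrative only; the mirror FQDN and port are invented here.
- hosts: docker-test-nodes
  tasks:
    - name: Point docker at the regional TLS mirror
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "registry-mirrors": ["https://mirror.region.example.opendev.org:4443"]
          }
      notify: restart docker
  handlers:
    - name: restart docker
      service:
        name: docker
        state: restarted
```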
19:16:30 <clarkb> but for the main mirror at 80 we probably want a corresponding 443 so that stuff can opt into https for that
19:16:45 <ianw> i also considered docker for deployment, but i didn't see where it really fit here.  the mirrors must be in each region specifically; i mean that's the whole point
19:16:48 <clarkb> then after the initial switch we can make 80 redirect to 443
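That eventual redirect could be as small as an ansible-managed vhost like the following; the hostname and file path are illustrative, and it assumes a "reload apache" handler is defined elsewhere:

```yaml
- name: Redirect plain-http mirror traffic to https
  copy:
    dest: /etc/apache2/sites-enabled/50-mirror-redirect.conf
    content: |
      <VirtualHost *:80>
        ServerName mirror.region.example.opendev.org
        Redirect permanent / https://mirror.region.example.opendev.org/
      </VirtualHost>
  notify: reload apache
```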
19:17:16 <ianw> the main thing this is doing is being an afs client, something that doesn't make sense in a container as it installs kernel modules to the host system
19:17:28 <clarkb> that and run a very stable web server
19:17:35 <clarkb> I agree that ansible alone is probably sufficient for this
19:17:41 <clarkb> (and likely preferable)
19:18:00 <ianw> at that point, the container is just a webserver with a lot of stuff mapped into it, which doesn't seem any different to just "apt-get install" ing it
19:18:25 <ianw> right, yeah we don't even have complex cgi dependencies or anything; just that one vhost
19:18:48 <clarkb> I think if we wanted to build apache or $webserver ourselves containers would make sense but the distro version has been fine so far
19:18:51 <fungi> yeah, i think we should push to wrap things in containers when it makes sense to do so, and just rely on distro packages for stuff on the server when it doesn't
19:19:33 <fungi> don't container merely for containering's sake
19:19:33 <ianw> cool; that was my only concern, that people might think it needs to be wrapped up
19:20:07 <ianw> so in short, should be ready for review.  if it's looking good, we can bring up a single node first and check it out, and even switch just one node in with a CNAME from openstack.org and monitor it for a bit
19:20:20 <clarkb> oh I like that idea
19:20:27 <fungi> me too
19:20:37 <clarkb> +1 from me on that. I can review the stack while waiting on zuul puppet 4 things to happen
19:21:26 <clarkb> alright anything else on this or should we move on?
19:21:33 <ianw> nope, that's it, thanks!
19:22:21 <clarkb> #topic OpenDev
19:23:10 <clarkb> If you haven't already, please read my ptg summary as it covers some of the thoughts we had during the PTG on opendev next steps (largely around how to delegate responsibility for things to users)
19:23:53 <clarkb> The other major todo we have is the cgit cluster cleanup (which I've got on my todo list); I just want to convince myself we don't have a major gitea issue with the 404s mriedem pointed out before I commit to that
19:24:25 <clarkb> Also we want to update gitea to 1.8.x but our tests are failing on what appears to be a valid issue
19:24:43 <clarkb> if anyone is able to look at that it would be helpful (I can probably get to it after cgit farm is cleaned up)
19:25:39 <clarkb> I think we are still likely to be on track for an end-of-month downtime to do fixup project renames
19:25:54 <clarkb> we should prep rename changes for the projects under infra/opendev control that were missed
19:26:47 <clarkb> (I say all that realizing we are still somewhat short-staffed for a bit, so we'll get done what we get done)
19:27:19 <clarkb> graham (kata dev) had good feedback on some gitea UI weirdness
19:27:30 <clarkb> I'll try to push that feedback upstream if graham chooses not to
19:28:02 <clarkb> Anything else from the crowd on opendev related items?
19:29:22 <clarkb> Sounds like no. Lets move on
19:29:25 <clarkb> #topic Storyboard
19:29:47 <clarkb> diablo_rojo: still here? I noted last week that there was a pain points session with good feedback and that you all had a bug cleaning sprint
19:30:01 <clarkb> Anything else to add re storyboard since the summit/ptg/forum?
19:30:06 <fungi> diablo_rojo posted a denver summary for it to the... openstack-discuss ml?
19:30:28 <fungi> yeah that's it
19:30:38 <diablo_rojo_phon> Yes the discuss list
19:30:40 <diablo_rojo_phon> Finally.
19:30:43 <clarkb> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006214.html
19:30:48 <diablo_rojo_phon> Basically nothing was a surprise.
19:31:15 <fungi> it's prompted some follow-up discussion on the ml as well
19:31:22 <diablo_rojo_phon> The largest outcome was that we should do two different onboardings. One for users.
19:31:45 <diablo_rojo_phon> And one for development
19:32:14 <clarkb> ya we've had similar ideas for infra users and admins/devs
19:32:33 <fungi> also the telemetry team is ready to move their tracking to sb.o.o now that the new maintainers have obtained control over the existing lp projects to be able to wind them down there
19:33:10 <diablo_rojo_phon> Yay!
19:33:27 <diablo_rojo_phon> Swift also wants a test migration.
19:33:36 <diablo_rojo_phon> Which is on my to-do list for this week.
19:33:45 <clarkb> is the -dev server happy yet?
19:34:03 <clarkb> fungi: ^ you were probably the last person looking at that. Did you decide to just start over?
19:35:08 <fungi> as in manually renaming the projects in it? i think we should probably just stop having puppet autocreate project-groups on sb-dev, but no, i haven't found time to work on that yet
19:35:38 <fungi> stop having puppet autocreate projects on sb-dev too probably
19:35:45 <clarkb> the db migration puppetry is broken currently. Not sure if that will impact diablo_rojo_phon's ability to do a test migration for swift
19:36:06 <diablo_rojo_phon> ..sounds like it would
19:36:23 <fungi> it shouldn't. just have to manually create any projects and project-groups you want to try importing into
19:36:44 <fungi> which would have been the case regardless because they're not flagged to use-storyboard in projects.yaml anyway
19:36:45 <diablo_rojo_phon> Oh okay.
19:36:53 <clarkb> got it
19:36:55 <diablo_rojo_phon> I think that's how I was doing it before.
19:37:27 <fungi> what i'm talking about abandoning is a puppet exec we currently run which creates any missing projects and project-groups in sb-dev for projects marked as using (production) storyboard already
19:37:42 <fungi> that's of very limited utility anyway given how we've actually been using sb-dev
19:38:05 <clarkb> should we update the project rename playbook too, to remove the renames there?
19:38:35 <fungi> we never included them in the project rename playbook, which is why the puppet exec is now broken
19:38:54 <fungi> (unless someone added it very recently)
19:40:00 <clarkb> my change (which I thought had merged) https://review.opendev.org/#/c/655476/3/playbooks/rename_repos.yaml added it as the alternative fix for this problem
19:40:09 <fungi> ahh
19:40:20 <clarkb> The change hasn't merged though (we should really get that in well before we try to do more project renames...)
19:40:34 <clarkb> I can remove it if you prefer and push a new ps. Then you can remove the puppet exec safely
19:40:57 <fungi> well, given that the rename process simply updates database rows when it finds matches, i don't think it will hurt (it just won't usually do anything for nonexistent ones)
19:41:16 <clarkb> ok I'll leave it as is then I guess
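For illustration, the "updates rows when it finds matches" behavior amounts to something like the following; the table and column names are guesses, not the actual contents of rename_repos.yaml, and an UPDATE that matches zero rows is simply a no-op:

```yaml
# Hypothetical sketch of the rename step; the real playbook is rename_repos.yaml.
- name: Rename project rows in the storyboard database
  command: >
    mysql storyboard -e
    "UPDATE projects SET name='{{ new_name }}' WHERE name='{{ old_name }}';"
```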
19:43:11 <clarkb> Sounds like that may be it for storyboard
19:43:14 <clarkb> #topic General Topics
19:43:26 <clarkb> #link https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup Trusty server upgrades
19:43:46 <clarkb> I'm hoping that I'll be able to dedicate a good chunk of next week to this once I've climbed out of the everything-else backlog
19:43:52 <clarkb> we are very near the end
19:44:01 <clarkb> fungi said he can delete the old groups servers in the near future too
19:44:13 <fungi> yup, already on my to do list
19:44:28 <clarkb> Has anyone been working on the other servers still on the list? if so anything we can help with?
19:45:43 <ianw> no but i really should look at ask ... i think it's ready to go
19:46:01 <clarkb> ianw: that would be great
19:46:34 <clarkb> #topic Open Discussion
19:46:46 <clarkb> We have ~14 minutes for other topics if we want to keep talking here
19:47:25 <fungi> on opendev topics, did we mention the next batch project rename schedule?
19:47:39 <fungi> last week we said last friday of may?
19:47:52 <clarkb> fungi: yup and yup
19:47:57 <clarkb> the 31st
19:48:14 <clarkb> I think we are still on track for that assuming we get the rename fixes merged and can generate the list of things we need to rename
19:48:45 <fungi> cool
19:48:59 <fungi> that weekend is a holiday in au, ie and nz
19:49:28 <clarkb> my kids have a fourth birthday on the 2nd
19:49:36 <clarkb> but the size of this would be much smaller
19:49:48 <clarkb> so I'm more comfortable doing it friday and not needing to work all weekend like last time
19:49:55 <fungi> weekend before it is a holiday in uk and us, weekend before that is a holiday in ca ;)
19:50:12 <fungi> so yeah, gonna be a holiday weekend no matter what
19:50:54 * fungi needs all the countries and religions of the world to sync their holiday schedules for his convenience
19:51:53 <clarkb> That would be convenient
19:52:24 <clarkb> Anything else?
19:52:34 <clarkb> I'll end the meeting in a minute or two if not
19:54:07 <clarkb> Thank you everyone!
19:54:10 <clarkb> #endmeeting