19:01:12 <clarkb> #startmeeting infra
19:01:13 <openstack> Meeting started Tue Sep 29 19:01:12 2020 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:16 <openstack> The meeting name has been set to 'infra'
19:01:16 <fungi> oh, is that why i was here
19:01:33 <clarkb> #link http://lists.opendev.org/pipermail/service-discuss/2020-September/000101.html Our Agenda
19:01:42 <clarkb> #topic Announcements
19:01:43 <ianw> o/
19:02:14 <corvus> o/
19:02:48 <clarkb> The Summit and PTG are both being held virtually in a few weeks. If you plan to attend now is a great time to register (neither has fees associated but they want to be able to communicate event details as well as get code of conduct agreement)
19:03:13 <diablo_rojo> o/
19:03:43 <clarkb> If you need pointers to reg pages let me know, though I think google/DDG took me directly to them when I did it
19:03:46 <fungi> also virtual ansiblefest is on the way
19:03:55 <fungi> which is likely of interest to at least some of us
19:04:38 <fungi> zuul is going to have a "booth" there, though still not entirely clear what that entails (from descriptions it's more like an interactive poster with contact info)
19:04:44 <clarkb> it will probably be a fun one with the 2.10 release of ansible as well
19:05:00 <corvus> does the booth need personnel?
19:05:06 <clarkb> I think we'll learn more about zuul booth details on thursday
19:05:17 <corvus> like, should we sign up for chat duty?
19:05:17 <corvus> k
19:05:33 <fungi> corvus: they didn't spring for the live q&a tier, so they're just listing contact e-mail addresses from what i understand
19:06:08 <corvus> interesting.
19:06:22 <clarkb> we can let #zuul know after the thursday thing if it sounds like more synchronous help will be useful
19:06:33 <fungi> it was a bit of an upcharge, sounded like, and not a small one
19:07:13 <clarkb> #topic Actions from last meeting
19:07:17 <clarkb> #link http://eavesdrop.openstack.org/meetings/infra/2020/infra.2020-09-22-19.01.txt minutes from last meeting
19:07:26 <clarkb> There were no recorded actions last meeting
19:07:48 <clarkb> #topic Priority Efforts
19:08:00 <clarkb> #topic Update Config Management
19:08:12 <clarkb> #link https://review.opendev.org/#/q/status:open+project:opendev/system-config+branch:master+topic:graphite-final Finalize graphite docker conversion
19:08:21 <clarkb> ianw has ^ those changes up to finish converting graphite over to docker
19:09:09 <clarkb> those changes appear straightforward and just need a second reviewer
19:09:19 <clarkb> Are there any other config management related updates?
19:11:02 <clarkb> #topic OpenDev
19:11:45 <clarkb> Last week was a productive one for gitea. We upgraded to 1.12.4, fixed a bug in our startup timeout config, and set project descriptions so they are updated in gitea
19:12:24 <clarkb> thank you for all the help with that. Really shows the benefits of being able to test our deployments in CI too
19:12:37 <ianw> #link https://review.opendev.org/754070
19:12:51 <ianw> that fixes the gitea apache proxy too, which fell out of some of the issues we had with syncing
19:13:31 * clarkb adds that to the todo list
19:14:19 <clarkb> On the gerrit side of things I've been swamped with fires and other things. But I looked at my gerrit change backlog today and it is really tiny so I think this week I'll be able to look at review-test
19:14:24 <clarkb> maybe even this afternoon
19:14:47 <fungi> on a related note, today i followed a url to gitea which would have required symlink traversal and it came up as a 500 internal server error
19:14:56 <fungi> not sure if that used to work
19:15:32 <clarkb> fungi: I want to say it did, there is likely a traceback in the gitea docker log if we want to debug further
19:15:51 <clarkb> upstream is pretty responsive too if we just end up filing a bug
19:16:33 <clarkb> Any other opendev related items to bring up?
19:17:48 <clarkb> #topic General topics
19:17:55 <clarkb> #topic PTG Planning
19:18:10 <clarkb> As mentioned earlier please register if you plan to join us
19:18:14 <clarkb> #link https://etherpad.opendev.org/opendev-ptg-planning-oct-2020
19:18:35 <clarkb> I'm sure I'm not the only person with thoughts and ideas :) feel free to add yours on that etherpad
19:18:57 <clarkb> in particular if you want to be involved in a particular discussion please note that and I'll do my best to accommodate it with timezones
19:20:37 <clarkb> also note that we'll use meetpad again. I'm happy to do test calls between now and the PTG to help people get local setups working well if necessary
19:21:05 <corvus> beer optional on test calls?
19:21:08 <fungi> should we consider scaling it back up again?
19:21:18 <clarkb> corvus: I won't complain :)
19:21:35 <fungi> corvus: i thought beer was mandatory on test calls and optional during the conference, but maybe i got that backwards
19:21:35 <clarkb> fungi: yes, but maybe not as much as last time. I'm thinking possibly just a single jvb being added
19:22:02 <clarkb> fungi: I think what we found last time is most of the scaling limits seem to be in client browsers and we hit that well before the server side shows problems
19:22:35 <fungi> makes sense, but yes i agree having more than one jvb server for call balancing seemed to be beneficial
19:23:31 <fungi> (as in a dedicated jvb in addition to the aio)
19:23:41 <clarkb> fungi: I think we still have a dedicated jvb fwiw
19:23:49 <clarkb> I'm suggesting we have 2 for the ptg
19:24:02 <fungi> oh, or did we split the jvb off the main server? yeah i guess so
19:24:06 <clarkb> but if that seems unnecessary I'm happy to see how it does as is with aio + jvb and scale up if necessary
19:24:57 <fungi> er, nope there's also a jvb running on the main meetpad.o.o
19:25:24 <fungi> in addition to the one on jvb01
19:25:36 <fungi> so i guess we'd have a third jvb running
19:25:40 <fungi> cool
19:26:06 <clarkb> last time we had 5
19:26:38 <fungi> okay, yes that seems like plenty of capacity if utilization is in the same neighborhood as last time
19:27:28 <clarkb> #topic Rehoming tarballs
19:27:53 <clarkb> ianw: I'll admit I'm not super up to speed on this one beyond "we were/are publishing tarballs to the wrong locations"
19:28:26 <clarkb> in particular things were/are going into tarballs.opendev.org/openstack/ when they should go into opendev/ zuul/ x/ etc ?
19:28:33 <ianw> umm, i guess we're not publishing to the wrong location, but we never moved the content out of the old location
19:28:49 <ianw> so now some things have tarballs in openstack/ and opendev/
19:29:06 <clarkb> gotcha
19:29:27 <ianw> so basically we should move everything to where it is homed now
19:29:46 <ianw> #link https://review.opendev.org/#/c/754257/
19:29:49 <fungi> also there was a vos release outage yesterday which we at first thought might be related to the rehoming work... turned out afsdb02 was simply hung
19:30:17 <ianw> that is a script that makes a script, that script then makes a script to move things that need moving
19:30:42 <clarkb> ianw: have the jobs been updated as well?
19:30:51 <clarkb> seems like that needs to be done otherwise we'll end up moving stuff again
19:31:39 <ianw> #link http://paste.openstack.org/show/798368/
19:31:43 <ianw> is the list of things to move
19:31:58 <ianw> the jobs have been updated; that's why we have some things already at opendev
19:32:28 <ianw> e.g.
19:32:30 <ianw> https://tarballs.opendev.org/openstack/bindep/
19:32:38 <ianw> https://tarballs.opendev.org/opendev/bindep/
19:33:10 <ianw> so for the peanut gallery
19:33:32 <ianw> a) does the script / results of the script look ok to run (linked above)?
19:33:47 <fungi> only skimmed so far but looks right to me
19:34:27 <clarkb> ya skimming it looks fine to me. The one thing we may want to check is if there are conflicts with the dest side?
19:34:28 <ianw> b) what do we want to do about the old directories?  seems we could either 1) symlink to new 2) do apache redirects (but won't show up on tarballs) 3) just move and notify people of the new location with a list post
19:34:31 <clarkb> I don't expect there to be any
19:35:00 <clarkb> corvus: zuul is in the list ^ do you have an opinion on the symlinks vs redirects vs do nothing?
19:35:06 <ianw> sorry i mean 2) won't show up if something happens to be looking on AFS
19:35:24 <fungi> seems like we could generate a redirect list from the renames metadata in opendev/project-config
19:35:58 <fungi> we've not published afs path references that i'm aware of, but projects have included the urls in things like release announcements and documentation
19:36:05 <corvus> i feel like #3 would be acceptable for zuul
19:36:28 <corvus> our primary distribution channels are pypi and dockerhub
19:37:23 <fungi> #3 is certainly the minimum we should do, i agree. #2 might be nice if someone has the time. #1 is unlikely to be useful to anyone, and could make search results weird since there would appear to be duplicate copies in two locations
19:37:53 <corvus> fungi: agree
19:38:01 <ianw> ok, i think I can do #2 fairly easily by updating the script, so we'll go with that
19:38:09 <clarkb> wfm
19:38:17 <ianw> seems we're on board with the general idea, so i'll update it and work on it during the week
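For context on option #2, here is a minimal sketch of the redirect-generation idea fungi raised, assuming a hypothetical flat old-path: new-path YAML mapping rather than the actual layout of the renames metadata in opendev/project-config (ianw's real script is the one linked at https://review.opendev.org/#/c/754257/):

```python
#!/usr/bin/env python3
# Hedged illustration only, not the change under review: turn a rename
# mapping into Apache RedirectMatch rules for the tarballs vhost.
# Assumes a hypothetical YAML file of {"openstack/bindep": "opendev/bindep"}
# style entries; the real renames metadata may be structured differently.
import sys

import yaml  # PyYAML


def redirect_rules(rename_file):
    with open(rename_file) as f:
        renames = yaml.safe_load(f)
    for old, new in sorted(renames.items()):
        # Permanent redirect so e.g. /openstack/bindep/... keeps resolving
        # after the tree is moved to /opendev/bindep/...
        yield 'RedirectMatch permanent "^/{}/(.*)$" "/{}/$1"'.format(old, new)


if __name__ == '__main__':
    for rule in redirect_rules(sys.argv[1]):
        print(rule)
```

Run against such a mapping it would print one RedirectMatch line per moved project, which could then be dropped into the tarballs site's Apache config.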
19:39:54 <clarkb> #topic Zuul tarball publishing
19:39:58 <clarkb> This is the related topic
19:40:15 <clarkb> the issue here is we're extracting the js tarball into afs right?
19:40:22 <clarkb> so the tarballs page for zuul is the dashboard?
19:40:42 <corvus> link?
19:40:45 <ianw> that's right, well i've deleted it now but it will do it again at release
19:41:11 <fungi> i want to say we had that going on with storyboard-webclient tarballs for a while and it turned out we were using the wrong job, though i no longer recall the details
19:41:35 <ianw> right, there are different jobs
19:41:38 <ianw> #link https://review.opendev.org/#/c/754245/
19:41:55 <ianw> that changes it to use the -tarball job
19:42:11 <ianw> but the question is, is there any use for the expanded javascript contents any more?
19:42:39 <corvus> i don't understand why anything was ever getting extracted
19:43:11 <ianw> corvus: that's what the "-deployment" job does ... takes the web tarball and extracts it
19:43:26 <corvus> into afs?  that doesn't make sense to me
19:43:56 <ianw> well it sounds like 754245 is the right thing to do then
19:44:16 <corvus> to my knowledge, there was never any intent to extract the contents of the js build to afs.  the only intent was to publish a tarball to afs that we would download and then extract on our zuul web server.
19:44:27 <corvus> sounds like wires got crossed somewhere
19:44:30 <clarkb> ya I think the job just didn't quite do the right thing
19:45:15 <ianw> ok, so i was also wondering if we even really need to publish the web tarball at all, now that we have containers
19:45:38 <corvus> ianw: probably worth a query on zuul-discuss to see if anyone is using it, or if we were the last.
19:45:59 <corvus> (i suspect we were the last and dropping it entirely would be okay)
19:46:04 <fungi> there was a command for deploying the tarball which we referenced in zuul docs right? may want to double-check that's not still present
19:46:15 <ianw> ok, i can send a mail.  754245 can probably go in regardless
19:46:48 <ianw> so with it settled that we really had no intention of expanding the tarball on disk, part 2 is the historical publishing
19:46:52 <corvus> fungi, ianw: yes, it's worth a pass through the docs first too
19:47:12 <ianw> as noted before we have https://tarballs.opendev.org/openstack/zuul/
19:47:30 <ianw> but the opendev publishing jobs are not publishing to AFS, only to PyPI
19:47:44 <ianw> it seems from the prior discussion today that this may be intentional
19:47:53 <ianw> <corvus> our primary distribution channels are pypi and dockerhub
19:48:13 <ianw> but i'm not sure?  do we want zuul to publish tarballs to afs as well, and it just didn't get done?
19:49:43 <corvus> i don't recall a strong opinion
19:50:01 <corvus> i think maybe we mostly don't care?
19:50:04 <clarkb> I wonder if the fedora packagers would have an opinion?
19:50:16 <clarkb> I think historically the tarballs we (opendev) publish have been to support packaging efforts
19:50:25 <clarkb> but if the packagers are looking at git or pypi then we don't need to do that for zuul?
19:51:26 <fungi> these days most distros pull tarballs for python packages from pypi
19:51:44 <ianw> oh, the javascript is back anyway @ https://tarballs.opendev.org/zuul/zuul/
19:51:46 <fungi> or they use upstream git repositories
19:52:14 <ianw> it publishes the master @ https://tarballs.opendev.org/zuul/nodepool/
19:54:24 <clarkb> Anything else?
19:54:56 <ianw> i guess if we fix 754245, i'll bring it up on the discuss list and we can decide if we keep the jobs at all
19:55:07 <clarkb> sounds good, thanks
19:55:14 <clarkb> #topic Splitting puppet else into specific infra-prod job
19:55:23 <clarkb> I don't think any work has started on this yet, but wanted to double check
19:55:48 <ianw> no, but i wanted to get rid of graphite first; and also mirror-update is really close with just the reprepro work
19:56:00 <ianw> i figured a good way to reduce it is to remove bits :)
19:56:19 <clarkb> ++
19:56:41 <clarkb> We're basically at the end of our hour so I'll skip ahead to open discussion
19:56:44 <clarkb> #topic Open Discussion
19:56:51 <clarkb> if there is anything else you want to bring up now is a good time for it :)
20:00:02 <fungi> thanks clarkb!
20:00:07 <clarkb> Sounds like that may be it. Thank you everyone!
20:00:12 <clarkb> #endmeeting