19:01:12 #startmeeting infra
19:01:13 Meeting started Tue Sep 29 19:01:12 2020 UTC and is due to finish in 60 minutes. The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:16 The meeting name has been set to 'infra'
19:01:16 oh, is that why i was here
19:01:33 #link http://lists.opendev.org/pipermail/service-discuss/2020-September/000101.html Our Agenda
19:01:42 #topic Announcements
19:01:43 o/
19:02:14 o/
19:02:48 The Summit and PTG are both being held virtually in a few weeks. If you plan to attend now is a great time to register (neither has fees associated but they want to be able to communicate event details as well as get code of conduct agreement)
19:03:13 o/
19:03:43 If you need pointers to reg pages let me know, though I think google/DDG took me directly to them when I did it
19:03:46 also virtual ansiblefest is on the way
19:03:55 which is likely of interest to at least some of us
19:04:38 zuul is going to have a "booth" there, though still not entirely clear what that entails (from descriptions it's more like an interactive poster with contact info)
19:04:44 it will probably be a fun one with the 2.10 release of ansible as well
19:05:00 does the booth need personnel?
19:05:06 I think we'll learn more about zuul booth details thursday
19:05:17 like, should we sign up for chat duty?
19:05:17 k
19:05:33 corvus: they didn't spring for the live q&a tier, so they're just listing contact e-mail addresses from what i understand
19:06:08 interesting.
19:06:22 we can let #zuul know after the thursday thing if it sounds like more synchronous help will be useful
19:06:33 it was a bit of an upcharge, sounded like, and not a small bit
19:07:13 #topic Actions from last meeting
19:07:17 #link http://eavesdrop.openstack.org/meetings/infra/2020/infra.2020-09-22-19.01.txt minutes from last meeting
19:07:26 There were no recorded actions last meeting
19:07:48 #topic Priority Efforts
19:08:00 #topic Update Config Management
19:08:12 #link https://review.opendev.org/#/q/status:open+project:opendev/system-config+branch:master+topic:graphite-final Finalize graphite docker conversion
19:08:21 ianw has ^ those changes up to finish converting graphite over to docker
19:09:09 those changes appear straightforward and just need a second reviewer
19:09:19 Are there any other config management related updates?
19:11:02 #topic OpenDev
19:11:45 Last week was a productive one for gitea. We upgraded to 1.12.4, fixed a bug in our startup timeout config, and set project descriptions so they are updated in gitea
19:12:24 thank you for all the help with that. Really shows the benefits of being able to test our deployments in CI too
19:12:37 #link https://review.opendev.org/754070
19:12:51 that fixes the gitea apache proxy too, which fell out of some of the issues we had with syncing
19:13:31 * clarkb adds that to the todo list
19:14:19 On the gerrit side of things I've been swamped with fires and other things. But I looked at my gerrit change backlog today and it is really tiny so I think this week I'll be able to look at review-test
19:14:24 maybe even this afternoon
19:14:47 on a related note, today i followed a url to gitea which would have required symlink traversal and it came up as a 500 ISE
19:14:56 not sure if that used to work
19:15:32 fungi: I want to say it did, there is likely a traceback in the gitea docker log if we want to debug further
19:15:51 upstream is pretty responsive too if we just end up filing a bug
19:16:33 Any other opendev related items to bring up?
19:17:48 #topic General topics
19:17:55 #topic PTG Planning
19:18:10 As mentioned earlier please register if you plan to join us
19:18:14 #link https://etherpad.opendev.org/opendev-ptg-planning-oct-2020
19:18:35 I'm sure I'm not the only person with thoughts and ideas :) feel free to add yours on that etherpad
19:18:57 in particular if you want to be involved in a particular discussion please note that and I'll do my best to accommodate it with timezones
19:20:37 also note that we'll use meetpad again. I'm happy to do test calls between now and the PTG to help people get local setups working well if necessary
19:21:05 beer optional on test calls?
19:21:08 should we consider scaling it back up again?
19:21:18 corvus: I won't complain :)
19:21:35 corvus: i thought beer was mandatory on test calls and optional during the conference, but maybe i got that backwards
19:21:35 fungi: yes, but maybe not as much as last time. I'm thinking possibly just a single jvb being added
19:22:02 fungi: I think what we found last time is most of the scaling limits seem to be in client browsers and we hit that well before the server side shows problems
19:22:35 makes sense, but yes i agree having more than one jvb server for call balancing seemed to be beneficial
19:23:31 (as in a dedicated jvb in addition to the aio)
19:23:41 fungi: I think we still have a dedicated jvb fwiw
19:23:49 I'm suggesting we have 2 for the ptg
19:24:02 oh, or did we split the jvb off the main server? yeah i guess so
19:24:06 but if that seems unnecessary I'm happy to see how it does as is with aio + jvb and scale up if necessary
19:24:57 er, nope there's also a jvb running on the main meetpad.o.o
19:25:24 in addition to the one on jvb01
19:25:36 so i guess we'd have a third jvb running
19:25:40 cool
19:26:06 last time we had 5
19:26:38 okay, yes that seems like plenty of capacity if utilization is in the same neighborhood as last time
19:27:28 #topic Rehoming tarballs
19:27:53 ianw: I'll admit I'm not super up to speed on this one beyond "we were/are publishing tarballs to the wrong locations"
19:28:26 in particular things were/are going into tarballs.opendev.org/openstack/ when they should go into opendev/ zuul/ x/ etc ?
19:28:33 umm, i guess we're not publishing to the wrong location, but we never moved the old location
19:28:49 so now some things have tarballs in openstack/ and opendev/
19:29:06 gotcha
19:29:27 so basically we should move everything to where it is homed now
19:29:46 #link https://review.opendev.org/#/c/754257/
19:29:49 also there was a vos release outage yesterday which we at first thought might be related to the rehoming work... turned out afsdb02 was simply hung
19:30:17 that is a script that makes a script; that script then makes a script to move things that need moving
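(For context, a minimal illustrative sketch of the kind of two-stage generation described here: a generator that emits the shell commands to run. The AFS mount point, the sample rename pairs, and the helper name are assumptions for illustration only; the real generator is change 754257 and the real move list is in the paste linked just below.)

    #!/usr/bin/env python3
    # Illustrative sketch only -- not the actual 754257 script.
    # The AFS mount point and the sample (old, new) pairs are assumptions.
    AFS_ROOT = "/afs/openstack.org/project/tarballs.opendev.org"

    SAMPLE_RENAMES = [
        ("openstack/bindep", "opendev/bindep"),
        ("openstack/zuul", "zuul/zuul"),
    ]

    def emit_moves(renames):
        """Print one shell mv command per rehomed project directory."""
        for old, new in renames:
            print(f"mv {AFS_ROOT}/{old} {AFS_ROOT}/{new}")

    if __name__ == "__main__":
        emit_moves(SAMPLE_RENAMES)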
19:30:42 ianw: have the jobs been updated as well?
19:30:51 seems like that needs to be done otherwise we'll end up moving stuff again
19:31:39 #link http://paste.openstack.org/show/798368/
19:31:43 is the list of things to move
19:31:58 the jobs have been updated; that's why we have some things already at opendev
19:32:28 e.g.
19:32:30 https://tarballs.opendev.org/openstack/bindep/
19:32:38 https://tarballs.opendev.org/opendev/bindep/
19:33:10 so for the peanut gallery
19:33:32 a) does the script / results of the script look ok to run (linked above)?
19:33:47 only skimmed so far but looks right to me
19:34:27 ya skimming it looks fine to me. The one thing we may want to check is if there are conflicts with the dest side?
19:34:28 b) what do we want to do about the old directories? seems we could either 1) symlink to new 2) do apache redirects (but won't show up on tarballs) 3) just move and notify people of the new location with a list post
19:34:31 I don't expect there to be any
19:35:00 corvus: zuul is in the list ^ do you have an opinion on the symlinks vs redirects vs do nothing?
19:35:06 sorry i mean 2) won't show up if something happens to be looking on AFS
19:35:24 seems like we could generate a redirect list from the renames metadata in opendev/project-config
19:35:58 we've not published afs path references that i'm aware of, but projects have included the urls in things like release announcements and documentation
19:36:05 i feel like #3 would be acceptable for zuul
19:36:28 our primary distribution channels are pypi and dockerhub
19:37:23 #3 is certainly the minimum we should do, i agree. #2 might be nice if someone has the time. #1 is unlikely to be useful to anyone, and could make search results weird since there would appear to be duplicate copies in two locations
19:37:53 fungi: agree
19:38:01 ok, i think I can do #2 fairly easily by updating the script, so we'll go with that
19:38:09 wfm
19:38:17 seems we're on board with the general idea, so i'll update it and work on it during the week
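(Again purely illustrative: option #2 agreed above could be generated from the same rename metadata by emitting Apache redirect rules for the old paths. The rule style and the sample pairs below are assumptions, not the actual tarballs.opendev.org vhost configuration.)

    #!/usr/bin/env python3
    # Illustrative sketch only: emit permanent redirects for rehomed
    # tarball directories. Sample pairs and rule format are assumptions.
    SAMPLE_RENAMES = [
        ("openstack/bindep", "opendev/bindep"),
        ("openstack/zuul", "zuul/zuul"),
    ]

    def emit_redirects(renames):
        """Print one Apache RedirectMatch rule per rehomed directory."""
        for old, new in renames:
            print(f'RedirectMatch permanent "^/{old}/(.*)$" "/{new}/$1"')

    if __name__ == "__main__":
        emit_redirects(SAMPLE_RENAMES)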
19:39:54 #topic Zuul tarball publishing
19:39:58 This is the related topic
19:40:15 the issue here is we're extracting the js tarball into afs right?
19:40:22 so the tarballs page for zuul is the dashboard?
19:40:42 link?
19:40:45 that's right, well i've deleted it now but it will do it again at release
19:41:11 i want to say we had that going on with storyboard-webclient tarballs for a while and turned out we were using the wrong job, though i no longer recall the details
19:41:35 right, there are different jobs
19:41:38 #link https://review.opendev.org/#/c/754245/
19:41:55 that changes it to use the -tarball job
19:42:11 but the question is, is there any use for the expanded javascript contents any more?
19:42:39 i don't understand why anything was ever getting extracted
19:43:11 corvus: that's what the "-deployment" job does ... takes the web tarball and extracts it
19:43:26 into afs? that doesn't make sense to me
19:43:56 well then it sounds like 754245 is the right thing to do
19:44:16 to my knowledge, there was never any intent to extract the contents of the js build to afs. the only intent was to publish a tarball to afs that we would download and then extract on our zuul web server.
19:44:27 sounds like wires got crossed somewhere
19:44:30 ya I think the job just didn't quite do the right thing
19:45:15 ok, so i was wondering if we even really need to publish the web tarball at all as well, with containers now
19:45:38 ianw: probably worth a query on zuul-discuss to see if anyone is using it, or if we were the last.
19:45:59 (i suspect we were the last and dropping it entirely would be okay)
19:46:04 there was a command for deploying the tarball which we referenced in zuul docs right? may want to double-check that's not still present
19:46:15 ok, i can send a mail. 754245 can probably go in regardless
19:46:48 so with it sorted that we really had no intention of expanding the tarball on disk, part 2 was the historical publishing
19:46:52 fungi, ianw: yes, it's worth a pass through the docs first too
19:47:12 as noted before we have https://tarballs.opendev.org/openstack/zuul/
19:47:30 but the opendev publishing jobs are not publishing to AFS, only to PyPI
19:47:44 it seems from prior discussion today that may be intentional
19:47:53 our primary distribution channels are pypi and dockerhub
19:48:13 but i'm not sure? do we want zuul to publish tarballs to afs as well, and it just didn't get done?
19:49:43 i don't recall a strong opinion
19:50:01 i think maybe we mostly don't care?
19:50:04 I wonder if the fedora packagers would have an opinion?
19:50:16 I think historically the tarballs we (opendev) publish have been to support packaging efforts
19:50:25 but if the packagers are looking at git or pypi then we don't need to do that for zuul?
19:51:26 these days most distros pull tarballs for python packages from pypi
19:51:44 oh, the javascript is back anyway @ https://tarballs.opendev.org/zuul/zuul/
19:51:46 or they use upstream git repositories
19:52:14 it publishes the master @ https://tarballs.opendev.org/zuul/nodepool/
19:54:24 Anything else?
19:54:56 i guess if we fix 754245, i'll bring it up on the discuss list and we can decide if we keep the jobs at all
19:55:07 sounds good, thanks
19:55:14 #topic Splitting puppet else into specific infra-prod job
19:55:23 I don't think any work has started on this yet, but wanted to double check
19:55:48 no, but i wanted to get rid of graphite first; and also mirror-update is really close with just the reprepro work
19:56:00 i figured a good way to reduce it is to remove bits :)
19:56:19 ++
19:56:41 We're basically at the end of our hour so I'll skip ahead to open discussion
19:56:44 #topic Open Discussion
19:56:51 if there is anything else you want to bring up, now is a good time for it :)
20:00:02 thanks clarkb!
20:00:07 Sounds like that may be it. Thank you everyone!
20:00:12 #endmeeting