Friday, 2022-06-17

ianwanyway, before all this nb03 was borked again.  in a bit i'll try to see if i can get anything else there00:08
ianwi think it is leaked loop images01:04
ianwroot@nb03:/tmp# losetup --list | wc -l01:04
ianw901:04
ianwthat ain't right01:04
ianw# mount /dev/loop7 /tmp/mnt01:07
ianwmount: /tmp/mnt: /dev/loop7 already mounted or mount point busy.01:07
ianwroot@nb03:/opt/dib_tmp/dib_image.7BsVnO2D# mount | grep loop701:07
ianw... nothing01:07
ianwso it's mounted/busy, but not mounted?!01:07
ianwahh, hrm, /dev/mapper01:10
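
What seems to have happened: a failed dib build left device-mapper entries referencing the loop devices, so nothing showed in mount while the devices stayed busy. A minimal cleanup sketch, assuming stale dm entries are the cause (the entry name below is a placeholder):

    # list device-mapper targets; a stale dib entry would show up here
    dmsetup ls
    # remove the stale entry holding the loop device open
    dmsetup remove <stale-dib-entry>
    # the loop device should now be free to detach
    losetup --list
    losetup -d /dev/loop7
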
ykarelianw, frickler can you please review https://review.opendev.org/c/zuul/zuul-jobs/+/846206 and https://review.opendev.org/c/zuul/zuul-jobs/+/84620104:01
opendevreviewMerged zuul/zuul-jobs master: Fix two testing problems  https://review.opendev.org/c/zuul/zuul-jobs/+/84620604:23
*** pojadhav is now known as pojadhav|ruck04:25
ykarelianw, can you also check the other one too please04:26
ykarelhttps://review.opendev.org/c/zuul/zuul-jobs/+/84620104:26
ianwykarel: sorry, that one doesn't make any sense to me.  that variable has nothing to do with venv installation04:35
ianwi haven't quite figured out how dib is dying and leaving the dm devices around.  it is the centos 9-stream build.  i think it has to do with our current mirror woes04:43
ianwi'm going to clear out and reboot it again04:43
ykarelianw, i replied, i agree with you on this04:43
ykarelbut can we merge this to resolve the issue with translation update jobs04:43
ykareland address the concerns with follow-ups, as i fixed what was wrong with the installations04:44
ykarelseems the var was just reused for venv package installation instead of introducing a duplicate one04:45
ianwit seems like this came in via -> https://opendev.org/openstack/project-config/commit/c7d42980c649273eacf9361cfe20e5500ac3b6e704:47
ianwor somewhere around there04:47
ykarelyes right04:47
ianwi actually feel like reverting this change in zuul-jobs might be the most correct thing, rather than trying to fix it.  whatever it breaks, should be fixed another way04:47
ykarelactually this seems to be a mess; there was a series of patches to fix the propose-updates job04:48
ykareland then that broke the translation-updates job, which inherits from ^, as that has to run on bionic04:48
ykarelfrickler, may have more context around the original ones; /me just checked the translation ones04:49
opendevreviewIan Wienand proposed zuul/zuul-jobs master: Revert "Install venv for all platforms in ensure-pip"  https://review.opendev.org/c/zuul/zuul-jobs/+/84624804:52
ykarelk Thanks, let's see04:53
ianwi would prefer we do something like ^ and then go back and re-evaluate what's going on04:53
ykarelk sure04:53
ianwi get the feeling that this comes down to having packaged python3.8 and python3.9 installed on the same system, something that ensure-pip was not written to understand04:53
ykarellikely this will break jobs that were using it04:55
ykarelwhere the default python is something else04:55
ianwit might be that ensure-pip needs another argument like ensure_pip_python_package_versions: [python3.8, python3.9] to pull in all the right packages04:55
ianwbut that would be orthogonal to the ensure_pip_from_upstream* arguments (it's *not* upstream, it's packaged)04:56
* ykarel agrees04:57
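
On Debian/Ubuntu each interpreter's venv module ships in its own package, so the hypothetical argument above would boil down to something like this (package names assumed from the usual debuntu split):

    # hypothetical: ensure_pip_python_package_versions: [python3.8, python3.9]
    # would have the role install the matching per-interpreter venv packages
    sudo apt-get install -y python3.8-venv python3.9-venv
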
ianwi am in no way defending anything about ensure-pip.  it has grown from a mess of mixed python2/3 systems, and mixing in installing upstream pip globally and using pip packages locally04:58
ianwall we can try and do is not make it any worse04:58
ykarelianw, i think https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python/tasks/main.yaml would be more suitable05:04
ykarelfor installing -venv packages05:04
ianwthere is probably an argument for doing it in there, in that venv could be considered part of any working python system so pulling the package in is reasonable05:06
ianwand also consistency; pretty sure rh distros install it by default with python, only debuntu splits it into a separate package05:07
ykarelyes ^ true for ubuntu05:07
ykarelso we would need to install it explicitly in this case05:07
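
The failure mode in question: on Debian/Ubuntu the stdlib venv module errors out until the matching -venv package is present, so a role has to install it explicitly. A minimal sketch:

    # without python3-venv, `python3 -m venv` fails on debuntu asking
    # for the package; installing it first makes venv creation work
    sudo apt-get install -y python3-venv
    python3 -m venv /tmp/testenv
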
ianwit's interesting that role uses the variable "python_version".  that feels a little generic as a name to me05:09
*** chkumar|rover is now known as chandankumar05:10
*** elodilles_pto is now known as elodilles06:46
*** jpena|off is now known as jpena07:01
*** arxcruz is now known as arxcruz|rover07:08
fricklerI must admit I focused on getting reqs updates working and didn't care much about translations. maybe split the common parent job into different ones now that they diverge so much07:31
fricklerbut then the zanata setup is doomed anyway07:31
fricklerotoh splitting reqs updates into one job per python version and a final merge job would be a good project, too07:32
fricklerthe job needs python3.8 + python3.9 installed, which is where the current mess started07:35
frickler#status log pushed openstack/requirements 846277,1 to gate in order to unblock neutron07:44
opendevstatusfrickler: finished logging07:44
fricklerralonsoh: prometheanfire: ^^07:44
*** ysandeep|out is now known as ysandeep|afk08:10
*** rlandy|out is now known as rlandy10:30
*** ysandeep|afk is now known as ysandeep11:09
*** ysandeep is now known as ysandeep|afk11:47
fungiianw: if it helps, there's now a python3-full virtual package (and corresponding python3.9-full, et cetera) in debian as of bullseye and ubuntu as of impish (so also jammy), which does include -venv et cetera12:09
fungidoesn't solve bionic jobs though, of course12:13
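
For the newer releases fungi mentions, a single metapackage covers it:

    # installs the interpreter with -venv and friends included
    # (bullseye/impish and later)
    sudo apt-get install -y python3-full
    # or pinned to a specific interpreter version
    sudo apt-get install -y python3.9-full
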
Clark[m]ianw: ykarel: fungi: I don't think we can merge that revert without potentially breaking users. This being zuul-jobs we try to avoid doing that. We should fix the code in a way that doesn't change expected behavior. Then we can send email about potential breaking cleanups and land them after a waiting period12:15
Clark[m]I think that looks like ykarel's change to fix the loop, a follow-up to ensure-python to do similar, email telling people ensure-python needs to do it, then removing the code from ensure-pip later. But not upfront removal12:16
ykarel^ sounds good to me12:17
*** dviroel|out is now known as dviroel13:00
*** ysandeep|afk is now known as ysandeep13:16
BlaisePabon[m]1IDK much about zuul, but I am really into python.... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/DxqyHTfTrRbIzkvKAKGqzwiN)13:18
*** pojadhav|ruck is now known as pojadhav|dinner13:18
fricklercan one instruct the matrix bridge not to hyperlink long messages? I really don't like having to use an HTTP client every time someone sends something with "full message at https://matrix.org/..."13:21
ianwClark[m]: i don't want to say too much about the original as i'm missing the context of what happened, but it didn't update any documented behaviour -- if anything it broke the documented behaviour of the variable it's looping on because that was never supposed to be a key for venv packages13:31
ianwso imo it's more of a bug fix than a deprecation.  that said, if you or others want to swizzle things around i won't argue.  i do think now we've taken a closer look it's important we rework it, though13:32
Clark[m]Whether or not it is documented I think anyone relying on the -venv package being present for a single interpreter would break with that change. Multiple interpreters never worked due to the install failure13:36
opendevreviewLajos Katona proposed openstack/project-config master: Add github svinota/pyroute2 to project list  https://review.opendev.org/c/openstack/project-config/+/84636413:37
Clark[m]Blaise Pabon @blaisepabon:matrix.org: in many cases we're actually interested in the test platform as a whole including the shipped python. Using a separate source may make sense for some jobs but not generally.13:37
Clark[m]frickler: I'm not aware of one. I try to make IRC friendly messages when communicating via the bridge, but that may not be as apparent to users who don't see the IRC side13:38
*** pojadhav|dinner is now known as pojadhav|ruck13:40
fricklermaybe we can make a note somewhere in our IRC docs. at least then I could point people at that ;)13:40
fricklerClark[m]: also note how the reply you sent has the matrix-side name of the user13:41
ianwClark[m]: that argument only does something if ensure_pip_from_upstream=True.  i agree that this never installed anything but "python3-venv".  making ensure_pip_from_upstream_interpreters do anything without ensure_pip_from_upstream=True just ... doesn't make much sense13:43
Clark[m]frickler: yes because I know that user is on matrix, but when I highlighted you I used your IRC nick instead. It's mental overhead and I'm not sure there is a single best approach but I try13:44
Clark[m]ianw: I agree it should change, but it was added by users and I worry if we just change it they will break. Mostly just saying we should fix it in the more graceful method we try to use with zuul-jobs sending a warning and leaving some time before the change happens13:46
Clark[m]ianw: I wonder if https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip/tasks/main.yaml#L79 is why it was set in that role13:50
ianwi'm sure there is context ... sorry, it's getting a bit late for me to dig in.  just we're committing "fixes" to this to "fix" ... what exactly?  i don't think we know if the fix we're committing breaks anyone either, as it's not documented13:53
ianwthis is why i feel like it's a grey area for deprecation.  i agree with the overall principle of not pulling things out of zuul-jobs.  but this doesn't feel like a working feature we're retiring 13:54
Clark[m]Ya I think ykarel's change should maybe shift to ensure-python and install the -venv package for the requested versions there. Then separately deprecate and remove the block you are trying to remove13:54
ianwbut as i said, also not going to argue if it gets rebased, etc.13:54
ianwhttps://pagure.io/centos-infra/issue/814 has had a couple of updates on the corrupt mirror package13:54
Clark[m]And ya it's late Friday for you. I don't think this is urgent enough to demand attention right now :)13:55
ianwthis has vibes of people pulling packages on pypi.  in the same way you can't pull a package and have to release a new one instead, you can't just replace a package13:55
ianwyes, turning in now.  just saw Penn & Teller, was a good show!13:56
ianwand only about 3 years or so after they announced they were coming to .au too :)13:56
Clark[m]Wow, sounds like fun!13:58
Clark[m]ykarel: another option on the consuming job side is to update your venv command to be python3 -m venv -p python3.x to select the right python version that way13:59
Clark[m]ykarel: that might be the easiest thing right now while we sort out zuul-jobs?13:59
* ykarel just back, reading ^14:00
*** dviroel is now known as dviroel|pto14:03
*** ysandeep is now known as ysandeep|out14:04
opendevreviewMerged openstack/project-config master: Add github svinota/pyroute2 to project list  https://review.opendev.org/c/openstack/project-config/+/84636414:05
ykarelClark[m], if i try that locally it says venv: error: unrecognized arguments: -p14:06
ykarelon ubuntu focal14:06
ykarelClark[m], just to fix on job side we can move https://opendev.org/openstack/project-config/src/branch/master/playbooks/proposal/pre.yaml#L814:10
ykarelto job definition, and override that in translation-update child jobs14:10
ykarelif that works for you?14:10
Clark[m]ykarel: it may need to be the virtualenv command to use the python version selector flag. But I still think that is a good approach. Basically just use what is there to configure a virtualenv how you want it14:16
Clark[m]Since I think we have those tools available it's just the interconnected roles and vars that aren't quite right14:17
ykarelClark[m], okk that should work if virtualenv is installed14:19
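
The distinction that tripped this up: stdlib venv has no interpreter selector, you pick the version by which python you invoke, while virtualenv takes one as a flag. A quick sketch:

    # stdlib venv: the invoking interpreter determines the version
    python3.8 -m venv /tmp/v38
    # virtualenv: the interpreter is selected with -p (or --python)
    virtualenv -p python3.8 /tmp/v38
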
* ykarel prepares patch14:19
opendevreviewyatin proposed openstack/project-config master: Use virtualenv in translation update jobs  https://review.opendev.org/c/openstack/project-config/+/84639014:26
*** pojadhav|ruck is now known as pojadhav|out14:26
ykarelClark[m], fungi done ^14:27
ykarelcan test in https://review.opendev.org/c/openstack/neutron/+/837454 once merged14:27
ykarelsince these are trusted jobs it's not easy to test them before merging14:28
Clark[m]Lgtm. I'm about to do a school run so can't +2 properly. Will do that when I get back if it hasn't landed yet14:29
ykarelsure frickler if you can please ^14:33
*** marios is now known as marios|out15:25
opendevreviewMerged openstack/project-config master: Use virtualenv in translation update jobs  https://review.opendev.org/c/openstack/project-config/+/84639015:44
opendevreviewMerged zuul/zuul-jobs master: Add the post-reboot-tasks role  https://review.opendev.org/c/zuul/zuul-jobs/+/84470415:46
clarkbtwo more zuul executors to restart, then the mergers then the schedulers16:03
clarkbthis seems slower than the last few times. I guess it is very dependent on load16:03
*** jpena is now known as jpena|off16:15
clarkbok that still fails because apparently virtualenv isn't installed. I thought it was, but we can fix that pretty easily16:25
clarkband my local git-review install is sad (I think because python just updated); as soon as that is fixed I'll push up a thing for ^16:28
fungiwe're on to ze12 now16:29
opendevreviewClark Boylan proposed openstack/project-config master: Install virtualenv in proposal jobs  https://review.opendev.org/c/openstack/project-config/+/84642716:30
clarkbfwiw tumbleweed converted python3 from 3.8 to 3.10 which broke my old venvs16:31
clarkbI've created new venvs using python3.10 -m venv instead of python3 -m venv, which means when we update to 3.11 I should avoid this problem16:31
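
A sketch of that pattern, with the venv path assumed for illustration:

    # pinning the interpreter at creation time means a distro bump of the
    # python3 default (3.8 -> 3.10) no longer invalidates the venv
    python3.10 -m venv ~/.venvs/tools
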
fungiyeah, one up-side to building local copies of python from source is that i get to decide when i upgrade the default one which i build my venvs from, and have scripted the rebuild of all those so it's fairly painless16:34
fungithat and also i'm not limited to the python versions my distro happens to have decided to provide16:35
fungiwhich means i'm currently able to easily test with 3.11.0b3 even16:36
clarkbthe fix is fine, it's just jarring when you get errors like "this package doesn't exist" running a command you run every day16:36
clarkbI just make a new venv and install git-review, reno, docker-compose, and tox16:37
fungioh, i also use separate venvs for such things16:38
fungithough it probably doesn't buy me much16:38
fungii have a dozen different python tools installed into individual venvs and their entrypoints symlinked in my ~/bin16:39
fungimainly worried that they'll start conflicting over deps if i were to dump them into a single one16:40
fungiespecially since one of those is python-openstackclient16:40
clarkbya I've thought about that but so far they don't really care.16:40
clarkbI think because they are so distinct. If I started mixing in a bunch of tools that have overlapping deps it would be worse16:40
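
A minimal sketch of the per-tool-venv pattern fungi describes; the paths and tool list are assumptions:

    # one venv per tool, entrypoint symlinked into ~/bin, so tools with
    # heavy or conflicting deps never share an environment
    for tool in git-review reno tox; do
        python3 -m venv ~/.venvs/"$tool"
        ~/.venvs/"$tool"/bin/pip install "$tool"
        ln -sf ~/.venvs/"$tool"/bin/"$tool" ~/bin/"$tool"
    done
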
corvusaccording to docker inspect, the image that nl01 is running was based on "org.zuul-ci.change_url": "https://review.opendev.org/846220"17:47
corvusthat corresponds to the latest commit on master17:47
fungiyay!17:47
corvusso i think we've been running master since the last restart17:47
corvus(yesterday)17:47
clarkbyup when I checked yesterday it looked like we expect it to17:48
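
For reference, the label check corvus describes can be done directly with an inspect format string (the container/image name is a placeholder; the label name is from the log):

    # read the zuul change_url label from a container's image config
    docker inspect --format '{{ index .Config.Labels "org.zuul-ci.change_url" }}' <container-or-image>
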
corvuswhat's the story with zuul restarts?17:49
corvuslooks like we're still crawling through executors?17:50
clarkbcorvus: yup almost done with the executors now17:50
corvuscool.  i think we're ready for a nodepool release, but probably want to wait for this cycle to finish for zuul17:50
fungiprobably in the next 2-3 hours we'll be done17:56
clarkbfrickler: what's the story with mariadb and jammy and focal? Wondering if we should hold off on the upgrade of mariadb on review.o.o, which will update from 10.4 to 10.6 using the upstream mariadb image with a container host on focal17:57
clarkbseems like this is a ubuntu packaging specific problem so we are probably fine as we consume the image from the upstream mariadb image and not ubuntu packaging17:58
fricklerclarkb: the issue happens when running mariadb in a jammy container on a focal host, because of the older kernel it seems18:10
Clark[m]Hrm, maybe that is a problem for us then, using the upstream mariadb image too?18:11
fricklerif upstream uses jammy then yes18:11
fungiwe're on 1/2 scheduler starts now18:22
fungiand the second scheduler is starting up18:37
Clark[m]frickler I think it may be debian based. Is it specific to jammy then? That is the part I don't understand. If it is just the newer mariadb then we have problems. But if it is specific to jammy I think we are ok18:54
fricklerClark[m]: it seems to be related to liburing2 vs. liburing1, the former is new to jammy https://jira.mariadb.org/browse/MDEV-2839719:01
* frickler eods19:01
fungiand all done!19:03
fungireal    1637m43.306s19:03
fungiso 27h18m19:05
clarkbhttps://github.com/MariaDB/mariadb-docker/blob/c0d07be9ad5eb3bc212c6805cc8031308c56e9b6/10.6/Dockerfile the mariadb 10.6 docker image is based on focal so we should be fine19:20
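
A quick way to confirm that without reading the Dockerfile, assuming the image's entrypoint passes arbitrary commands through:

    # the image's os-release reports the Ubuntu codename it was built on
    docker run --rm mariadb:10.6 grep CODENAME /etc/os-release
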
