Monday, 2023-12-18

00:14 <opendevreview> Ian Wienand proposed openstack/diskimage-builder master: python3.12: "fix" unittests  https://review.opendev.org/c/openstack/diskimage-builder/+/903848
00:17 <tonyb> ianw: Wow, that's some "magic" :/
00:18 <ianw> tonyb: i was too scared to go through "git blame" in case i originally wrote it ... i don't think i did, but can't be sure! :)
00:19 <ianw> it's such a weird situation, pulling in a file from totally outside the python hierarchy that usually gets copied to a chroot environment /usr/local/bin for unit-testing
00:21 <tonyb> Yeah it is a little "special"
00:22 <tonyb> ianw: I did the git blame ... you're off the hook
00:51 <fungi> we should probably gracefully restart the zuul schedulers onto the latest images, since we're going to have an even more massive backlog in the openstack tenant from all the extra periodic builds otherwise
00:51 <fungi> once normal gerrit activity starts back up tomorrow it will get gnarly
00:51 <tonyb> I can do that.
00:51 <tonyb> gimme say 30 (mins)
00:52 <fungi> thanks! no rush.
00:52 <fungi> i'll try to keep an eye on irc in case you get stuck
00:52 <fungi> 903818 merged to fix it and so the new images are basically what we've been running plus that fix
00:53 <tonyb> I'm unsure about the gitea update after the haproxy stuff yesterday
00:53 <fungi> yeah, i'd be inclined to wait on the gitea upgrade so we can regroup
00:53 <tonyb> Kk
01:31 <tonyb> Before restarting the zuul-schedulers I want to confirm:
01:33 <tonyb> I'm basically doing docker-compose pull; docker-compose down; docker-compose up -d in /etc/zuul-scheduler on zuul*; ensuring that a) I do them in series and there's always one available; and b) that quay.io/zuul-ci/zuul-scheduler:latest does indeed contain 903818
01:35 <tonyb> I have confirmed "b"
01:36 <tonyb> tony@thor:~$ podman inspect quay.io/zuul-ci/zuul-scheduler:latest | jq '.[0].Config.Labels'
01:36 <tonyb> {
01:36 <tonyb>   "org.zuul-ci.change": "903818",
01:36 <tonyb>   "org.zuul-ci.change_url": "https://review.opendev.org/903818"
01:36 <tonyb> }
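[Editor's note: tonyb's restart plan can be sketched as a small script. This is a hypothetical illustration, not an OpenDev playbook: the zuul01/zuul02 hostnames and the /etc/zuul-scheduler compose directory come from the conversation above, while the RUN dry-run wrapper and the function name are invented here.]

```shell
#!/bin/sh
# Hypothetical rolling-restart sketch for the zuul schedulers.
# RUN is a dry-run wrapper added for this sketch: it defaults to "echo"
# so the commands are only printed; set RUN="" to execute for real.
RUN="${RUN:-echo}"

restart_scheduler() {
  host="$1"
  # Pull the new image first, then bounce this scheduler's containers.
  $RUN ssh "$host" "cd /etc/zuul-scheduler && docker-compose pull && docker-compose down && docker-compose up -d"
}

# Strictly in series, so one scheduler is always available; between hosts,
# confirm the restarted one is up on https://zuul.opendev.org/components
for host in zuul01 zuul02; do
  restart_scheduler "$host"
done
```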
02:41 <ianw> tonyb: yes, if you just want to restart the schedulers that looks about right.  if you want to restart everything there is a playbook
02:42 <tonyb> ianw: Thanks
03:30 <tonyb> infra-root: I have restarted the zuul-scheduler on zuul{01,02}. Things look okay but I'll continue to monitor
03:41 <Clark[m]> The zuul components listing is really helpful for ensuring one is always available
03:43 <tonyb> Clark[m]: where is that?
03:46 <tonyb> Clark[m]: Oh: https://zuul.opendev.org/components
03:48 <Clark[m]> You're faster than I am. And ya that should show the version and status of the components
03:48 <tonyb> Looks good to me.
04:03 <tonyb> I'm not seeing new jobs enqueued into periodic or periodic-stable as we tick over the hour
04:07 <Clark[m]> There is a small jitter. But also I think it is supposed to see things are already enqueued from before and not reenqueue?
04:08 <tonyb> and the number of items in those pipelines is decreasing
04:12 <tonyb> https://grafana.opendev.org/d/21a6e53ea4/zuul-status?orgId=1&from=1702868100000&to=now&refresh=1m&viewPanel=17 looks better
04:18 <tonyb> Actually I'm seeing, or rather not seeing, any metrics in "Zuul Jobs Launched (per Hour)" or "Gerrit Events (per Hour)" since 0400
04:18 <tonyb> https://grafana.opendev.org/d/21a6e53ea4/zuul-status?orgId=1&refresh=1m&viewPanel=16
04:18 <tonyb> that seems suspicious
04:23 <Clark[m]> 903075 seems newer and has jobs running
04:26 <tonyb> Okay.  That's good :)
09:39 <zigo> Hi there! Is this a known issue? https://6dca5728c40d535db466-4fcaafdedb24be0c657932ab646595c9.ssl.cf2.rackcdn.com/903860/1/check/openstack-tox-pep8/4125f32/job-output.txt (this looks unrelated to my patch...)
09:48 <frickler> zigo: the job is passing, are you referring to the bandit error messages?
09:48 <zigo> Yeah.
09:48 <zigo> frickler: BTW, thanks a lot for the SQLAlchemy hint you gave me, this was super helpful ! :)
09:54 <frickler> zigo: yw. moving the bandit discussion to #openstack-sdks since according to opensearch it only affects keystoneauth
12:38 <scoopex> I don't quite understand how to deal with Gerrit :-) I created a fork of "kolla-ansible" on Github and created a change on Gerrit.  (see https://review.opendev.org/c/openstack/kolla-ansible/+/900528) After receiving feedback, I modified the commit on my branch using "--amend".
12:38 <scoopex> Now I have two patch sets :-( How can I fix this? Where can I read how to do this better?
13:03 <fungi> scoopex: gerrit has nothing to do with github, you don't need to create a fork of anything anywhere. clone the repository to your system, optionally make a topic branch off the branch you want to submit a change for, make and commit the edits you want, then push that commit to a review remote on gerrit (the git-review tool is recommended for this last step so you don't have to work out the remote syntax yourself). to update the change, make more edits and commit --amend so it's changing your previously submitted commit, make sure not to remove or alter the change-id footer in the commit message, and then push the commit again (ideally with the git-review tool)
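[Editor's note: fungi's workflow can be sketched against a throwaway local repository. Everything Gerrit-specific is hedged: the `git review` pushes are shown only as comments (they need a configured Gerrit remote), and the Change-Id value is a fake placeholder standing in for the one git-review's commit hook would normally generate.]

```shell
# Throwaway repo standing in for a clone of the real project.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo base > file.txt
git add file.txt
git commit -qm "Initial commit"

# 1. Optionally make a topic branch off the branch you want to change.
git checkout -qb fix-something

# 2. Edit and commit. git-review's hook would add the Change-Id footer;
#    a fake one is written by hand here so the amend step is faithful.
echo fix > file.txt
git commit -qam "Fix something

Change-Id: I0123456789abcdef0123456789abcdef01234567"
# git review    # would push patchset 1 to gerrit

# 3. Address review feedback by AMENDING the same commit (keeping the
#    Change-Id footer intact), never by stacking a new commit on top.
echo better-fix > file.txt
git commit -qa --amend --no-edit
# git review    # would push patchset 2 of the SAME change
```

Because the Change-Id footer survives the amend, Gerrit recognizes the second push as a new patchset of the existing change rather than a brand-new change.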
13:04 <fungi> we have a chapter in our infra-manual about the various aspects of it here: https://docs.opendev.org/opendev/infra-manual/latest/gettingstarted.html
13:08 <fungi> since you've pushed changes to multiple branches, it's not clear to me what the intent was. 900528 is a change for the master branch, you've also pushed change 900522 for the stable/2023.1 branch, and then another change for stable/2023.1 (900057) which you abandoned
13:11 <fungi> 900522 lacks the typical cherry-pick identifier so i'm not sure if it was the result of trying to backport 900528 for stable/2023.1 or you somehow accidentally pushed a revision for the wrong branch
13:13 <fungi> aha, i see gerrit identified 900528 (for master) as a cherry-pick of 900522 (for stable/2023.1), so were you trying to forward-port your stable/2023.1 change to master?
13:16 <fungi> i think the only way gerrit would know it was a cherry-pick is if you used the "cherry pick" button in the gerrit webui. i've never tried that personally, i usually just git cherry-pick locally between branches so that the commit message includes the "cherry picked from commit ..." line that you get with the -x option
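[Editor's note: a minimal sketch of the local `git cherry-pick -x` approach fungi describes, using a throwaway repository; the file name and commit messages are invented for illustration.]

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo base > f
git add f
git commit -qm "base"
git branch -M master

# A fix landed on the stable branch first, mirroring scoopex's situation.
git checkout -qb stable/2023.1
echo fix >> f
git commit -qam "Fix the bug"
fix_sha="$(git rev-parse HEAD)"

# Forward-port it to master; -x appends the
# "(cherry picked from commit ...)" line fungi mentions.
git checkout -q master
git cherry-pick -x "$fix_sha"
git log -1 --format=%B
```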
13:17 <fungi> anyway, in most openstack projects, they want changes reviewed for master first, and then backported to stable branches, not reviewed on a stable branch and then forward-ported to master
13:17 <fungi> but it's probably best to talk to the kolla maintainers in the #openstack-kolla channel if you're not sure
13:19 <fungi> in addition to the infra-manual, the openstack project has its own contributor documentation here: https://docs.openstack.org/contributors/ (you probably want the code and documentation contributor guide)
13:20 <fungi> also in the repository you're trying to contribute to, you'll find a prominently named file with some quick links: https://opendev.org/openstack/kolla-ansible/src/branch/master/CONTRIBUTING.rst
13:21 <fungi> which includes a link to the kolla team's own contributor guide here: https://docs.openstack.org/kolla-ansible/latest/contributor/contributing.html
13:21 <fungi> hope that helps!
14:11 <Clark[m]> fungi: if it isn't too much trouble I'm wondering if you'd be willing to drive tomorrow's meeting and send out an agenda? I've spent the weekend managing a fever with drugs and it doesn't seem to be getting better any time soon. I'll still try to attend but best to get me out of the critical path
14:12 <fungi> oh i'll be happy to chair the meeting, no problem. hope your situation improves!
14:12 <fungi> will get the agenda out shortly, just have an errand i need to go run first
14:13 <Clark[m]> Thank you!
14:20 <scoopex> fungi: Many thanks for the elaborate answer! My orientation was https://docs.openstack.org/contributors/code-and-documentation/index.html and that didn't provide me answers for my problem area! My odyssey was: 1) I started to make my changes based on stable/2023.1, then 2) I realized that I had created multiple changesets because I performed a "git commit --amend" and not "git review" (which seems to be the right choice). 3) Therefore I started a new change :-) 4) After that I got the information that such changes are more suitable for the master branch and I executed a "rebase". Then I got reviews and accidentally created new changesets while fixing them... The diagram from https://docs.opendev.org/opendev/infra-manual/latest/gettingstarted.html seems to be the missing piece of information...
16:05 <corvus> i'm restarting zuul-web
16:11 <corvus> #status log restarted zuul-web to match zuul-scheduler (zuul-scheduler was previously restarted on 2023-12-17)
16:11 <opendevstatus> corvus: finished logging
16:14 <fungi> meeting agenda for tomorrow is sent to the ml. heading out for lunch but should be back in an hour or so
22:15 <tonyb> corvus: for future reference, should I have done that ... restart zuul-{web,fingergw} when I did zuul-scheduler?
22:18 <corvus> tonyb: probably not important; but because zuul-web shares a lot of scheduler code, whenever we change scheduler-like things i like to update both.  this change *shouldn't* affect web, but i didn't think about it enough to be sure.  typically if i'm restarting the scheduler, i'd restart web as well because it's fast and easier than thinking about it.  if i'm restarting web for some specific web thing, i wouldn't necessarily restart the scheduler.
22:19 <tonyb> corvus: Thanks.  Noted.
22:22 <corvus> (consider my action today more like satisfying my ocd than any operational imperative :)
22:22 <tonyb> LOL, sure.
22:23 <fungi> yeah, i thought about suggesting it. in this case the new image only had one change beyond what everything was running, and it only altered how timer trigger timespecs were parsed, but you're right even then it's possible some zuul-web feature might have cared that they're parsed the same as on the schedulers
22:25 <tonyb> On https://zuul.opendev.org/components I see "9.3.1.dev8 da639bb57" as the version for z{e,m}* but if I 'git -C ~/src/opendev.org/zuul/zuul show da639bb57' I get an error.  So where does that, I assume SHA, come from?
22:26 <tonyb> for web and scheduler I see `9.3.1.dev9 5087f00ac` and that is a valid SHA
22:26 <fungi> tonyb: from zuul. since zuul builds the images in the gate pipeline, the change hasn't been merged by gerrit yet, so that's the speculative merge zuul created
22:26 <fungi> we've talked about ways to backmap those from the final branch commits
22:27 <tonyb> Ah okay.
22:27 <fungi> in cases where there is a merge commit, you'll see the disconnect you described
22:27 <fungi> in cases where a fast-forward merge is possible, it's built from the actual change commit, so will match what ends up on the branch
22:28 <fungi> 5087f00ac was a brown-bag regression fix that got fast-tracked yesterday, so there were no other changes merged between when i created it from the branch and when it got approved
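[Editor's note: fungi's fast-forward versus merge-commit point can be demonstrated with a throwaway repository: after a fast-forward the branch tip IS the change commit, while a forced merge commit produces a brand-new SHA that, like Zuul's speculative gate merges, never appears on the branch as built. Repo, branch, and file names are invented for illustration.]

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"
echo base > f
git add f
git commit -qm "base"
git branch -M master

# A change commit, standing in for one built in zuul's gate pipeline.
git checkout -qb change
echo fix >> f
git commit -qam "the change"
change_sha="$(git rev-parse HEAD)"

# Fast-forward case: the branch tip becomes the change commit itself.
git checkout -q master
git merge -q --ff-only change
test "$(git rev-parse HEAD)" = "$change_sha" && echo "ff: SHAs match"

# Merge-commit case: the result is a brand-new SHA, so an artifact built
# from a (speculative) merge commit matches nothing on the branch.
git reset -q --hard HEAD~1
git merge -q --no-ff -m "merge the change" change
test "$(git rev-parse HEAD)" != "$change_sha" && echo "merge: new SHA"
```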
22:29 <tonyb> Got it, and it makes sense.  I verified that the images I pulled had the "right" information about changes and SHAs in the metadata
22:30 <fungi> in the past we've also talked about zuul growing the logic to push merges into gerrit itself. if that were to happen, it could also eliminate the disconnect
22:30 <fungi> since merge commits created in the gate pipeline would be the actual merge commits that end up in the branch later
22:31 <tonyb> That'd be nice.  Clearly not on the critical path
23:50 <JayF> fungi: re: the mail to the list (about ML == forum now); I wanted to tell you -- that's how itamarst was interacting with the Eventlet thread, and I've already personally known *3* people who signed up for the mailing list (either with digest on or emails off) when they realized they could interact and reply without having to have their inbox crunchinated.
23:51 <JayF> fungi: did you all do any comms on openinfra live about this? I don't wanna thunder-steal, but if nobody infra side has done it or is willing to, we should try and get this evangelized more; it's a huge boon for accessibility+modernization of our tools :D

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!