Friday, 2021-02-26

openstackgerritMerged zuul/nodepool master: Log error message on request handler removal on session loss  https://review.opendev.org/c/zuul/nodepool/+/77768900:08
*** tosky has quit IRC00:12
*** ianychoi__ is now known as ianychoi00:34
clarkbfungi: well, and when we switched to using docker containers, the playbooks got updated to do docker-compose down, which just sends a sigint and the process dies immediately00:35
clarkbprior to that we used stop which is less graceful than graceful but more graceful than what docker-compose down does00:35
corvusmostly the difference is when the build directory cleanup happens; stop=as the process shuts down; docker-compose down=when we start it up again00:39
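As a rough illustration of the ordering discussed above, a graceful restart could look something like the following Ansible tasks; the compose directory, service name, and exact commands are assumptions for the sketch, not opendev's actual playbook:

```
- name: Ask the executor to finish running builds and exit
  command: docker-compose exec -T executor zuul-executor graceful
  args:
    chdir: /etc/zuul-executor  # assumed compose directory

# A real playbook would wait here for the container to exit on its own;
# only then does `down` clean up without killing builds mid-flight.
- name: Remove the stopped container once builds have drained
  command: docker-compose down
  args:
    chdir: /etc/zuul-executor
```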
*** wuchunyang has joined #zuul00:54
*** wuchunyang has quit IRC00:58
*** hamalq has quit IRC01:10
*** ikhan has quit IRC01:48
*** jhesketh_ is now known as jhesketh01:49
*** saneax has joined #zuul02:42
*** saneax has quit IRC02:43
*** EmilienM has joined #zuul02:46
*** ricolin has joined #zuul03:15
*** zbr has joined #zuul03:50
*** tflink_ is now known as tflink03:56
*** dmsimard has quit IRC04:08
*** dmsimard has joined #zuul04:09
*** jfoufas1 has joined #zuul05:20
*** evrardjp_ has quit IRC05:33
*** evrardjp has joined #zuul05:33
openstackgerritMerged zuul/zuul master: Document zuul-executor graceful  https://review.opendev.org/c/zuul/zuul/+/77769905:50
tobiashcorvus: regarding the failing nodepool-functional-k8s: the problem is not nodepool but the pod fails to start with backoff pulling image06:59
tobiashthere is not much more information beyond that the pod cannot pull docker.io/fedora:2807:02
tobiashpresumably due to rate limiting07:02
openstackgerritTobias Henkel proposed zuul/nodepool master: Use different image in nodepool-functional-k8s  https://review.opendev.org/c/zuul/nodepool/+/77771907:09
openstackgerritTobias Henkel proposed zuul/nodepool master: Make nodepool-functional-k8s more docker registry friendly  https://review.opendev.org/c/zuul/nodepool/+/77772107:16
openstackgerritSimon Westphahl proposed zuul/zuul master: Add optional support for circular dependencies  https://review.opendev.org/c/zuul/zuul/+/68535407:21
openstackgerritSimon Westphahl proposed zuul/zuul master: Allow refreshing changes in canMerge() check  https://review.opendev.org/c/zuul/zuul/+/76708407:21
openstackgerritSimon Westphahl proposed zuul/zuul master: Check cycle items are mergeable before reporting  https://review.opendev.org/c/zuul/zuul/+/74345007:21
openstackgerritSimon Westphahl proposed zuul/zuul master: Make reporting asynchronous  https://review.opendev.org/c/zuul/zuul/+/69125307:21
*** sassyn has joined #zuul07:21
sassynHi All, Good Morning. Happy Friday....07:22
sassynI have an issue with my Zuul system which I can't find a solution for. Sometimes when a job gets into Zuul, I can see that the web UI gets the job. Clicking on the job also shows the black web console. However the job only starts after a few minutes. So for example the first line in the web console log is 10:00 and the job only starts at 10:06. Why?07:24
sassynany idea?07:24
*** icey_ is now known as icey07:43
*** yoctozepto5 has joined #zuul08:26
*** yoctozepto has quit IRC08:28
*** yoctozepto5 is now known as yoctozepto08:28
*** yoctozepto6 has joined #zuul08:32
*** yoctozepto has quit IRC08:33
*** yoctozepto6 is now known as yoctozepto08:33
openstackgerritSimon Westphahl proposed zuul/zuul master: Remove superfluous flushes and queries from SQL reporter  https://review.opendev.org/c/zuul/zuul/+/75266408:36
avasssassyn: I think that could be a delay between when an executor accepts a job and it actually starts the job or something along those lines08:44
avasssassyn: so the executor would still need to clone and setup everything before it will start running ansible and show any output08:44
openstackgerritSimon Westphahl proposed zuul/zuul master: Report executor stats per zone  https://review.opendev.org/c/zuul/zuul/+/74044808:58
*** harrymichal has joined #zuul09:06
*** samccann has quit IRC09:37
*** samccann has joined #zuul09:37
*** SaifAddin has joined #zuul09:41
SaifAddinHi, got a ci/cd zuul job running that starts on a server using .yaml files. It is running an Ansible job with some specific Ansible inventory that cannot have network defaults. Is there a way to add, through YAML, a file with a list of variables to be used for the job?09:44
SaifAddinnote: I can't run zuul from the command line as it runs from the server. From the command line this could be addressed with extra vars via the -e option (passing a file), but I can't do that here. Is there any equivalent from YAML?09:44
tobiashsassyn, avass: that depends on how long the executor takes to prepare all the repos it needs into the job workspace.09:55
tobiashsassyn: also, how many executors do you have and which storage backend?09:55
tobiashalso depending on available ram etc executors pause themselves to prevent overloading. In this case job requests get queued if all executors pause themselves09:56
tobiashsassyn: you can see such a behavior e.g. here: https://grafana.opendev.org/d/5Imot6EMk/zuul-status?orgId=1&from=1614312602527&to=161432755354709:57
tobiashlook at the graphs 'executors' and 'executor queue'09:58
avassSaifAddin: you can load variables during runtime instead, that could be easier.10:01
SaifAddinThanks avass but it is a long list of variables (with hostnames, ports, usernames), and don't want to put the entire list as extra-vars keywords10:02
SaifAddinis it possible somehow to pass an entire .yaml file as extra vars?10:02
avassSaifAddin: I mean load the file with ansible with include_vars or similar10:11
avassbut there's no way to point zuul to a yaml file with variables it should use afaik10:13
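A minimal sketch of what avass is suggesting, assuming the vars file is committed to the project being checked out; the path and play layout are hypothetical:

```
- hosts: all
  tasks:
    # Load a yaml file of variables at runtime, instead of trying to
    # point zuul itself at a vars file (which it doesn't support).
    - name: Load job variables from a file in the repo
      include_vars:
        file: vars/job-vars.yaml  # hypothetical path
```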
SaifAddinoh:(  I wish that worked through zuul10:15
*** mugsie_ is now known as mugsie10:16
*** jangutter has joined #zuul10:37
*** jangutter_ has quit IRC10:39
*** jangutter_ has joined #zuul10:40
*** jangutte_ has joined #zuul10:43
*** jangutter has quit IRC10:44
*** jangutter_ has quit IRC10:45
*** yoctozepto8 has joined #zuul10:47
*** yoctozepto has quit IRC10:47
*** yoctozepto8 is now known as yoctozepto10:47
*** yoctozepto1 has joined #zuul10:54
*** yoctozepto has quit IRC10:55
*** yoctozepto1 is now known as yoctozepto10:55
*** iurygregory has quit IRC11:13
*** tosky has joined #zuul11:25
*** yoctozepto2 has joined #zuul11:32
*** yoctozepto has quit IRC11:33
*** yoctozepto2 is now known as yoctozepto11:33
*** harrymichal has quit IRC12:46
*** yoctozepto4 has joined #zuul12:46
*** harrymichal has joined #zuul12:46
*** yoctozepto has quit IRC12:46
*** yoctozepto4 is now known as yoctozepto12:46
*** jangutte_ is now known as jangutter13:17
*** zbr9 has joined #zuul13:20
*** zbr has quit IRC13:22
*** zbr9 is now known as zbr13:22
*** zbr0 has joined #zuul13:43
*** zbr has quit IRC13:46
*** zbr0 is now known as zbr13:46
*** iurygregory has joined #zuul13:49
*** jangutter_ has joined #zuul13:55
*** jangutter has quit IRC13:59
*** zbr3 has joined #zuul14:03
*** zbr has quit IRC14:05
*** zbr3 is now known as zbr14:05
*** zbr7 has joined #zuul14:08
*** zbr has quit IRC14:10
*** zbr7 is now known as zbr14:10
fungizuul is designed to get variables from basically two sources: static inclusion in job configuration, and dynamically through pipeline triggers14:32
fungithough the ansible which zuul runs can get variables set through additional means14:33
*** zbr4 has joined #zuul14:33
fungifor example a pre-run playbook could fetch them based on some heuristic and set them in cached facts for other playbooks to consume in the same build14:34
*** zbr has quit IRC14:35
*** zbr4 is now known as zbr14:35
SaifAddinThanks fungi, any leads on how to set the cached facts? I was trying with a pre-run playbook but I think the variables are not passing to the next job. How can I put them in this "cached fact"?14:37
fungiSaifAddin: here's an example: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-tox/tasks/main.yaml#L16-L1814:47
fungiuse set_fact and make cacheable true in the fact14:47
fungiit's a basic ansible feature14:48
fungithe executor should maintain a fact cache, and provide those facts to later playbooks within the same build14:48
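Condensed from the ensure-tox example fungi linked, the pattern looks roughly like this; the fact name and value are illustrative:

```
# In a pre-run playbook task:
- name: Record values for later playbooks in the same build
  set_fact:
    my_job_setting: some-value  # illustrative fact name and value
    cacheable: true             # a parameter of set_fact, not a fact itself
```

Later playbooks in the same build can then read it back as a normal variable, e.g. `{{ my_job_setting }}`.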
*** zbr4 has joined #zuul14:52
*** zbr has quit IRC14:53
*** zbr4 is now known as zbr14:53
SaifAddinThanks fungi , I'll be trying this. Totally skipped it. I hope that works.15:04
*** zbr5 has joined #zuul15:12
*** zbr has quit IRC15:14
*** zbr5 is now known as zbr15:14
fungiif anyone's interested in providing input, the cd foundation's interoperability sig has been discussing "policy-driven ci/cd" and is looking for more case studies/examples: https://lists.cd.foundation/g/sig-interoperability/topic/policy_driven_ci_cd/80929157?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,8092915715:17
SaifAddinSo far no luck. Still getting undefined variables15:17
SaifAddinFollowing kind of that example15:17
fungiSaifAddin: are you accessing them as facts?15:17
SaifAddin:O15:17
SaifAddinnormal {{ .... }}15:18
SaifAddinahh got it15:19
*** tosky has quit IRC15:19
SaifAddinmmh, unfortunately not feasible. I don't want to change all the variable accesses in the project to fetch from the cache15:19
*** tosky has joined #zuul15:20
corvuscached facts are accessed as normal ansible variables15:21
SaifAddinBy what I read in the docs, fact variables are accessed as `{{ ansible_facts['nodename'] }}`15:22
fungiwell, i meant you weren't trying to access them via the zuul variable namespace or something15:22
fungiyou just access them by name as a top level variable15:23
fungiif you're setting fact "foo" cacheable in a playbook and then trying to refer to {{ foo }} in another later playbook and it's acting like it's not set, then maybe something is wrong with the fact caching on the executor15:25
corvusSaifAddin: in the test you're doing, is that using zuul, or are you running ansible manually/locally?15:26
SaifAddinWhat I did was create a playbook that calls a role. In the role I put it like:15:26
SaifAddin```15:26
SaifAddin- name: setting_cache_variables15:26
SaifAddin  set_fact:15:26
SaifAddin    ansible_connection: local15:26
SaifAddin    other_variables_foo: bar15:26
SaifAddin    cacheable: true15:26
SaifAddin```15:26
SaifAddinAnd I ran this playbook in a pre-run.15:27
SaifAddinFrom the actual run, at some point in one of the tasks I did {{ ansible_connection }} but then I got that ansible_connection was undefined15:27
SaifAddinMy test runs through zuul. With Ansible this would work with just extra vars -e or something like that.15:27
SaifAddinThe problem is that I need to pass extra variables to the Ansible job15:28
SaifAddin(and they are too many and don't want to list them as extra vars individually, but have them in a file)15:28
SaifAddinEDIT: I think I had messed something up. But I think this actually works (checking...)15:30
*** zbr8 has joined #zuul15:30
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776115:31
*** zbr has quit IRC15:33
*** zbr8 is now known as zbr15:33
*** jangutter has joined #zuul15:36
corvusSaifAddin: that should work for "other_variables_foo".  but it may not work for ansible_connection -- i don't think ansible will override an inventory variable from a fact cache.15:39
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776115:40
SaifAddinAlright, will keep that in mind!15:40
*** jangutter_ has quit IRC15:40
SaifAddinThat is actually fine for me, I did not want to override inventory variables15:40
SaifAddinExtra question (thanks all btw). Couldn't we also consider include_vars (Ansible), coming from a variable defined in zuul (with the path to it)?15:41
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776115:42
corvusSaifAddin: yes -- i think avass recommended something like that15:46
SaifAddinGot it. I didn't realize that I could pass the path through zuul. That's my plan B now. Will see15:50
SaifAddinThanks!!15:50
corvusSaifAddin: ah yep.  with include_vars you can use a relative path, or if you need to construct an absolute path, you may find some of the zuul variables useful: https://zuul-ci.org/docs/zuul/reference/jobs.html#zuul-variables15:52
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776115:52
corvusSaifAddin: and you can do things like 'vars: {include_path: "{{ zuul.work_dir }}/path/to/file"}' in the job definition (ie, you can use jinja templating in the string, zuul will ignore that and pass it through to ansible to do the templating).15:53
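Putting corvus's two suggestions together, a hedged sketch; the job name and paths are made up, and the `zuul.work_dir` reference is copied verbatim from the suggestion above:

```
# (zuul config file) zuul passes the jinja string through untouched,
# so ansible expands it at runtime.
- job:
    name: my-deploy-job  # hypothetical job name
    vars:
      include_path: "{{ zuul.work_dir }}/path/to/file"

# (playbook file) one of the job's playbooks then consumes it:
- hosts: all
  tasks:
    - name: Load the extra variables for this job
      include_vars:
        file: "{{ include_path }}"
```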
SaifAddinPerfect, before plan B15:56
SaifAddinI am stuck on plan A. The documentation says it should be cacheable: yes. The example shared above said cacheable: true15:56
corvusSaifAddin: in yaml they are the same15:57
SaifAddinDang, then I still have variables not coming through xD. The set_fact + cacheable .. aghh15:57
*** harrymichal has quit IRC15:57
SaifAddinGood, making progress. Yep, starting to make sense16:01
*** jfoufas1 has quit IRC16:02
*** zbr2 has joined #zuul16:11
*** zbr has quit IRC16:13
*** zbr2 is now known as zbr16:13
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776116:15
*** zbr6 has joined #zuul16:15
clarkbguillaumec: note that zuul supports python3.6+ only16:16
clarkbguillaumec: you shouldn't need to support 3.516:17
corvusgear might though16:17
*** zbr has quit IRC16:17
*** zbr6 is now known as zbr16:17
clarkbyup, but I think if you always do PROTOCOL_TLS from the 3.6+ side that will work for gear on the remote16:18
clarkbbasically the zuul code doesn't need to fall back to handle python3.5-specific ssl lib oddities aiui16:19
clarkb3.5 can speak tls 1.0, 1.1, and 1.2. The weirdness is in the python itself to select the one you want16:19
*** SaifAddin has quit IRC16:24
*** icey has quit IRC16:33
*** icey has joined #zuul16:34
*** zbr has quit IRC16:38
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776116:51
avasstobiash, corvus: dockerhub's rate limits seem to be popping up now and then as a problem. maybe it would actually be worth it for opendev to host their own registry?16:52
avassthere's also the option of setting up a pull-through cache i suppose16:52
guillaumecclarkb, that's why: "-min python version is 3.6, only keep PROTOCOL_TLS", more explicit now16:53
fungiavass: we've talked about it. last time we looked the existing solutions weren't very flexible (e.g. downtime required to prune old entries, needing to maintain a list somewhere of every image which any job might use, still need something to do the copying of images from public registries, et cetera)16:54
clarkbbut also we don't want to be a registry of record. or at least i don't want to be. I think that presents its own set of unique challenges. I'd rather we continue to mirror/cache what is on an upstream registry16:55
fungimaybe with the rate limits starting to hit people harder, someone might have written/improved some options for self-hosted dockerhub mirrors16:55
fungibut yes, that's also a problem... the way our infrastructure is designed, if we set up a dockerhub proxy/mirror it's likely random people on the internet will start pulling images from us instead of dockerhub to get around the rate limits16:56
fungiso any solution we design would need to take that into account16:56
clarkbanother option is to try quay. They claim to not rate limit16:57
avassyeah that could indeed be a problem16:58
fungiare the images we need available from quay? i guess the one which came up today was the fedora image, so it's probably there16:58
clarkbfungi: I don't think the python ones are, but we could build our python-builder and python-base images and push them to quay using the bases found on docker hub periodically17:00
clarkbthen zuul and opendev images would just pull what we've pushed. (theoretically, I've not tested any of that)17:00
fungibut we probably also don't want users pulling our python-base image instead of the official one, in case there's a python vulnerability and we're behind building/fetching new python-base or something17:01
fungibetter would be to convince the upstream maintainers for our image dependencies to also upload to quay17:02
clarkbfungi: well that is how it works already17:02
clarkbif a user pulls zuul, they will pull our python-base, and the version of the python image that python-base is based on (not the latest python image)17:02
fungier, yeah sorry, i meant the python image17:03
clarkbwe wouldn't publish python directly, it would just be a layer under python-base17:03
fungiright, but we would want the python image to be published to quay by its maintainers, rather than us copying it into quay and ending up with users consuming that17:04
fungiin case we're behind on copying or whatever (not just zuul users, i'm talking about anyone who finds there's a copy of the python image on quay and decides relying on that's a good idea to get around dockerhub's rate limits)17:05
corvustobiash: ftr re the k8s nodepool job -- my additional concern was that nodepool threw null dereference exceptions during the pod launch failure17:05
clarkbright, what I'm saying is that we wouldn't copy it into quay as a separate thing. It would only be part of our images that we publish17:05
tobiashcorvus: oh, have to look again, I thought there were just the unclean shutdown messages17:06
fungiclarkb: but how does that solve our dockerhub rate limit problem then, if we need to obtain the python image from dockerhub?17:06
clarkbfungi: we don't, it would be fetched from quay, but not as a directly addressable layer aiui17:06
clarkbfungi: we would be doing a layer squash essentially17:07
corvustobiash: oh, i assumed not due to the NoneType exception being attached to the "pod failed" exception with "During handling of the above exception, another exception occurred:"17:07
fungiclarkb: but how does it get into quay for us to get it from there?17:07
clarkbfungi: we would publish python-builder and python-base to quay17:07
tobiashcorvus: afaik those exceptions are spammed in many test cases17:07
fungiclarkb: python-base needs the python image though, right?17:07
fungiwouldn't we still need to get the python image from somewhere?17:08
fungior are we building it from source?17:08
corvustobiash: hrm.  it's a bit hard to untangle.  :)17:08
tobiashafter shutdown of the zk connection due to dangling threads wanting to write to zk17:08
clarkbfungi: we are not building from source. The upload to quay should take the layers for the python image and include them in the upload to python-base/python-builder if they are not already there17:08
tobiashwhen running single tests locally I always see such exceptions17:08
corvusfungi: we rebuild python-base less frequently than zuul/nodepool.  so the problem exists, but the frequency is reduced.17:08
clarkbfungi: when we make that push to docker hub, docker hub says I have these layers already and it is a noop17:08
clarkbwhen we push to quay, quay will accept them and tie them to python-base/python-builder17:09
corvustobiash: yeah, i'm familiar with that, i just thought it was related, but maybe that's just an artifact of the call stack17:09
corvusfungi: in other words with clarkb's suggestion, we only need to be able to build python-base every X days in order to support building zuul every X minutes.17:10
corvuser Y.  those are independent variables :)17:10
fungiclarkb: yeah, what i was getting at is we still need to get the python image from somewhere, and if not from dockerhub then where are we getting it? but as corvus says maybe less of a problem because we don't do that step very often17:10
corvusis fedora on quay?17:11
tobiashcorvus: look at https://review.opendev.org/c/zuul/nodepool/+/777719 ;)17:11
fungicorvus: i assumed it might be due to the red hat connection, but i haven't looked17:11
corvustobiash: yes that is the change i was going to write, thanks :)17:12
corvustobiash: lol one of the functests failed17:12
tobiashthere is a followup that changes to busybox but I think I broke the config17:12
clarkbfungi: right we would only need to talk to docker hub when rebuilding python-base/python-builder17:13
corvustobiash: comment on 77772117:14
corvuswhat is this setuptools_rust thing?  https://zuul.opendev.org/t/zuul/build/2219649bf70845c698b93b1ee1edf54b17:14
clarkbcorvus: it doesn't seem like there are "official" images on quay.io but I may be missing them17:14
clarkbhowever, there is fedora/fedora which is maybe functionally the same thing17:15
corvusclarkb: https://review.opendev.org/777719 from tobiash looks pretty officialish?17:15
corvusyeah17:15
corvuslike, i doubt redhat is going to let anyone squat the fedora namespace on quay :)17:15
fungicorvus: pyca/cryptography recently added rust bindings, and needs setuptools_rust if being built from sdist17:15
clarkbwhen they publish new releases they tend to push the sdist first17:16
corvusfungi: why did that hit that error in that job?17:16
*** hamalq has joined #zuul17:16
clarkbcorvus: ^ because of that typically17:16
corvusfungi, clarkb: oh, so that was a race with a release?17:16
fungicorvus: that log shows trying to consume cryptography as an sdist, so it will end up wanting setuptools_rust which is defined in a pyproject.toml which needs fairly new pip to parse17:17
clarkbwe rely on their wheels for the jobs to function, but when they release they don't always push wheels first17:17
clarkbwe could add all the build deps to the jobs and have them handle the sdist scenario. Or possibly convince pip that using a wheel for the previous version is preferable to the sdist for latest17:17
fungihttps://pypi.org/project/cryptography/#history shows 3.4.6 was uploaded 10 days ago17:18
corvushrm.  we're looking at jobs that failed this morning17:18
fungiand that build was from today, yeah17:18
clarkbin that case I wonder if pypi's bad index caching is back17:18
clarkband it is serving us an index without the wheels in it17:18
clarkboh or we don't know how to use abi3 wheels17:19
fungithat ran on bionic17:19
fungiso probably older pip with no abi3 support, right17:19
clarkbcorvus: if it is the abi3 issue then upgrading pip in the job should address it17:20
tobiashcorvus: thanks17:20
fungicryptography recently stopped publishing per-cpython-version wheels and switched to publishing abi3 wheels instead which can cover multiple cpython versions. unfortunately needs fairly new pip which knows to look for those17:20
* fungi refreshes his memory on which pip release added abi3 support17:21
fungipip 20.0 added abi3 support, january 202017:22
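So the practical workaround is simply to upgrade pip first, which is what the "Upgrade pip before installing zuul" change that merges later in this log does for the nodepool jobs; as a hedged Ansible task sketch, with the version pin as an assumption:

```
# pip >= 20.0 knows how to select cryptography's abi3 wheels instead of
# falling back to the sdist (which needs setuptools_rust to build).
- name: Upgrade pip before installing python dependencies
  pip:
    name: pip>=20.0
    extra_args: --upgrade
```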
guillaumecis there a way to execute some sort of parent playbook after a child job's pre-run and before the child job's run? (some sort of post-pre-run, or pre-run-post ^^)17:22
openstackgerritTobias Henkel proposed zuul/nodepool master: Make nodepool-functional-k8s more docker registry friendly  https://review.opendev.org/c/zuul/nodepool/+/77772117:22
tobiashfungi: yeah, we fell over this a couple of weeks ago at various places17:23
avassguillaumec: no, we usually solve that by making the jobs configurable enough. but that depends on what you need to achieve of course17:23
tobiashfungi: hence the change https://review.opendev.org/c/zuul/nodepool/+/77501717:23
tobiashbut that failed three times in gate with three errors17:24
fungipip 10.0.0 is what added support for build dependencies in pyproject.toml, btw. so basically latest cryptography can't be installed at all with pip<1017:24
fungiand the python3-pip in ubuntu-bionic is 9.0.117:24
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776117:28
corvusfungi, clarkb, tobiash, avass: summary: tobiash has some quick fixes in progress to start using quay for one image in func test jobs.  we could continue that approach by moving zuul and python-base and -builder into quay as well.  that would eliminate a lot of the docker image pulls in most of our builds.17:37
fungisounds like a reasonable approach17:37
tobiashcorvus: would you then also publish zuul in quay or do that still on docker hub?17:37
avasssounds good. probably a good idea to announce the move to quay when it comes to that17:37
corvushowever, zuul/nodepool publishing location is more or less a public api, so we should weigh that...17:38
corvus(python-base/builder too, but more of an "internal" api i guess)17:38
corvusif we're going to move zuul/nodepool, i think we should seriously consider whether we want to move it to quay or to registry.zuul-ci.org17:38
corvus(can we cname registry.zuul-ci.org to quay and have that work?)17:39
tobiashcorvus: cnaming will throw ssl exceptions17:39
corvusi wonder if some kind of http redirect would work  (i dunno what the auth system would think about that)17:40
fungiis there a reason not to publish to both dockerhub and quay?17:40
corvusfungi: we might be able to do that without doing any pulls for builds...17:41
fungiyeah, i'm oblivious to a lot of the nuances there17:41
corvusmy main concern would be that ^ first of all (make sure we get the benefit of moving outside the rate cap) and second, user confusion, and third, double our potential problem area17:42
fungii guess "publish to x" is not as trivial as just uploading some files17:42
tobiashare pushes to dockerhub also rate limited or just the pulls?17:42
corvustobiash: these would be authenticated at least17:42
tobiashah, so no rate limit problem there likely17:42
corvusyeah, we'd have a larger cap17:42
corvusbut i still worry about a future where we make a release and find that dockerhub failed and quay succeeded17:42
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776117:42
tobiashso we could switch the build chain to quay and keep publishing to dockerhub for now (and migrate later to quay if we want to)17:43
tobiashI think we should decide on dockerhub xor quay for publishing17:43
corvustobiash: yeah, i think so.  i'm in favor of migrating though, so if we do both, consider it temporary.17:43
corvustobiash: ++17:43
tobiashI wonder if we should do some deprecation/update notice about the new image location then17:44
tobiashlike from release x zuul/nodepool images will be published on quay17:44
corvusyeah, i'd be fine with a cutover and not publishing to both.  i think if we leave the old images in place that's sufficient.17:45
tobiashx could be 4.1 then, or 5.0 if changing the image is considered a breaking api17:45
corvusi think 4.1 would be fine17:45
tobiash++17:45
clarkbcorvus: I think we should publish to both docker hub and quay to start with python-base and python-builder17:59
clarkbfor zuul and nodepool I think cutting over is fine, but the python-* images have a number of users now aiui17:59
openstackgerritGuillaume Chauvel proposed zuul/zuul master: [DNM] Support ssl encrypted fingergw  https://review.opendev.org/c/zuul/zuul/+/77776118:00
corvusclarkb, tobiash, fungi: a *really* quick local test suggests that 301 redirects might work18:02
corvusso if we wanted to do a cutover, and only do it once, we might consider setting up a static server to redirect to docker/quay.  we could move at will and even self-host in the future if we wanted.18:03
corvusat least -- i think my test is sufficient to say we should try it for real on a server with ssl certs, etc.  :)18:04
fungisounds like not a bad idea at all18:09
*** tjgresha has joined #zuul18:13
*** tjgresha has quit IRC18:16
*** tjgresha has joined #zuul18:22
openstackgerritMerged zuul/nodepool master: Upgrade pip before installing zuul  https://review.opendev.org/c/zuul/nodepool/+/77501718:30
*** jangutter_ has joined #zuul19:03
*** jangutter has quit IRC19:07
tjgreshaquestion regarding check pipeline success / failure comments on patches19:13
avasswe are now running zuul 4 :)19:13
tobiashyay19:17
tjgreshanice!19:18
corvustjgresha: feel free to just go ahead and ask :)19:21
avassI get an RPCFailure when trying to dequeue a buildset in a timed pipeline :(, don't think that's new however19:21
avasszuul.rpcclient.RPCFailure: Ref refs/heads/develop does not belong to project "..."19:22
tjgreshak .. didn't wanna steal the zuul4 thunder for just a minute19:22
tobiashthundering zuul19:23
tjgreshaexcellent band name19:23
corvusavass: try using a more or less qualified project name?19:23
avasstjgresha: if anything I stole the attention from you :)19:23
tobiashavass: I think I've dequeued changes from periodic pipelines in the past, but that was always fiddling with the ref names19:24
tobiashlooking forward to the dequeue button19:24
corvusavass: either example.com/org/project or org/project or project19:24
avasscorvus: worked, canonical name didn't and that's what was generated from zuul-changes.py19:24
corvusavass: line 1186 of scheduler if you want to change both of those to canonical_name :)19:25
avasscorvus: ++19:25
corvus(er, unsure about that line #, but it's _doDequeueEvent)19:25
avasswould it be possible to merge enqueue and enqueue-ref? feels a bit inconsistent when dequeue can handle both :)19:26
corvusavass: i think they're different based on the different data (like old/new ref etc).  they could probably be merged, but half the options would be irrelevant depending on which you were doing19:28
corvus(er, i meant old/new sha)19:28
tjgreshaSo Intel_Zuul is not leaving verification comments in Nova, Neutron, but it is in the ci-sandbox - same gerrit connection in my zuul.conf   -- it is triggering, running, completing successfully on gerrit events on these repos19:30
clarkbtjgresha: are you trying to leave +/-1 votes?19:31
clarkbI think projects like nova, neutron etc restrict who is allowed to do that19:31
corvussomething in tjgresha's comments is rendering them invisible to me19:32
corvuslike some unicode character or something19:32
tjgreshausing webchat maybe to do with it?19:32
clarkbcorvus: there is maybe some weird whitespacing19:32
corvusi see them on eavesdrop, but i'm getting missing lines in my irc window19:32
corvustjgresha: i saw that comment about webchat :)19:33
openstackgerritAlbin Vass proposed zuul/zuul master: Update _doDequeueEvent to check canonical name  https://review.opendev.org/c/zuul/zuul/+/77781619:33
corvusit's probably my fault for not having a sufficiently unicode-aware irc client.  sorry about that, and if i ignore you, it's not intentional!19:34
tjgreshavpn'd into lab at Intel where our 3rdparty CI  runs, changing proxy back and forth makes webchat easier sometimes..19:34
openstackgerritAlbin Vass proposed zuul/zuul master: Update _doDequeueEvent to check canonical name  https://review.opendev.org/c/zuul/zuul/+/77781619:34
avassupdated commit message19:34
clarkbtjgresha: typically when that happens it is because you don't have sufficient permissions to vote on the project. Instead they just want you to leave a comment without the +/-119:35
tjgreshawe had them -- maybe there was an account cleanup that happened... will check with owners19:36
tjgreshathanks for advice :]19:38
clarkbtjgresha: you can check the projectname-ci group membership in gerrit to see who is allowed to vote +/-119:40
clarkbnova for example only allows three accounts to do it19:40
tjgreshathere is a 3rdparty-ci group19:45
fungibut it's not used by nova's acl. also this discussion should probably be taking place in #opendev (or #openstack-infra or #openstack-nova)19:48
*** ikhan has joined #zuul19:50
tjgreshafungi - will move my questions there now that i know it's the likely cause19:58
*** tjgresha has quit IRC22:17
openstackgerritJames E. Blair proposed zuul/zuul master: Support enqueuing behind circular dependencies  https://review.opendev.org/c/zuul/zuul/+/77784322:45
openstackgerritJames E. Blair proposed zuul/zuul master: Support enqueuing behind circular dependencies  https://review.opendev.org/c/zuul/zuul/+/77784322:59
*** sduthil has quit IRC23:37
*** hamalq has quit IRC23:40
corvusclarkb: I carried your +2 over what I'm pretty sure is a merge conflict patch update on https://review.opendev.org/752079  (tobiash: i left some +2 comments on that)23:44
clarkb   ok23:45
clarkber not sure where those extra spaces came from23:45
fungii'm wracking my brain trying to figure out how to get the related AnsibleJob context in https://review.opendev.org/77744123:52
fungithe errors logged by that codepath do include an event and build id, so there must be some way to figure it out23:53
clarkbfungi: that innerUpdateLoop runs in a separate thread as a gearman worker pulling off update jobs23:57
clarkbfungi: I think you may need to return an indicator via gearman to the job that requested the update, telling it the job should be aborted, since the job context isn't known in that separate thread, if I read this right23:58
clarkbonly the gearman git update request is known but not more than that23:58
fungiweird then that the log call there actually reports an event/build23:58
clarkboh, that isn't gearman, it's an internal python queue23:59
corvushow about we retry the task?23:59
