Wednesday, 2020-09-02

*** bolg has quit IRC00:04
ianwmy kids school (remote) today is "well-being day"00:07
ianwwhich involves a lot of arts, craft, baking soda and lemon juice and other assorted mess00:08
ianw... so not sure my well-being will survive :)00:08
clarkb"fatal: Unable to create '/etc/ansible/roles/puppet/.git/index.lock': File exists." <- current issue getting remote puppet else to run00:10
clarkbI'm sure it's likely fallout from the killing of old ansible processes00:10
clarkbbut I've run out of daytime to debug that further. I can pick it up tomorrow if no one else does00:10
clarkbianw: ha, we made oobleck the other day as one of my kids watched some science show on netflix that has an episode with it and got obsessed with non newtonian fluids00:11
clarkbit's quite the mess00:11
ianwhaha, i'm still cleaning that up from the backyard the other week!00:11
clarkbfwiw there are no git processes currently running on bridge so I think manually removing that index.lock is our best bet (and if not we can always just reclone it)00:12
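The recovery clarkb describes can be sketched as a small guard: a hypothetical helper that only removes the lock when no running git process references the repository. The function name and pgrep pattern are assumptions, not anything from the actual bridge tooling.

```shell
# Hypothetical helper for the situation above: remove a stale
# .git/index.lock only when no running git process mentions the repo.
clear_stale_lock() {
    repo="$1"
    if pgrep -f "git.*${repo}" >/dev/null 2>&1; then
        echo "git still running against ${repo}; not touching the lock" >&2
        return 1
    fi
    rm -f "${repo}/.git/index.lock"
}

# usage (path from the error message above):
#   clear_stale_lock /etc/ansible/roles/puppet
```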
clarkbbut ya I need to figure out dinner now.00:12
openstackgerritMerged opendev/system-config master: Mirror Puppetlabs puppet for Ubuntu Focal
ianwi can do that, what am i watching?  the puppet-else run right?00:12
clarkbianw: look in /var/log/ansible/remote_puppet_else.yaml.log and double check the error message00:13
clarkbthere is a bit more to it than just what I quoted that talks about how to fix00:13
ianwhrm /etc/ansible/roles/puppet/.git/index.lock doesn't appear to be there now00:14
clarkbhuh maybe we just watch and see then?00:17
clarkbmaybe the next run will be happy00:17
clarkbmaybe double check the timestamps on the logs maybe there is another newer problem?00:18
ianwhang on, i think that's on the remote host?  it's not clear which one ... maybe00:22
*** mlavalle has quit IRC00:27
ianwfailed: [ -> localhost] (item=puppet) =>00:29
ianw    "cmd": "/usr/bin/git reset --hard origin/master --",00:29
ianw    "rc": 12800:29
*** mlavalle has joined #opendev00:31
ianw-rw-r--r-- 1 root root   306K Sep  2 00:27 remote_puppet_else.yaml.log00:32
ianwit just ran (00:33 now) and didn't do that, but one of the elasticsearch hosts still failed00:33
ianwfatal: Unable to create '/etc/ansible/roles/puppet/.git/index.lock': File exists.00:34
ianwboo, that's what it failed on00:34
ianwWe're running this per-host delegated to localhost. We only want00:38
ianwto run it once, otherwise we have parallel competing git processes.00:38
ianwit seems like that isn't quite working00:39
ianwhrm, it only appears once in the log00:42
ianwremote_puppet_else.yaml.log.2020-05-13T18:54:31Z:Another git process seems to be running in this repository, e.g00:50
ianwi guess this didn't just start ...00:51
*** xiaolin has joined #opendev01:09
fungistale lock?01:11
fungidoes /etc/ansible/roles/puppet/.git/index.lock have an old date?01:11
ianwfungi: yeah, the problem is the lock isn't there.  it must be a runtime thing01:13
fungioh fun01:21
ianwi am realising i have no idea how the bridge-run ansible finds its roles01:38
*** xiaolin has quit IRC01:50
ianw[WARNING]: Using run_once with the free strategy is not currently supported. This task will still be executed for every host in the inventory list.01:53
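The warning ianw pasted is the crux: under the free strategy, run_once degrades to per-host execution, so the delegated git checkout races against itself. A hedged sketch of the failing pattern and one possible workaround; the play layout and task names here are illustrative, not the actual system-config playbooks.

```yaml
# Failing pattern: with "strategy: free", run_once is ignored, so this
# delegated task runs once per host, and the runs race on the same
# /etc/ansible/roles/puppet checkout (hence the index.lock collisions).
- hosts: puppet
  strategy: free
  tasks:
    - name: Install puppet modules on bridge
      command: /usr/bin/git reset --hard origin/master --
      args:
        chdir: /etc/ansible/roles/puppet
      run_once: true          # not honored under the free strategy
      delegate_to: localhost

# Workaround sketch: hoist the single-shot work into its own play with the
# default linear strategy, so it genuinely runs exactly once, then let the
# per-host plays keep using the free strategy.
- hosts: localhost
  tasks:
    - name: Install puppet modules on bridge
      command: /usr/bin/git reset --hard origin/master --
      args:
        chdir: /etc/ansible/roles/puppet
```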
*** DSpider has joined #opendev03:45
*** fressi has joined #opendev04:23
*** ysandeep|away is now known as ysandeep04:59
*** bhagyashris|away is now known as bhagyashris05:27
*** kevinz has joined #opendev05:34
openstackgerritIan Wienand proposed opendev/system-config master: puppet: don't run module install steps multiple times
openstackgerritIan Wienand proposed opendev/system-config master: puppet: don't run module install steps multiple times
*** diablo_rojo has quit IRC06:47
*** hashar has joined #opendev07:07
openstackgerritMerged openstack/project-config master: Create zuul/zuul-client
openstackgerritIan Wienand proposed opendev/system-config master: puppet: don't run module install steps multiple times
*** sshnaidm|afk is now known as sshnaidm07:32
*** tosky has joined #opendev07:37
*** ysandeep is now known as ysandeep|lunch07:48
*** moppy has quit IRC08:01
*** moppy has joined #opendev08:01
ttxre: ask.o.o the change did not seem to catch on -- not sure if it's due to the Zuul deploy jobs failing (since they really never worked), or something else (like it needs a restart to catch on config changes)08:08
ttxI guess that's what y'all were discussing earlier08:13
*** calcmandan has quit IRC08:14
* frickler goes to check08:15
*** calcmandan has joined #opendev08:16
*** dtantsur|afk is now known as dtantsur08:23
fricklerttx: infra-root: seems /opt/system-config/production hasn't been updated on ask.o.o, maybe someone can take a deeper look later08:28
*** ysandeep|lunch is now known as ysandeep08:33
* frickler now tries editing askbot config directly, on the first attempt the change doesn't seem to have any effect08:41
fricklerttx: humm, maybe I had things cached, now it seems to work. please double check if that looks o.k. to you now08:44
fricklermaybe we also want to update the help page if we can?
ttxfrickler: It looks ok (for some definition of ok that includes red-over-yellow text)08:47
ttxRe: help, I think that since the message is splashed there too, it's fine08:48
ttxwe just can't rewrite all the site so that it stops mentioning that interacting with it is possible08:49
*** andrewbonney has joined #opendev08:51
*** ysandeep is now known as ysandeep|brb09:50
*** hashar has quit IRC09:53
*** ysandeep|brb is now known as ysandeep10:04
*** bolg_ has joined #opendev10:30
*** lpetrut has joined #opendev10:47
*** elod has quit IRC11:15
*** elod has joined #opendev11:16
*** ysandeep is now known as ysandeep|brb11:36
*** ysandeep|brb is now known as ysandeep11:44
*** ysandeep is now known as ysandeep|afk11:52
*** mattd01 has joined #opendev12:00
*** ysandeep|afk is now known as ysandeep12:11
chandankumarHello #OpenDev.12:12
chandankumarThe tripleo ci team is working on jobs to remove dependency on dockerhub12:13
chandankumarin tripleo-ci, apart from openstack projects, we pull ceph images also from docker hub12:13
chandankumarwhat is the procedure for pushing those same ceph container images on openstack infra registry ?12:14
chandankumarceph ->
weshay|ruckzbr, fyi ^12:16
chandankumarand apart from that, we also need prometheus-related images12:16
chandankumarthe list is here
fungichandankumar: which openstack infra registry are you talking about? we don't maintain any persistent registry, we have a semi-persistent registry for interrelated job builds (so that one build can create an image and then another build can consume that particular build's image)12:23
chandankumarfungi: sorry, I thought we host registries,12:27
chandankumarfungi: when we are talking about semi-persistent registry you mean ?12:27
fungichandankumar: if you mean something like an alternative to dockerhub or quay, no opendev doesn't have any registries of its own just proxies to those services12:28
fungiand yes, the zuul-registry project is the software which we use to implement the semi-persistent passthrough registry for sharing built images with other consuming builds12:28
chandankumarfungi: yes something like an alternative to dockerhub,12:28
chandankumarfungi: cool, it answers my question. thanks :-)12:29
fungichandankumar: i hear microsoft is going to start providing a dockerhub-like image registry under its github brand, though i expect it will come with all the same drawbacks of actual dockerhub and quay12:30
fungii suppose what would be a useful service from a ci perspective is a proxyish image registry which speaks dockerhub protocol to clients, but behind the scenes fetches and caches the requested images from major registries12:32
fungisomething like apt-proxy does for debian package repositories12:32
fungii'm not aware of the existence of anything like that though12:32
chandankumarfungi: sounds like a nice project idea :-)12:34
sshnaidmfungi, there is such thing like
sshnaidmfungi, it acts exactly as squid or whatever cache12:36
fungineat. maybe we should look into that as an alternative to our registry proxies12:36
sshnaidmfirst time it pulls from a real registry and saves image in the cache, then provides it from cache only12:36
sshnaidmfungi, it's also transparent for clients12:36
fungiis that the one that can't purge expired images without an outage?12:36
sshnaidmneed just to adjust proxy addresses12:37
sshnaidmfungi, that I didn't check, sorry12:37
sshnaidmfungi, I was just removing files on disk12:37
fungii know we looked at something docker provides and it wasn't capable of having its cache cleaned without downtime12:37
sshnaidmworth to check I think12:37
fungiclarkb would remember, once he's awake12:38
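The pull-through cache sshnaidm describes is a documented mode of the stock registry:2 image. A minimal config sketch follows; the paths and port are illustrative, and clarkb's caveat below still applies (this registry generally has to be taken offline to garbage-collect its store).

```yaml
# Minimal "registry as a pull-through cache" sketch for docker's registry:2.
# Clients are pointed at this endpoint instead of docker.io; the first pull
# fills the cache, later pulls are served locally.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
proxy:
  remoteurl:
  # username/password are optional; anonymous upstream pulls stay rate-limited
http:
  addr: :5000
```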
fungi#status log zm07 was rebooted at 01:54 and again at 04:08 by the cloud provider because of unspecified hypervisor host issues12:45
openstackstatusfungi: finished logging12:45
zbrfungi: clarkb: can we proceed with ?12:52
fungizbr: i've approved it, but need to disappear for a meeting and then some errands, so likely won't be around to check on it for the next few hours if it doesn't deploy successfully12:56
zbrno urgency, i was just trying to clean up my pile of unmerged reviews.12:57
zbrthat one is one of the least important ones12:57
fricklerfungi: well the first patch already didn't deploy automatically, so I doubt this one will, see backlog. I deployed the first one manually, so now we have a good opportunity to check the automation again12:58
zbrfungi: when back also take a look at -- adding a new mailing list for monitoring jobs.13:00
zbrthe question was if you want to be the owner or not.13:00
zbrthat question was for corvus13:00
fungii can be owner/moderator if needed, i already act as a surrogate for some dozen lists anyway13:08
*** fdegir has quit IRC13:11
openstackgerritMerged opendev/system-config master: Improved ask read-only message
zbrcorvus: basically is waiting for your decision (see clark question).13:32
zbri do not care who is admin, i only want to see the list created to be able to subscribe to it13:32
*** _mlavalle_1 has joined #opendev13:44
*** mlavalle has quit IRC13:46
*** sshnaidm is now known as sshnaidm|bbl13:58
*** ysandeep is now known as ysandeep|away13:59
*** _mlavalle_2 has joined #opendev14:14
*** _mlavalle_1 has quit IRC14:17
*** chandankumar is now known as raukadah14:21
*** fdegir has joined #opendev14:34
AJaegerconfig-core, please review
*** _mlavalle3 has joined #opendev14:51
*** _mlavalle_2 has quit IRC14:53
*** _mlavalle3 has quit IRC14:53
*** qchris has quit IRC14:57
zbrwho can help dig into an afs deploy failure, one with unfriendly output
zbrfor some weird reason "cmd" is not even included by default.15:02
*** diablo_rojo has joined #opendev15:02
zbralso the output is redirected, which makes it inaccessible. is that for security reasons?15:03
zbrlack of cmd display is fixed by --- if it ever gets approved.15:06
clarkbfrickler: the remote puppet else job was failing due to a git lock file issue. I think ianw tracked it down to the free strategy in ansible not respecting run once15:08
clarkbfrickler: so we have many ansible threads all trying to update the same git repo at once. This is why ask.o.o wasn't updating15:08
clarkbfiguring that out is top of my list this morning. I need some breakfast first though15:09
*** qchris has joined #opendev15:10
*** fressi has quit IRC15:11
clarkbsshnaidm|bbl: fungi yes that is the same tool that needs to be off to prune content. However, it seems that document says it does auto pruning when only acting as a pull through cache, which I had not seen before. It's possible our combined use case for intermediary registry and proxying means we overlooked the pure cache behavior15:12
clarkbhowever if we run anonymous we'll still be limited to 100 blobs an hour or whatever it is15:12
clarkbthe best bet is to see what docker says about CI in particular, which they have promised to do (maybe that is already published)15:13
*** mlavalle has joined #opendev15:22
clarkbfungi: zbr unless remote-puppet-else has been fixed I don't expect that to actually apply15:23
clarkb is the proposed fix for the puppetry from ianw15:23
*** tosky has quit IRC15:31
clarkbzbr: fwiw linter rules that catch problems like the free strategy + run once would be far more helpful than forcing people to noqa shell tasks that run git15:34
zbrclarkb: I can easily write a run_once rule, but I bet it will upset some users too15:35
clarkbzbr: that wouldn't surprise me; it's just that choosing to use a shell task when there are built-ins is a matter of choice, whereas ansible does not do what you've told it in the free + run once case15:36
zbrthe only case where nobody would be upset is if linter would always return 015:36
clarkbimo its far more important for linters to highlight issues like the free + run once problem15:36
clarkbwhich is what the original C linter did. It tried to highlight portability issues aiui15:37
zbrit is not easy to detect this because strategy can be configured using ansible cfg, env vars,...15:37
clarkb"this does not do what you think it does" vs "your code doesn't meet the linter author's idea of good ansible"15:38
zbri can easily detect places where run_once is used, and if i remember well, newer versions of ansible create runtime warnings when encountering it anyway15:38
zbri doubt that depends on strategy, as the same play could be run with different strategies in different contexts, so run_once is always a danger15:38
*** ysandeep|away is now known as ysandeep15:45
zbrmost linters are opinionated, and some of them depend even on a single person taste, but that is not the case for ansbile-lint, which needs at least two people to agree15:46
zbrclarkb: thanks for the idea, i am now writing a new rule...15:47
clarkbya I'm not necessarily saying that is wrong either. I'm just saying when it comes to the importance of rules, those that call out non-portable, actually wrong things are more important than the opinion portion15:47
clarkbshelling out to git is extremely useful because the git module only covers a fraction of git functionality. Using run once with free is going to do the wrong thing always15:48
*** hashar has joined #opendev15:48
zbrso you think the linter should warn regardless which strategy is used?15:50
zbras i said, it may be impossible to guess the strategy15:50
zbrespecially if someone adds run_once inside a role15:51
clarkbto start maybe just keep it to where you know the free strategy is used (as in our case, as it was set at the play level)15:51
clarkbthen based on user feedback maybe expand it15:51
zbrimho, run_once should always be avoided, just because we can consider worst case possible: someone tries to run it using free15:52
zbrimho, adding "# noqa 123" after run_once to confirm that, let me alone, i know what i am doing, is an acceptable "cost" to pay.15:53
*** lpetrut has quit IRC15:53
clarkbya, I would definitely ensure the rule explains why it should be avoided in that case15:54
*** ysandeep is now known as ysandeep|away16:02
zbrclarkb: would you mind creating a feature request at ? i am already half way with my implementation16:12
clarkbif you're already writing it do we need a separate issue?16:12
zbri am writing the rule implementation, not the ticket16:16
clarkbright, I'm asking if a ticket is necessary if someone is already doing the work16:16
clarkb(its not really a request if the work is already done)16:16
zbrjust say you would find it useful to be warned about run_once dangers, i can polish it later16:16
zbrit will be a useful place to receive feedback from others16:17
zbri am working at the tests now16:17
zbrzuul-jobs has only 7 occurrences, quite easy to fix compared with other issues like the mode= one16:18
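Before a real ansible-lint rule exists, the check clarkb asks for could be roughed out as a standalone script. This is only a file-level grep heuristic, so it cannot see a strategy configured via ansible.cfg or environment variables, which is exactly the limitation zbr raises; the function name is hypothetical.

```shell
# Naive pre-lint check (sketch): flag playbooks that combine
# "strategy: free" with run_once in the same file. A real ansible-lint
# rule would parse the play structure; this is only a heuristic.
check_run_once_free() {
    f="$1"
    if grep -q 'strategy:[[:space:]]*free' "$f" && grep -q 'run_once:' "$f"; then
        echo "$f: run_once is ignored under the free strategy" >&2
        return 1
    fi
    return 0
}
```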
zbr is ready for reviews.16:55
*** dtantsur is now known as dtantsur|afk16:57
openstackgerritMerged zuul/zuul-jobs master: Remove dependency on pkg_resources
*** mattd01 has quit IRC17:16
clarkbAJaeger: if you're still around have you seen any problems with the explicit file mode setting in opendev/base-jobs?17:16
clarkbAJaeger: mostly concerned about things we write to afs like docs17:16
clarkb(if not I think we can proceed to landing the zuul-jobs update next)17:16
clarkbfungi: is a good one to review (though I'm about to head out on a bike ride so feel free to avoid approving if you'd like more eyeballs around when ansible runs puppet)17:27
fungisure thing17:35
yoctozeptoinfra-root: ethercalc down again17:35
fungiugh, checking17:35
yoctozeptofungi: thanks17:36
fungiSep  2 17:25:29 ethercalc02 bash[29644]: Error: Can't set headers after they are sent.17:36
fungisame as last time17:36
fungi#status log restarted ethercalc service following crash at 17:25:2917:37
openstackstatusfungi: finished logging17:37
fungiinfra-root: this was the full traceback:
fungi looks like maybe we need newer nodejs?17:40
clarkbfungi: our version of node should be newer than that bug17:41
fungiyeah, i concur, just checked17:41
fungiso maybe it's a different bug resulting in a similar error17:41
clarkbits also originating in the redis module17:41
clarkbmaybe it's a bug there that we need to update17:41
fungithe distro package for nodejs currently installed is 12.18.3~dfsg-417:41
clarkb*update the module to fix17:41
fungiwe're not installing redis from distro packages17:42
fungibut if memory serves we pin a specific version in puppet17:42
fungi$redis_version    = '2.8.4',17:45
clarkbfungi: thats redis the server not redis the js client lib17:47
fungioh, great point17:47
clarkb/opt/ethercalc/node_modules/redis/index.js should be installed as part of ethercalc's npm installation17:47
fungii suppose that'll be in like a yarn.lock in the ethercalc repo17:47
clarkbI need to pop out now but can help look more when I return17:48
clarkbhas that been changed since the version we deployed?17:51
fungihard to say, looks like the yarn.lock file was added two years ago, so more recently than what we had been running before the upgrade17:55
fungii'll have to dissect their old build system from years ago17:55
fungi"redis": "0.12.x",17:57
fungiso... not changed appreciably i guess?17:57
fungiyeah, that line was last touched 4 years ago17:58
fungithough wasn't actually updated17:58
fungijust reshuffled17:58
fungihere's when it changed:
fungiappeared in the 0.20151028.0 tag18:00
fungiso we were using the redis js lib version 0.12.x before we upgraded too18:00
fungiif it's a bug introduced in redis, it came between 0.12.0 and 0.12.1 maybe18:00
*** mattd01 has joined #opendev18:11
*** hashar has quit IRC18:34
*** DSpider has quit IRC18:51
fungii don't know if it's a behavior change in osc or rackspace, but i can't seem to refer to cinder volumes by name any longer19:45
fungii can only get uuids from volume list, do a volume show on each uuid and then grep out the name to figure out which one it is19:45
fungiit's... tedious, for sure19:46
fungior could this be a side effect of using the v1 api?19:46
*** DSpider has joined #opendev19:47
fungifor u in `./launch-env/bin/openstack --os-cloud openstackci-rax --os-region-name DFW volume list|grep available|cut -d' ' -f2`;do echo -n $u;./launch-env/bin/openstack --os-cloud openstackci-rax --os-region-name DFW volume show $u|grep name|cut -d'|' -f3|sed 's/ *$//';done19:49
fungithat shouldn't be necessary, fwiw19:50
fungii could probably tell osc to do --format on volume list to include the name, but why is it omitted to begin with? and why can't i use names in commands? if i don't use uuids, it tells me it can't find any volume matching that name19:52
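A less tedious version of fungi's loop, assuming the JSON output really does carry the names (as fungi confirms later in the log): ask osc for JSON once and extract ID/name pairs client-side. The field names are assumptions; older v1-style responses may use "Display Name" rather than "Name", so this sketch tries both.

```shell
# Print "ID name" for each volume, reading `openstack volume list -f json`
# on stdin. Tries both the "Name" and older "Display Name" keys.
volume_names() {
    python3 -c '
import json, sys
for v in json.load(sys.stdin):
    print(v.get("ID", ""), v.get("Name") or v.get("Display Name") or "")
'
}

# usage (cloud/region from the discussion above):
#   openstack --os-cloud openstackci-rax --os-region-name DFW \
#       volume list -f json | volume_names
```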
clarkbmaybe smcginnis knows19:54
clarkbif its v1 behavior19:54
clarkbfungi: node redis 1.0.0's changelog doesn't show any breaking changes from 0.12.119:58
clarkbmaybe we should try updating?19:58
fungiwell, ethercalc's master branch is pinned to 0.12.1 since years20:01
clarkb does seem to point to the redis client lib being the issue though20:03
clarkbbasically it isn't doing REST things properly20:03
clarkbfungi: another option may be to set that unit to just restart always and let it come back after failing20:03
fungiyeah, i'm not opposed to upgrading, just wondering if we have to patch the ethercalc codebase to do it20:03
fungior simply add an extra puppet exec notified from it to upgrade redis after every deployment20:04
clarkbfor that I'm not sure20:04
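clarkb's stopgap of letting systemd restart the service after this class of crash would look something like the drop-in below. The unit name is an assumption inferred from the `ethercalc02 bash[29644]` log prefix, not confirmed from the deployment.

```ini
; /etc/systemd/system/ethercalc.service.d/override.conf (sketch)
; Assumed unit name; apply with `systemctl daemon-reload` followed by
; `systemctl restart ethercalc`.
[Service]
Restart=always
RestartSec=5
```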
smcginnisVolume names are optional, so I think the background was that UUID was really the preferred way to specify a volume.20:05
smcginnisSome things support looking up the UUID if it can tell a name was provided, but I'm not sure if that is done everywhere.20:06
smcginnisAnd IIRC, that's implemented in python-cinderclient, so probably not implemented in OSC.20:06
smcginnisHmm, the list commands have a Name column, but I'm not getting anything on my deployment either.20:09
fungiyeah, i guess the gap here is that i request a volume and provide a name. that request is filled async so i'm not told what uuid it got. now i volume list and i only get uuids, not names, so i have to inspect every volume to find out what names they have so i can identify the volume i created20:10
smcginnisAPI does is supposed to be returning name -
fungiwell, again we're pinning to old api major version because rackspace has broken its catalog20:11
fungiso we're relying on the v1 cinder api, in theory20:11
fungifar from ideal, and no fingers are being pointed20:12
fungijust wondering if i'm missing something obvious20:12
*** andrewbonney has quit IRC20:12
funginormally i wouldn't even care, but they notified us that the region where most of our control plane resides is scheduled to undergo a lengthy outage for existing volumes, so i'm trying to go through and replace them with new ones20:13
smcginnisThis might actually be a bug. Looking at the raw json returned from my v3 API call, it contains "{name: ''}" for all volumes.20:14
fungia bit of scripting (laggily) works around the blind spot, just curious if i'm "doing it wrong"20:14
smcginnisOh, for mine it's because my volumes don't have names.20:15
funginame really is ""20:15
smcginnisYep, if I create a volume with a name, list does show it. So maybe it is a v1 issue?20:15
fungiours have names, and openstack volume show includes a name field. i think i also previously had some success passing a --format string to say to include it20:15
fungiin volume list20:15
smcginnisI'm also not sure if rax ever really switched to using cinder instead of their own special implementation.20:16
fungiahh, nope, openstack volume list is actually providing a name field, it's just empty for every entry20:17
fungibut if i openstack volume show some_uuid it spits out a field called "name" with the expected value for that volume20:18
smcginnisfungi: Is it included if you do "openstack volume show UUID"?20:18
fungicould it be case sensitive?20:18
fungino dice. -c column_name ignores "name" but gives me an empty column for "Name"20:19
smcginnisIf you openstack --debug volume list, it might be interesting to see if there's anything in the json.20:21
fungialso filtering with --name some_name doesn't actually filter the results, just gives me all of them20:21
fungiyep! it's returned in the json, so maybe this is a parsing problem in osc (or a formatting problem in rax)20:22
smcginnisCan you try a cinder list?20:22
fungimaybe, though i can't use clouds.yaml for that right?20:25
fungicinderclient doesn't seem to use oscc so --os-cloud and  --os-region-name are ignored20:25
fungiand it's not finding my creds20:25
fungii'll find the docs on how i used to pass those values pre-osc20:26
smcginnisYeah, you need to either use env vars or pass them in explicitly.20:26
funginow if only the auth url weren't baked into osc. gotta look that up20:33
clarkbfungi: I'm firmly in front of the computer again now if you have time to review
clarkbalso clouds.yaml is the best20:33
clarkbI'd be a lot more sympathetic to teams not wanting to do osc if they added clouds.yaml support to the alternatives20:33
fungismcginnis: yep, `cinder --os-volume-api-version=1 list` gives me actual volume names20:36
fungi(v2 and 3 give me endpoint not found errors)20:37
*** DSpider has quit IRC20:37
fungisupposedly we can use a block storage endpoint override in osc to get working v2 in rackspace, but i've not had luck with that20:38
fungibut yeah, if i were to place bets, it's that osc has regressed on its ability to parse names out of cinder v1 api responses20:39
fungimordred: ^ next time you're around, that might be interesting to you20:40
fungiodds are it's really the sdk doing that bit, but i really don't know20:40
fungiinfra-root: i have mounted four new cinder volumes on afs01.dfw, added them to the main vg, and am using pvmove to swap them out20:47
fungii forgot to start the first pvmove under screen, but will do subsequent ones under a root screen session. the pvmove manpage explains how to resume an interrupted move if it comes to that20:48
fungiit's at 2% complete for the first of four moves already, so probably won't last into tomorrow20:48
fungii'll keep tabs on it while poking at other work20:49
fungiwe could run them in parallel, but the manpage warns against doing that if you have logical volumes spanning more than one pv, and also i'm not sure what our storage i/o bandwidth looks like on that instance anyway20:50
fungialso not doing it in parallel with afs02.dfw because if something happens... well, you know20:50
fungiwe have until around the middle of next month to get through these anyway20:50
* fungi goes back into his do-stuff-while-feeling-like-not-getting-enough-done hole20:52
fungithe plan for volume replacements on the afs servers is to pvmove all of them, then detach all the old volumes20:54
fungii don't 100% trust the device names returned by the nova api there20:54
fungiworried that i might detach the wrong device due to a mismatch somewhere in the xen guest api layer20:55
clarkbfungi: the sysfs or is it devfs can expose uuids of the volumes20:55
clarkbthough now that I've written that that may be a kvm only feature20:55
fungiwell, also "uuid" is vague there20:55
clarkbthe uuid in cinder is mapped through in the kvm case20:56
clarkbit's really useful20:56
fungicinder's volume uuid is not the device's uuid afaik20:56
clarkbit is with kvm20:56
fungioh, neat, kvm added that?20:56
clarkbpretty sure they force the uuid value such that it all works20:56
clarkbbut I only ever get to interact with that in devstack usually and it's been a while20:56
* fungi checks, while not getting hopes up20:57
fungiyeah, the uuids returned by blkid don't match the uuids returned by nova/cinder20:59
fungioh, wait, those are partition uuids21:00
fungibut yeah, none of the uuids listed under /dev/disk/by-uuid/ match cinder uuids either21:01
fungiand also those are only partitions too21:01
funginot the raw devices21:01
openstackgerritGhanshyam Mann proposed openstack/project-config master: Final step for networking-l2gw and networking-l2gw-tempest-plugin retirement
fungilsblk also doesn't show any uuids for raw block devices, only for their partitions21:03
clarkbhuh, it's the /dev/disk/by-uuid/ paths I know work on kvm21:04
clarkbbut I guess xen can't do that21:04
fungifungi@afs01:~$ sudo blkid /dev/xvdj121:04
fungi/dev/xvdj1: UUID="NrzuNc-1Ksr-8Cnf-WCYO-w77X-FIih-WWZflF" TYPE="LVM2_member" PARTUUID="182dbd0a-01"21:04
fungifungi@afs01:~$ sudo blkid /dev/xvdj21:04
fungi/dev/xvdj: PTUUID="182dbd0a" PTTYPE="dos"21:05
fungino uuid for the disk, just for the partitions21:05
*** prometheanfire has quit IRC21:59
*** paladox has quit IRC21:59
*** paladox has joined #opendev22:00
*** prometheanfire has joined #opendev22:04
*** mattd01 has quit IRC22:26
clarkbfwiw I've checked if docker has published the promised post on CI use of docker hub and it isn't up yet as far as I can tell22:29
clarkbdiablo_rojo: for does our PR closer only interact with openstack repos? (I sort of assume so just because of permissions)22:33
clarkbfungi: ^ you may know?22:33
clarkbthis has all been updated semi recently /me tries to figure it out22:33
clarkbwell the job that runs that playbook is called maintain-github-openstack-mirror which is openstack specific enough for me I think22:34
clarkbmnaser: may interest you if you aren't already aware of the amd epyc + centos8 nested virt issues. I think you can approve that one if you want too22:38
*** sshnaidm|bbl is now known as sshnaidm|afk22:47
openstackgerritMerged openstack/project-config master: Updates Message on PR Close
openstackgerritMerged openstack/project-config master: Add os_senlin to zuul projects
openstackgerritMerged openstack/project-config master: kolla-cli: deprecation - retiring master branch
ianwsorry, running a bit late today, but here now22:52
clarkbianw: I've been trying to get another review on the puppet change today without much success. I did leave some notes if you want to address them22:53
clarkbI do think that is the proper fix based on the ansible warnings though22:53
ianwclarkb: yeah noticed, will respin with those notes22:53
ianwinfra-root: tangentially related; will fix the ara reporting artifacts for system-config, along with the -devel job22:54
clarkbianw: are my previous reviews on ^ still good? or do I need to do another pass?22:55
ianwumm, i think mostly good, got respun with the new project22:56
ianw is probably the one that i'd mostly want another eye on, that's dropping test-requirements.txt from system-config and moving to more targeted lists22:56
clarkbianw: and I guess that when we install the release of ansible from pypi it will include all those collections ya? its just when we install from source we have to build up the extra bits that got split out?22:58
ianwyeah, pypi's "ansible" i believe includes all these.  when you install from the -devel branch, you now install "ansible-base"22:58
ianwclarkb: i actually considered calling it puppet-setup-roles or puppet-setup-config or something more specific, do you want me to do that as well as update the readme?23:00
clarkbmaybe puppet-setup-with-ansible ?23:01
clarkbreally its the intersection of the two we're trying to make happy23:01
clarkband less one or the other in isolation23:01
ianwanother thing i noticed, we don't need the at the top-level any more?23:03
ianwi wasn't sure if the apply jobs or something might use it23:03
clarkbwe install the modules on bridge then the ansible puppet role copies them onto the remote hosts23:06
ianwto be concrete, just because i get confused ... ansible-role-puppet isn't synced remotely, right?  only the modules23:06
clarkbthat way we keep things in sync for each pass of ansible running puppet apply23:06
openstackgerritMerged openstack/project-config master: Add nested-virt-ubuntu-focal label
clarkbcorrect, only the puppet modules are synced23:06
*** mlavalle has quit IRC23:07
clarkbthe ansible role itself is only running from bridge (in the ansible way of thinking)23:07
clarkbit may do remote things but its execution context is from ansible-playbook on bridge23:07
ianwthis is all probably good info to capture :)23:07
openstackgerritIan Wienand proposed opendev/system-config master: puppet: don't run module install steps multiple times
openstackgerritIan Wienand proposed opendev/system-config master: install-ansible: move to puppet-setup-ansible
ianwif you have a in-repo symlink, does gitea serve up the linked file?23:37
clarkbno idea :)23:37
clarkbyou could test it with opendev/sandbox really quickly?23:38
ianwthe only place that seems to want to grab and run it is
clarkbyou mean in addition to system-config?23:43
ianwthat's the thing, we have two copies.  ansible deploys one and we have the top level23:44
clarkboh I see23:46
clarkbI thought we were still using the top level but doing so would make ansible more difficult23:46
fungiclarkb: yes, the pr closer is an openstack job, credentials are only authorized for that org23:50
ianwi think they still get pulled in via apply tests23:50
fungiwe figure other projects can add similar jobs with their creds instead if they want that, but this one is okay to have openstack-specific messaging23:51
clarkbfungi: ya I ended up approving it once I discovered it was used in a very openstack specific job23:51
fungiif you're looking for a test of gitea and symlinks, we use a ton in zuul's in-repo ansible fork to replace modules with links to a disabled one23:52
fungialso the first pvmove is nearly done. just a few more minutes and i can start the next23:52
ianwps, i'm wrong, the modules.env in ansible are symlinks back to the top-level ones23:54
openstackgerritIan Wienand proposed opendev/system-config master: launch: move old scripts out of top-level

Generated by 2.17.2 by Marius Gedminas - find it at!