Wednesday, 2023-08-23

*** dviroel_ is now known as dviroel11:27
fungijust a heads up, i got asked in #openstack-kolla to reenqueue failed periodic jobs. seems you can actually do it, you just need to include --newrev (i passed the results of `git show-ref origin/master`)12:28
fungizuul-client enqueue-ref --tenant=openstack --pipeline=periodic --project=openstack/kolla --ref=refs/heads/master --newrev=4c31f7a3f2002d77dd715dfbb5c2eb74192149d412:28
fungithat appears to be working as expected12:28
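A sketch of that re-enqueue flow, assuming zuul-client is pointed at the opendev scheduler with credentials configured; git ls-remote is used here instead of a local `git show-ref` so no checkout is needed:

    # grab just the SHA of the branch tip to pass as --newrev
    NEWREV=$(git ls-remote https://opendev.org/openstack/kolla refs/heads/master | cut -f1)
    # re-enqueue the branch tip into the periodic pipeline
    zuul-client enqueue-ref \
        --tenant=openstack \
        --pipeline=periodic \
        --project=openstack/kolla \
        --ref=refs/heads/master \
        --newrev="$NEWREV"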
Clark[m]Any reason they didn't trigger jobs themselves with a change?12:35
Clark[m]Neat that we can do a reenqueue like that, but shouldn't be necessary in this case12:35
fungichanges don't get enqueued into timer trigger pipelines12:36
Clark[m]Correct, but you should be able to trigger an equivalent job via a change12:37
fungiand yes this is suboptimal when a periodic build uploads a broken artifact that causes all their other jobs to fail until the next periodic run12:37
fungii think it's their compromise to avoid uploading new docker images for every change that merges12:37
fungibut they could certainly rework it so that the upload is only triggered in gate or promote when a specific file (dockerfile or whatever) gets changed12:39
fungilike we do for our container images12:39
Clark[m]Aha, it is artifact publishing that broke. That makes more sense.12:39
fungiyep12:39
SvenKieskemhm, don't really know how our artifact build pipeline gets triggered, seems worth investigating, just need to find the time for that.12:45
fungipossibly a big part of why they don't want to run this in gate or promote is that it takes around 2.5 hours to complete the jobs12:45
fungiin part because they build every image for a particular package and then upload them all in one job, rather than having separate jobs per image/component12:46
fungier, they build every image for a particular platform i mean12:47
*** d34dh0r5- is now known as d34dh0r5313:52
clarkbdoesn't look like https://review.opendev.org/c/opendev/system-config/+/892057 got approved yet14:17
clarkbshould I approve it now and then if we run out of time to restart/test while frickler is around today just plan for a restart later?14:18
fungiyes please14:19
fungisorry been distracted by painters14:20
clarkbdone14:20
opendevreviewHarry Kominos proposed openstack/diskimage-builder master: feat: Add new fail2ban elemenent  https://review.opendev.org/c/openstack/diskimage-builder/+/89254114:21
fungii've done fresh test imports of mailing lists for about half the production domains, saving openstack for last and may not get to it until after lunch14:24
fricklersorry I was also distracted by other issues14:25
fricklerbut I'll be around for testing assuming the patch won't go in circles getting merged14:26
clarkbhere's hoping it goes through quickly :)14:26
opendevreviewHarry Kominos proposed openstack/diskimage-builder master: feat: Add new fail2ban elemenent  https://review.opendev.org/c/openstack/diskimage-builder/+/89254114:27
fungiclarkb: https://review.opendev.org/892387 is the starlingx matrix channel logging patch you asked about yesterday, btw14:27
clarkbfungi: +2'd but can you double check the note I made to ensure that isn't a problem14:38
fungiclarkb: yep, that's safe14:41
fungithey're starting to have discussions in those new channels too, so probably the sooner we can get them logging the better14:41
clarkbok lets approve that then. Done14:42
fungithanks!14:42
clarkbinfra-root how does https://etherpad.opendev.org/p/4xnhgK1TFnLsD8WuYMME look for email about zuul version stuff cc corvus frickler gmann JayF 14:55
JayFMy only comment would be timing; if it does cause pain for anyone that's going to be a surprise task right in the middle of the hot time of the release14:59
fungiokay, test migration of all lists other than lists.openstack.org has completed successfully on latest mm3, and that one is in progress now (fingers crossed this held node has a big enough rootfs, it's going to be *very tight*)15:00
clarkbJayF: yes, the problem there is waiting means waiting until october and we're already falling behind on the ansible upgrade path (they do releases every 6 months or so) and we're hoping that since two tenants have moved smoothly we can get away with a quicker transition and start keeping up with ansible15:01
clarkboctober is 2/3 of the way through the ansible 8 lifetime15:02
JayFyeah, I understand, just pointing that out15:02
clarkbI think what we would do in the case of problems is revert the change but then also work to fix the problem (using the test method to confirm) and then reset the default to 8 fairly quickly15:03
fungitext of the announcement lgtm, and the plan is reasonable15:03
clarkbreality is maybe one or two projects will test and give us the all clear (or find a problem and fix it and tell everyone else to fix it but they won't) then we'll switch and thats when we'll actually find any problems if they exist15:03
clarkband if we wait until october thats a month and a half of extra time we aren't collecting that data15:04
clarkbwhich leads to us not getting ansible 9 out in time...15:04
fricklerthough supporting ansible 9 should not be strictly tied to dropping 6?15:07
clarkbI don't think its super strict, but each ansible install is like 800MB or something silly so we'll end up with 3GB container images if we support 6, 8, and 9.15:08
clarkbI personally would like to avoid that. It makes development painful/slow15:08
fungiokay, i realized i could blow away the git cache on this held node to free up plenty of additional space for the lists.openstack.org migration test, so should work out okay15:10
fungilooks like i'm going to need to get to a lunch appointment before 892057 merges, just a heads up15:11
clarkbon gitea99 I logged in as root in my browser to start a session, I stopped the service, appended a string to the jwt secret string, started the service, checked my session was still valid in the browser (seems to be) and checked that the private.pem key did not change (it did not)15:16
clarkbso I think this oauth2 stuff is safe on the upgrade path15:16
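A minimal sketch of that check, assuming a compose-managed gitea on the held node; the compose directory and data paths are assumptions, and [oauth2] JWT_SECRET in app.ini is the setting being perturbed:

    cd /opt/gitea-docker            # assumed compose directory on the held node
    docker-compose down
    # note the current oauth2 signing key before touching anything
    sha256sum data/gitea/jwt/private.pem
    # append junk to the oauth2 JWT secret in app.ini
    sed -i '/^JWT_SECRET/ s/$/extra/' conf/app.ini
    docker-compose up -d
    # the existing browser session should still work, and the key hash should be unchanged
    sha256sum data/gitea/jwt/private.pem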
opendevreviewMerged opendev/system-config master: gerrit: bump index.maxTerms  https://review.opendev.org/c/opendev/system-config/+/89205715:22
opendevreviewMerged opendev/system-config master: Add StarlingX Matrix channels to the logbot  https://review.opendev.org/c/opendev/system-config/+/89238715:22
fungilooks like it merged, but i need to head out so won't be around for restart testing when it deploys, sorry15:22
fungii should be back in an hour-ish15:23
clarkblooks like it is deploying right now (timing worked out for that)15:23
corvusclarkb: msg lgtm.  i put in an extra sentence at the end clarifying (i hope?) that speculative execution is sufficient; feel free to adjust/remove of course.  just thought that might be useful for some folks.15:25
corvusi seem to be a slightly lighter blue than your light blue :)15:26
clarkbwfm15:26
clarkbfrickler: are you still around and able to test if we restart gerrit?15:26
clarkbThe config file did update15:26
clarkbI'd like to `docker-compose down` then move the waiting dir for the replication plugin aside then `docker-compose up -d` if you are still here15:27
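Roughly what that sequence looks like on the server; the compose directory and the replication plugin's queue path are assumptions based on a standard gerrit site layout:

    cd /etc/gerrit-compose                      # assumed compose directory
    docker-compose down
    # set aside the replication plugin's persisted queue so it is not replayed
    mv /home/gerrit2/review_site/data/replication/ref-updates/waiting \
       /home/gerrit2/review_site/data/replication/ref-updates/waiting.old
    docker-compose up -d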
fricklerclarkb: ack15:28
clarkbok I'm going to warn the openstack release team then proceed with that plan15:29
clarkbalso how about a #status notice Gerrit is going to be restarted to pick up a small config update. You will notice a short outage of the service.15:31
frickler+115:31
clarkb#status notice Gerrit is going to be restarted to pick up a small config update. You will notice a short outage of the service.15:32
opendevstatusclarkb: sending notice15:32
-opendevstatus- NOTICE: Gerrit is going to be restarted to pick up a small config update. You will notice a short outage of the service.15:32
clarkbonce that reports it is done I'll do what I described above15:32
opendevstatusclarkb: finished sending notice15:34
clarkbfrickler: it is restarted and I can get the web ui again15:37
clarkbfrickler: I think we are ready for you to try and list starred changes15:37
fricklerclarkb: yay, that works. though I really wonder why I would have starred a change like https://review.opendev.org/c/openstack/oslo.messaging/+/7668615:40
clarkbI think I've starred at least one or two changes due to misclicks15:41
clarkbI seem to recall the layout of one of the older web UIs made that easy15:41
clarkbthat is excellent news. I think we can leave it on the current limit and monitor it. We probably don't need to rollback as long as this seems happy15:41
fricklerI'll still look into writing a script that unstars all merged or abandoned changes. maybe with an age limit15:42
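Such a script could be a thin loop over the Gerrit REST API; the endpoints below are standard Gerrit routes, while the .netrc auth, the jq dependency and the exact query are assumptions:

    GERRIT=https://review.opendev.org
    # list this account's starred changes that are merged or abandoned (an age: term could limit further)
    curl -s --netrc "$GERRIT/a/changes/?q=is:starred+(status:merged+OR+status:abandoned)" \
      | tail -n +2 | jq -r '.[]._number' \
      | while read -r change; do
          # drop the star from each one, politely rate-limited
          curl -s --netrc -X DELETE "$GERRIT/a/accounts/self/starred.changes/$change"
          sleep 1
        done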
clarkbI need to take a small break, but then I'll try to send the ansible 8 email afterwards and work on some code reviews I'm behind on15:45
clarkbemail sent16:47
fungiokay, back, looks like i missed the restart16:59
clarkbyup was quick and seems to have addressed the problem16:59
fungilists.openstack.org test migration is still in progress thanks to the immense message count for the openstack-stable-maint archive17:00
fungino obvious errors yet though17:01
clarkbremember when we ran tumbleweed images because we thought maybe people would like to test the latest and greatest packages... turns out no one really cares to do the work17:35
JayFclarkb: I think re: that ml thread, a lot of people are missing the point that our CI system *is a production environment* and having random broken system stuff in there (including python beta or dot-oh bugs) stops the work of a ton of developers17:36
JayFit's a slider of "stability" and "testing enough stuff" and if anything we've already got an insanely large matrix17:37
funginot only did nobody run opensuse tumbleweed jobs, but nobody even had interest in keeping the images buildable17:38
clarkbJayF: sort of. It has the flexibility to do what they want through periodic jobs or experimental jobs etc. The problem is literally anytime we have invested any effort into helping people with it everyone else ignores us and it's a giant waste of time and effort17:38
fungiwe removed them not because they were unused, but because they were unbuildable17:38
fungiwe need to revisit gentoo as well. we've been unable to build new gentoo images for a full year now17:39
JayFfungi: can you link and assign me that bug?17:39
JayFI have a personal project that a DIB gentoo image would do wonders for17:39
clarkbfwiw sean's suggestion would probably be trivial to attempt and is probably an hour or two of someone's time to poc17:39
JayFand I can use this as an excuse to fix it in dib17:39
JayFclarkb: there's not enough of us to do all of the things and care about all of the things :( Some small % of not wanting a larger matrix is "the list of things I can care about simultaneously is full", not just in terms of hours in the day, but in terms of mental capacity17:40
fungiJayF: this is the closest thing to a bug report because there's also been nobody around who has had the time or interest to look into it and file one: https://nb01.opendev.org/gentoo-17-0-systemd-0000228578.log17:40
fungiemerge: there are no ebuilds built with USE flags to satisfy "dev-python/typing-extensions[python_targets_pypy3(-)?,python_targets_python3_8(-)?,python_targets_python3_9(-)?,python_targets_python3_10(-)?,python_targets_python3_11(-)?]".17:41
JayF> 2022-08-04 09:14:00.316 | + echo 'PYTHON_TARGETS="python3_9"'17:41
JayFthat python target isn't supported anymore for general use17:41
JayFthis is just required maintenance, maybe even just bumping it to 3_10 might be enough17:41
fungiyes, proof that these sorts of things require constant attention because, unlike lts distro versions, they change continually17:41
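The maintenance JayF describes amounts to bumping the interpreter list the element bakes into make.conf; done by hand in a gentoo chroot it would look something like this (python3_11 as the then-current target is an assumption, and the dib variable that feeds this value is not shown):

    # python3_9 is no longer an accepted target for most ebuilds
    echo 'PYTHON_TARGETS="python3_11"' >> /etc/portage/make.conf
    echo 'PYTHON_SINGLE_TARGET="python3_11"' >> /etc/portage/make.conf
    # rebuild packages whose installed USE flags no longer match
    emerge --update --deep --newuse @world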
clarkbJayF: yes, I understand that. A healthy response to that should be "we will do what is reasonable" and not "we need to do all these things regardless!". I feel like OpenDev vs OpenStack is a good illustration of the difference in those two approaches. OpenDev has pretty clearly and loudly said we'll stop doing things that don't have bodies behind them and we've shut stuff down and17:42
clarkbits been great for us. OpenStack meanwhile seems far less willing to trim dead weight and wants to hang onto as much as possible at the expense of those who have the time to help17:42
JayFfungi:  One of those things where likely, if we want to keep it working, we'll have to thin the DIB layer (e.g. specifying PYTHON_TARGETS specifically is not something that is recommended for general gentoo use, even though we should expose it for end-users who wanna set it)17:42
clarkbnote it still feels like opendev has more than it can handle. But the scope of that is far smaller today than before and it helps our sanity I think. At least it helps mine17:42
fungiyes, someone focused on tuning dib to require less maintenance and attention would be one approach17:43
JayFlet me take a swing at this at some point (probably weekend?) I suspect it's low hanging for someone with gentoo experience17:43
JayFI've wanted an excuse to get more involved with dib, I know enough about gentoo to fix this (and might even use it!) so I think you have a winner17:44
JayFjust probably  not something I can charge the 9-5 with lol :D No gentoo in production at G-Research, believe it or not :P17:44
fungiinfra-root: https://review.opendev.org/869210 for upgrading the mailman 3 server should be ready to review. See my comment with the held node info if you want to check out the completed test imports of copies of production mailing lists17:55
fungionce that merges, assuming no new and unforeseen problems arise, we can work on scheduling out the remaining migration windows17:55
clarkbfungi: nice. I'll put it on the list to review.17:57
clarkbFor now though I'm going to take the secrets lock and add the new unused jwt secret for gitea so that we can upgrade gitea when ready17:57
fungisounds good17:59
clarkbok thats done18:00
clarkbI'm going to try and pop out for a bike ride midday today though so unsure how around I'll be to upgrade gitea and/or mm3 today18:01
fungithere's no rush. i should take today's favorable weather as an opportunity to catch up on overdue yardwork18:01
fungisee diablo_rojo sing our praises (starting around 4 minutes in): https://www.youtube.com/watch?v=OlcIDv4iyy018:31
opendevreviewHarry Kominos proposed openstack/diskimage-builder master: feat: Add new fail2ban elemenent  https://review.opendev.org/c/openstack/diskimage-builder/+/89254118:34
fungii'm in the process of cleaning up 72 new leaked images in rackspaces iad region, as well as 380 in dfw and 393 in ord19:08
fungi845 total leaked images to delete19:08
fungii'm injecting the requests with a 10-second delay between each in order to not raise their ire19:11
fungishould require ~2.5 hours to complete depending on how slowly each call returns19:12
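The cleanup loop itself is simple; the cloud name, region and the file of leaked image UUIDs here are placeholders:

    # leaked-images.txt holds one image UUID per line, collected beforehand
    while read -r uuid; do
        openstack --os-cloud rax --os-region-name IAD image delete "$uuid"
        # space the calls out so the API rate limits are not tripped
        sleep 10
    done < leaked-images.txt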
fricklerhmm, I wonder where these 72 images came from. pretty sure I didn't have that many failed upload attempts20:23
fungilikely from the brief period where we tried to reenable it before we reverted that again20:43
fungisince i hadn't done any more cleanup at that point20:43
clarkbfungi: in the mm3 change https://review.opendev.org/c/opendev/system-config/+/869210/8/docker/mailman/web/mailman-web/settings.py#56 might allow us to unfork that file in our ansible role. I can't remember if that was the only thing we had to fork for (git diff should clarify I guess). Note we should probably do that in a followup to the upgrade not as part of the upgrade21:30
clarkbfungi: do you know where/what sets the new SMTP_HOST_* vars in https://review.opendev.org/c/opendev/system-config/+/869210/8/docker/mailman/core/docker-entrypoint.sh I'm wondering if we need to set that in our docker compose environment21:30
clarkbI don't see it in the rest of the change21:30
fungii don't see that we set it anywhere21:31
clarkbwe may want to grep it in the upstream repo to see how they use those vars and decide if we need to set them. I think we may override the exim config anyway so it may not be important21:32
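A quick way to answer that, assuming the upstream images come from the maxking/docker-mailman repository:

    git clone https://github.com/maxking/docker-mailman
    # find every place the SMTP_HOST_* variables are read or documented
    grep -rn 'SMTP_HOST' docker-mailman/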
clarkbbut those were the only two things I saw as worth followup on. All the versions of software seem to match the upstream release announcement21:32
fungithough it does optionally get consumed in settings.py21:32
fungiit's referred to in the readmes but not set by anything21:33
fungii think it's there for cases where you want to set up outbound smtp auth21:34
fungithen you can define those values in the dockerfile21:34
fungisince we send out directly from the server's own mta it's not needed, we don't allow anyone besides localhost to relay through it to remote addresses21:35
clarkbya reading the readme that became more clear. I wonder if that empty string will be put in places and break outbound smtp though21:35
clarkbfungi: did you test outbound smtp through mailman on your held node? if that works I think we can proceed as is21:35
fungii can try. should be able to send something through an ml on it and then check what's stuck in exim's deferral queue21:36
clarkb++21:37
fungias long as exim is attempting remote delivery (it won't succeed because of the custom iptables block) then that's sufficient to confirm it is allowing mailman to send outbound21:37
clarkbyup since its the mailman -> exim not exim -> world connection we're worried about here21:38
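The verification boils down to posting through a list on the held node and confirming exim tried (and was blocked from) remote delivery; the list address and the Debian exim paths here are illustrative:

    # send a test post through a migrated list on the held node
    echo "outbound smtp test" | mail -s "mm3 outbound test" sandbox@lists.example.org
    # watch mailman hand the message to the local exim
    sudo tail -n 50 /var/log/exim4/mainlog
    # the message should be sitting in the queue, deferred by the iptables block
    sudo exim4 -bp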
fungiclarkb: i think this adequately captures it: https://paste.opendev.org/show/bM2NnW6NZGDPpQ8swDZJ/21:51
fungii trimmed out the similar delivery failures for other recipients subscribed to that ml21:52
clarkbyup that message id seems to match in all three logs. I'll +2 the change in that case21:52
TheJuliao/ Hey guys, you can reclaim that hold I have22:27
fungithanks TheJulia! did it help at all?22:27
TheJuliaYeah, it helped me understand it wasn't the logging and actually helped me figure out what was wrong with the overall job config22:28
fungiheh, that's so computers22:28
fungianyway, autohold has been cleaned up, thanks!22:28
JayFfungi: TheJulia was running facefirst into the OVN-not-respecting-MTU issue too :( 22:33
JayFOpenStackers are of one mind even when we aren't working directly together, it seems ;) 22:33
TheJuliaindeed....  I've not flipped the table yet today, but the urge is super strong22:34
* TheJulia thinks rocket motors22:34
clarkbI self medicate with bike rides outside22:34
fungiyeah, thankfully it's not one of those table-flipping days for me, just trying to wrangle my jungle of a yard into some semblance of not-getting-fined-by-the-town22:35
JayFI've had like, 4 days in a row where I've had an item I don't wanna do on my todo list, towards the end. Finally think I've run myself out of things ahead of it :) 22:35
fungisorry to hear that. can i help by giving you more things to do instead? ;)22:36
clarkbI rode past https://www.digitalrealty.com/data-centers/americas/portland/pdx12 and its sibling PDX11 though and suddenly I was reminded computers exist. Those datacenters are absolutely massive too22:36
JayFfungi: that's what you did yesterday :) https://lists.openstack.org/pipermail/openstack-discuss/2023-August/034854.html22:37
TheJuliaclarkb: I may just go chill on the recumbent bike for a while and play inside job on the tv22:37
clarkbI really enjoy it particularly this time of year. Though its probably far too hot in your area to be outside for long right now22:38
clarkbso ++ to inside job22:38
JayFYeah I'm mostly done for the day too, my chill time is usually my porch swing outside. Sadly about to be in the part of the year where "outside = monsoon"22:38
JayFoh TheJulia speaking of, you all avoid any damage?22:38
TheJuliaYeah, house is mostly untouched.... We have no road/path to get into the airport22:38
TheJuliaor the preferred supermarket22:39
JayFI saw photos where it didn't even look like a flood, it just looked like feet of mud in some places22:39
TheJuliathe roads are like... gone... and a train buried in mud and everything that derailed22:39
JayFThat's probably going to take a long time to get fixed, too :(22:40
TheJuliayeah22:40
TheJuliawe don't know if the roads up to our mountain hideaway are washed out yet either, I'll find out friday22:41
clarkbcan you still escape to LA or is that all messed up too?22:41
TheJuliadunno about LA, from what I've gathered the peaks nearby sheltered and captured a lot of the rain 22:41
TheJuliawell, sheltered areas west of us22:41
TheJuliawe could likely get to LA, but that is like... 2-4 hours of driving depending on the day22:41
TheJuliaotherwise I'd go see Robert Picardo sing on Saturday22:42
JayFI suspect you all are well supplied22:42
* TheJulia wonders if there is a mobile emitter....22:42
clarkbya we don't venture to seattle super often and it's a similar distance time wise22:43
clarkbbut if I really had too I've always thought I could fly out of seatac if necessary22:43
JayFclarkb: I feel like we might have had this conversation before, but I didn't know you were up here22:43
TheJuliaheh22:44
TheJuliadeja vu22:44
clarkbJayF: I'm in the portland area22:45
JayFclarkb: I'm just north of JBLM in Lakewood. 22:45
JayFclarkb: so if you ever do come thru to Seattle and want to say hello, I have a smoker and I know how to use it to make tasty bbq lunches (assuming you eat meat)22:46
clarkbI do and that would be awesome. Don't currently have a seattle trip on the calendar but we tend to make it there a couple times a year to visit friends and family22:46
clarkbthe last time we went we decided to do it all in one day because seattle hotel prices are absurd now22:47
JayFJust give me a bit of heads up; i'm about an hour south of Seattle22:47
clarkbeven in southcenter it was like $350/night + parking22:47
JayFthat is pricey; I can't believe it'd still be that expensive if you went a little further south to Kent though22:47
fungifeet of mud (and sand, and seagrass) is what post-flood cleanup looks like out here, fwiw22:48
fungiso doesn't sound that odd22:48
clarkbI always enjoy seeing the grass stuck in the chainlink fence when the creek near me floods22:48
clarkb"the water got this high"22:48
TheJuliafungi: We lack boats here... that whole "it is normally just sand as far as the eye can see!" thing22:48
JayFfungi: in .nc.us, I'm used to seeing it more as sand than like MUD-MUD, if that makes sense? I think I'm drawing a distinction (where none may exist?) between "wet sand" and "mud from soil/ground"22:48
fungiwhatever dies and sinks to the bottom of the marsh gets picked up by the tide and dumped in the street (and in our house)22:48
fungiyou'd think "how can the marsh floor get dumped *inside* your house?" but that's just it. the first thing the tide does is blow out all the doors and windows on the bottom floor so it has better access22:50
TheJuliafluids dynamics!22:50
TheJuliaand well, water doesn't pressurize22:50
* fungi still has some doors that need re-hanging for the past 5 years22:50
TheJuliafun22:50
JayFI actually have a GC coming tomorrow to quote water damage repair. That all came from the sky down into the house though -- if my house floods half of western washington will be underwater22:52
TheJulia... sigh22:52
TheJuliaI need to get someone out for my house, but it would almost be easier to hire a male friend to make that phone call with the way some folks are behaving these days22:53
fungiyeah, that seems like an unnecessary amount of added challenge22:53
TheJuliaonly took ~4 months to find someone to trim a tree22:54
JayFI found a local place a few years ago, GC owned/operated by a woman, all the office staff are women. It's such a nice change. Good communication without it being corporatey.22:54
fungiugh22:54
fungiwish we had contractors like that out here, yeah22:54
JayFBeing able to email back and forth with the GC instead of it just being some dude named Ted who stops by in a beat up truck every now and then to hit something with a hammer22:54
JayFso nice22:54
TheJuliaomg that sounds heavenly22:55
clarkbJayF: not sure if you've ben but hood canal up to port townsend/port angeles is one of my favorite places to explore22:55
fungited was just here this morning scraping down the awful popcorn ceiling in our guestroom. he'll be back tomorrow to hit it with hammers though22:55
JayFclarkb: I've not, I'll add that to the list. We (my wife and I) can't travel much together right now because of our pet situation. There's a lot of places in WA we haven't seen yet.22:55
JayFWe did make it to ocean shores22:55
clarkbquinault is also amazing on the sound end of the olympics22:55
clarkb*south end22:56
clarkbI've never made it to vampire country though22:56
JayFvampire country?22:57
JayFreally the only things I've seen here local are Seattle-things and a day trip to Aberdeen then Ocean Shores. We then got a doggo who is scared of approximately everything so we can't leave him with anyone :( 22:57
* TheJulia raises an eyebrow and wonders if this is where she should be living22:57
clarkbForks where twilight is set22:58
TheJuliaoh22:58
TheJulianvmd22:58
funginext to sasquatch country?22:58
JayFTheJulia: I've told you multiple times that .wa.us is the place to be :D22:58
JayFTheJulia: no hurricanes, guaranteed[1]22:58
JayF1: offer not valid in climate change situations22:58
TheJuliaheh22:58
opendevreviewJay Faulkner proposed openstack/diskimage-builder master: DNM: Testing Gentoo CI job against merged-usr profile  https://review.opendev.org/c/openstack/diskimage-builder/+/89262723:10
JayFI found a (potentially) easy/obvious break in the gentoo build, maybe I'll get lucky and pluck a low hanging fruit :D 23:10
clarkbthe testing of the very foundation of the distro images is pretty good23:25
clarkbso hopefully you get workable results that you can refine off of23:25
JayFit's one of those things where gentoo is a rolling release distro, but as long as you are careful not to version-lock anything it should be "stable" in terms of interface for image building, with the small exception of news items23:26
JayFif I can get this working, I'll just make it a point to check these builds anytime I get a news item, that'll give plenty of warning (I run ~arch, basically the equivalent of "testing" vs arch which is "stable")23:27
JayFin this case: new systemd released, and I think is stable, that requires merged-usr, which means you gotta use that profile (they are about to release 23.0 profiles soon which will fix that awkwardness)23:27
JayFin fact... profiles are probably the closest analogue in gentoo to release23:28
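For reference, moving a gentoo build to the merged-usr layout is roughly the following; the profile name matches the 17.1 merged-usr variants and the sys-apps/merged-usr migration tool is what the upstream news item points at, but both are worth double-checking:

    eselect profile list
    # convert the filesystem layout, then switch to the merged-usr profile
    emerge --oneshot sys-apps/merged-usr && merged-usr
    eselect profile set default/linux/amd64/17.1/systemd/merged-usr
    # rebuild anything the profile change affects
    emerge --update --deep --newuse @world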
opendevreviewJay Faulkner proposed openstack/diskimage-builder master: DNM: Testing Gentoo CI job against merged-usr profile  https://review.opendev.org/c/openstack/diskimage-builder/+/89262723:28
fungiyeah, my main concern with debian is that testing isn't really a rolling release (it pauses for freezes and such) while unstable isn't always guaranteed to be installable due to dependency transitions so may result in extended periods of unbuildable images23:29
fungii personally use unstable on most of my systems, but i also have intimate familiarity with how to un-break it. it doesn't seem suitable for ci jobs23:30
JayFBasically the rule with gentoo is, if you run ~arch you're going to end up with weird breakages from time to time that usually are resolved by ignoring your package manager for 48 hours and retrying an upgrade (or fixing/reporting the bug)... for arch, it's extremely rare for it to be meaningfully broken23:35
JayFso I hope I can get it working and monitor it23:36
opendevreviewJay Faulkner proposed openstack/diskimage-builder master: DNM: Testing Gentoo CI job against merged-usr profile  https://review.opendev.org/c/openstack/diskimage-builder/+/89262723:44
fungiinfra-root: rackspace leaked images have been cleaned up in all three regions now23:46
corvusfungi: thanks!  if it happens again, ping me and i'll take a look with you23:55
*** dtantsur_ is now known as dtantsur23:58

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!