Wednesday, 2022-10-12

03:15 <opendevreview> OpenStack Proposal Bot proposed openstack/swift master: Imported Translations from Zanata  https://review.opendev.org/c/openstack/swift/+/861022
09:52 <opendevreview> Alistair Coles proposed openstack/swift master: proxy: refactor error limiter to a class  https://review.opendev.org/c/openstack/swift/+/858790
11:31 <opendevreview> Alistair Coles proposed openstack/swift master: proxy: refactor error limiter to a class  https://review.opendev.org/c/openstack/swift/+/858790
12:46 <opendevreview> Alistair Coles proposed openstack/swift master: Refactor memcache config and MemcacheRing loading  https://review.opendev.org/c/openstack/swift/+/820648
12:46 <opendevreview> Alistair Coles proposed openstack/swift master: Global error limiter using memcache  https://review.opendev.org/c/openstack/swift/+/820313
18:10 <DHE> timburke_: for context, I have a 2.23.1-based cluster with a 10+10 EC policy. randomly GETs of big files just won't start. I'm running curl with tempurl authentication and it just sits there.... it's not too common - I'd say less than 1% of requests - but it does happen
18:10 <DHE> figured I'd apply some updates to the cluster since it's getting old at this point
18:19 <timburke_> DHE, kinda sounds like something we fixed in 2.27.0: https://github.com/openstack/swift/blob/master/CHANGELOG#L709-L711
18:20 <timburke_> you could try manually applying https://github.com/openstack/swift/commit/86b966d950000978e2438f1bd5d9e2bf2e238cd1 -- it's (fortunately) a pretty small change
18:20 <timburke_> does it hang just the one request, or the whole process?
18:26 <timburke_> either way, you might be able to use https://github.com/tipabu/python-stack-xray/blob/master/python-stack-xray to get a sense of where it's hanging, though it can be tricky if it's a busy server
18:29 <DHE> just the one request. I terminate curl and try again. it works. swift itself has been largely fine.
18:30 <DHE> yeah I tampered with python itself to fix this. I think I'm the one who helped push that through in the first place
18:30 <timburke_> ah, right! :-)
18:30 <DHE> for non-EC jobs (which is 99.9% of the workload) it's been humming along nicely
18:31 <timburke_> logs have much of anything to say? i know it'd be a little tricky since there's no transaction id sent back to the client
18:33 <timburke_> if not, getting a stack is probably the best way forward -- might be able to spin up a separate proxy-server instance and hit that directly until it hangs, then run the xray script
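
(For reference, one way to see where a hung request is stuck. This is not the python-stack-xray script itself, just a rough sketch assuming the proxy is an eventlet/greenlet-based process; the signal number and output path are arbitrary choices for the example, and you'd want to pick a signal the server doesn't already handle.)

    # Rough sketch, not python-stack-xray: dump every greenlet's stack when
    # the operator sends SIGUSR2. Install early in process startup, e.g.
    # from a custom middleware's __init__.
    import gc
    import signal
    import traceback

    import greenlet


    def dump_greenlet_stacks(signum, frame):
        # Walk all live objects, find suspended greenlets, and write out
        # their current stacks so you can see where each request is parked.
        with open('/tmp/proxy-stacks.txt', 'w') as out:
            for obj in gc.get_objects():
                if isinstance(obj, greenlet.greenlet) and obj.gr_frame:
                    out.write('--- %r ---\n' % obj)
                    out.write(''.join(traceback.format_stack(obj.gr_frame)))
                    out.write('\n')


    signal.signal(signal.SIGUSR2, dump_greenlet_stacks)

(With something like that installed, you'd send the signal to the proxy pid while a request is hung and read the output file to see where each greenthread is waiting.)
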
20:57 <kota> good morning
20:59 <timburke_> o/
20:59 *** timburke_ is now known as timburke
21:00 <timburke> #startmeeting swift
21:00 <opendevmeet> Meeting started Wed Oct 12 21:00:27 2022 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00 <opendevmeet> The meeting name has been set to 'swift'
21:00 <timburke> who's here for the swift team meeting?
21:00 <kota> o/
21:01 <cschwede> o/
21:01 <timburke> whoa! a cschwede!
21:01 <cschwede> :)
21:02 <zaitcev> A rare meeting indeed.
21:02 <mattoliver> o/
21:02 <mattoliver> cschwede is here too!
21:03 <timburke> not sure if acoles or clayg are around -- i think clay's kind of busy trying to get an nvidia release lined up
21:03 <timburke> first up
21:03 <timburke> #topic PTG
21:04 <timburke> i've booked time slots and updated the etherpad list to point to the right place!
21:04 <mattoliver> oh nice!
21:04 <kota> good
21:05 <timburke> i went with 2100-2300 M-Th (though i wonder if we could get away with starting a little earlier to get more of acoles's (and cschwede's?) time)
21:05 <cschwede> great, happy to meet all of you at least virtually :)
21:06 <timburke> though it wouldn't help kota -- timezones are hard :-(
21:06 <kota> in UTC?
21:06 <timburke> yes -- so i know it'd be a bit of an early start
21:06 <kota> seems like it's just morning, doesn't matter so much
21:08 <mattoliver> if earlier makes it easier for Al and cschwede then I'm ok with it.
21:08 <timburke> if we try for an hour earlier, it's 5am -- might be "just morning" for you, but that doesn't match my experience ;-)
21:08 <mattoliver> lol
21:09 <kota> it's still ok, i think
21:09 <kota> :)
21:09 <timburke> i went with just M-Th to make sure we don't run into kota and mattoliver's weekend, and kept shorter slots to try to keep everyone well rested
21:10 <timburke> if you've got topics for the PTG, please add them to the etherpad!
21:10 <timburke> #link https://etherpad.opendev.org/p/swift-ptg-antelope
21:11 <timburke> and if we feel like we need more time, we can always book more slots
21:12 <timburke> that's all i've got for ptg logistics -- any questions or comments?
21:12 <mattoliver> cool, I'll go through it. And add stuff I can think of.
21:12 <timburke> 👍
21:13 <mattoliver> Do we want to book a "normal" time for an ops feedback session?
21:13 <mattoliver> happy to do it in my timezone though (makes it much easier for me) ;)
21:13 <timburke> i don't know what "normal" means :P
21:14 <timburke> but yeah, an ops feedback session is probably a good idea
21:14 <mattoliver> a time that has more overlap with the attendees of the PTG. Although it'll probably be just mostly us.
21:16 <timburke> ah, yeah -- looks like a lot of the rest of the ptg is in the 1300-1700 UTC slot
21:17 <timburke> i think i'll let mattoliver schedule that one then :-)
21:17 <mattoliver> lol, shouldn't have opened my big mouth :P kk :)
21:19 <timburke> all right -- i don't have much else to bring up
21:19 <timburke> #topic open discussion
21:19 <timburke> what else should we talk about this week?
21:21 <timburke> cschwede, i figured you'd have something, after making the effort to attend ;-)
21:21 <clarkb> oh I have something
21:22 <clarkb> we're (opendev) looking to bump our default base job node to ubuntu jammy on the 25th
21:22 <cschwede> timburke: not really, i just wanted to ensure I get all the important PTG infos :)
21:22 <timburke> :-)
21:22 <clarkb> email has been sent about it, but I know y'all's bindep file doesn't work with jammy currently so wanted to call it out (I suspect that your jobs are probably fine since they probably specify a specific node type already)
21:22 <timburke> clarkb, oh! i bet that's part of the recent interest in https://review.opendev.org/c/openstack/swift/+/850947
21:23 <clarkb> timburke: I left a comment on the bindep file in your py310 change with a note about correcting that
21:23 <clarkb> ya, that change
21:23 <mattoliver> oh, good thing I moved the vagrant swift all-in-one environment over to jammy then :P
21:24 <timburke> thanks -- the bigger issue right now is the parent change -- our continued reliance on nosetests is definitely a problem now :-(
21:24 <clarkb> I have no idea if this will be the case for swift due to all the IO, but in Zuul there is a distinct difference in test runtimes between 3.8 and 3.10, with 3.10 being noticeably quicker, which is nice
21:25 <mattoliver> I've heard there is a performance boost in later py3's. nice
21:25 <timburke> i've noticed similar sorts of performance benefits when benchmarking some ring v2 work -- going py27 -> py37 -> py39 -> py310 kept making things better and better :-)
21:25 <mattoliver> down with nose, time to move everything to pytest.
21:26 <timburke> now if only i could get that first jump done in the clusters i run
21:28 <clarkb> re pytest: one thing I've encouraged others to do is use ostestr for CI because it does parallelized testing without hacks (though maybe pytest has made this better over time) and it is a standard test runner, which means you can run pytest locally to get the more interactive tracebacks and code context output
21:28 <clarkb> but if you use pytest in CI you very quickly end up being pytest-specific, and running with standard runners is much more difficult
21:29 <mattoliver> great tip, thanks clarkb
21:30 <clarkb> looks like your test jobs don't take a ton of time. But for a lot of projects not running in parallel would make them run for hours
21:32 <timburke> our tests (unfortunately) generally can't take advantage of parallelized tests -- unit *might* be able to (though i've got concerns about some of the more func-test-like tests), but func and probe are right out
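
(An illustrative aside on why some tests resist parallel runners: anything that grabs a fixed, shared resource -- a hard-coded port, a well-known temp path -- breaks when two workers hit it at once. This is a hedged sketch, not code from the swift tree; binding port 0 is one common way to keep a listener test parallel-safe.)

    import socket
    import unittest


    class TestListener(unittest.TestCase):
        def test_parallel_safe_listener(self):
            # Bind port 0 so the kernel picks a free port; two test workers
            # running this at once can't collide the way they would on a
            # hard-coded port number.
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.addCleanup(sock.close)
            sock.bind(('127.0.0.1', 0))
            sock.listen(1)
            host, port = sock.getsockname()
            self.assertNotEqual(port, 0)

(Plain unittest.TestCase classes like this also stay runner-agnostic: the same file runs under a parallel runner like ostestr/stestr in CI or under pytest locally for nicer tracebacks, which is the split clarkb describes above.)
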
21:35 <timburke> all right, i think i'll call it
21:35 <timburke> thank you all for coming, and thank you for working on swift!
21:36 <timburke> see you next week for the PTG!
21:36 <timburke> #endmeeting
21:36 <opendevmeet> Meeting ended Wed Oct 12 21:36:05 2022 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
21:36 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/swift/2022/swift.2022-10-12-21.00.html
21:36 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/swift/2022/swift.2022-10-12-21.00.txt
21:36 <opendevmeet> Log:            https://meetings.opendev.org/meetings/swift/2022/swift.2022-10-12-21.00.log.html
21:54 <timburke> hmm... speaking of CI, we should figure out what's going on with the fips func test jobs -- they started failing a couple weeks ago (9/29): https://zuul.opendev.org/t/openstack/builds?job_name=swift-tox-func-py39-centos-9-stream-fips&project=openstack/swift
21:55 <timburke> i don't immediately see any difference between the packages installed (or setup writ large) for the most recent pass vs that next failure, either...
22:30 <opendevreview> Merged openstack/swift stable/yoga: CI: Add nslookup_target to FIPS jobs  https://review.opendev.org/c/openstack/swift/+/858277
