Wednesday, 2020-07-22

*** martial has quit IRC00:01
*** Lucas_Gray has joined #openstack-meeting00:12
*** gyee has quit IRC00:20
*** tetsuro has joined #openstack-meeting00:27
*** yaawang has quit IRC01:57
*** yaawang has joined #openstack-meeting01:58
*** ricolin has joined #openstack-meeting02:01
*** Liang has quit IRC02:02
*** ykatabam has joined #openstack-meeting02:14
*** tetsuro has quit IRC02:20
*** rfolco has quit IRC03:00
*** baojg has quit IRC03:10
*** psachin has joined #openstack-meeting03:31
*** Liang__ has joined #openstack-meeting03:31
*** redrobot has quit IRC03:42
*** mahatic has joined #openstack-meeting03:45
*** rbudden has quit IRC03:50
*** rbudden has joined #openstack-meeting03:51
*** rbudden has quit IRC03:51
*** rbudden has joined #openstack-meeting03:52
*** rbudden has quit IRC03:52
*** rh-jelabarre has quit IRC04:25
*** diurnalist has quit IRC04:35
*** Lucas_Gray has quit IRC04:36
*** diurnalist has joined #openstack-meeting05:32
*** diurnalist has quit IRC05:36
*** vishalmanchanda has joined #openstack-meeting05:55
*** e0ne has joined #openstack-meeting06:27
*** eharney has quit IRC06:40
*** eharney has joined #openstack-meeting06:52
*** ociuhandu has quit IRC07:10
*** ralonsoh has joined #openstack-meeting07:29
*** rcernin has quit IRC07:32
*** ociuhandu has joined #openstack-meeting07:36
*** markvoelker has joined #openstack-meeting07:47
*** markvoelker has quit IRC07:52
*** psahoo has joined #openstack-meeting08:03
*** moguimar has joined #openstack-meeting08:08
*** kaisers has joined #openstack-meeting08:11
*** yaawang has quit IRC08:13
*** yaawang has joined #openstack-meeting08:13
*** tetsuro has joined #openstack-meeting08:24
*** baojg has joined #openstack-meeting08:26
*** cschwede has joined #openstack-meeting08:32
*** tetsuro has quit IRC08:48
*** rcernin has joined #openstack-meeting09:11
*** rcernin has quit IRC09:35
*** Liang__ has quit IRC09:40
*** yaawang has quit IRC10:11
*** yaawang has joined #openstack-meeting10:12
*** yaawang has quit IRC10:20
*** yaawang has joined #openstack-meeting10:21
*** priteau has joined #openstack-meeting10:34
*** Lucas_Gray has joined #openstack-meeting10:41
*** moguimar has quit IRC10:41
*** ricolin has quit IRC10:48
*** raildo has joined #openstack-meeting11:29
*** baojg has quit IRC11:49
*** rcernin has joined #openstack-meeting11:50
*** baojg has joined #openstack-meeting11:50
*** rfolco has joined #openstack-meeting11:51
*** rh-jelabarre has joined #openstack-meeting12:00
*** moguimar has joined #openstack-meeting12:02
*** ociuhandu has quit IRC12:03
*** ociuhandu_ has joined #openstack-meeting12:03
*** baojg has quit IRC12:11
*** baojg has joined #openstack-meeting12:12
*** tosky has joined #openstack-meeting12:17
*** e0ne has quit IRC12:33
*** e0ne has joined #openstack-meeting12:37
*** seba has quit IRC12:38
*** carloss has quit IRC12:38
*** e0ne has quit IRC12:39
*** mordred has quit IRC12:40
*** mahatic has quit IRC12:41
*** seba has joined #openstack-meeting12:42
*** carloss has joined #openstack-meeting12:42
*** mordred has joined #openstack-meeting12:45
*** moguimar has quit IRC12:52
*** moguimar has joined #openstack-meeting13:09
*** armstrong has joined #openstack-meeting13:09
*** rbudden has joined #openstack-meeting13:11
*** baojg has quit IRC13:23
*** baojg has joined #openstack-meeting13:24
*** bnemec has joined #openstack-meeting13:30
*** rcernin has quit IRC13:32
*** sluna has quit IRC13:33
*** sluna has joined #openstack-meeting13:33
*** baojg has quit IRC13:41
*** baojg has joined #openstack-meeting13:43
*** TrevorV has joined #openstack-meeting13:43
*** ricolin has joined #openstack-meeting13:43
*** ZhuXiaoYu has joined #openstack-meeting13:53
*** Liang__ has joined #openstack-meeting13:58
*** Liang__ is now known as LiangFang13:59
*** e0ne has joined #openstack-meeting14:00
*** LiangFang has quit IRC14:12
*** Liang__ has joined #openstack-meeting14:14
*** ykatabam has quit IRC14:14
*** e0ne has quit IRC14:22
*** baojg has quit IRC14:22
*** e0ne has joined #openstack-meeting14:23
*** baojg has joined #openstack-meeting14:24
*** psachin has quit IRC14:24
*** moguimar has quit IRC14:38
*** moguimar has joined #openstack-meeting14:43
*** mlavalle has joined #openstack-meeting14:44
*** rsimai has joined #openstack-meeting14:47
*** Guest23136 has joined #openstack-meeting14:47
*** Guest23136 is now known as redrobot14:49
*** diurnalist has joined #openstack-meeting14:58
*** jmlowe has left #openstack-meeting15:04
*** Liang__ has quit IRC15:15
*** Liang__ has joined #openstack-meeting15:19
*** armstrong has quit IRC15:19
*** Liang__ has quit IRC15:23
*** svyas|afk has joined #openstack-meeting15:26
*** gyee has joined #openstack-meeting15:48
*** moguimar has quit IRC15:54
*** baojg has quit IRC15:54
*** baojg has joined #openstack-meeting15:56
*** zbr|rover is now known as zbr15:58
*** moguimar has joined #openstack-meeting16:03
*** e0ne has quit IRC16:15
*** ricolin has quit IRC16:19
*** Lucas_Gray has quit IRC16:24
*** rsimai has quit IRC16:24
*** Lucas_Gray has joined #openstack-meeting16:31
*** baojg has quit IRC16:34
*** cschwede has quit IRC16:41
*** lbragstad_ has joined #openstack-meeting16:54
*** lbragstad has quit IRC16:57
*** ociuhandu has joined #openstack-meeting17:11
*** ociuhandu_ has quit IRC17:14
*** ociuhandu has quit IRC17:16
*** moguimar has quit IRC17:17
*** e0ne has joined #openstack-meeting17:20
*** ociuhandu has joined #openstack-meeting17:22
*** Lucas_Gray has quit IRC17:23
*** psahoo has quit IRC17:25
*** ociuhandu has quit IRC17:26
*** lbragstad_ has quit IRC17:37
*** lbragstad_ has joined #openstack-meeting17:38
*** lbragstad__ has joined #openstack-meeting17:45
*** lbragstad_ has quit IRC17:47
*** jmasud has quit IRC18:00
*** jmasud has joined #openstack-meeting18:00
*** _erlon_ has joined #openstack-meeting18:02
*** jmasud has quit IRC18:07
*** e0ne has quit IRC18:15
*** jmasud has joined #openstack-meeting18:35
*** ZhuXiaoYu has quit IRC18:44
*** ralonsoh has quit IRC18:45
*** bnemec has quit IRC19:03
*** baojg has joined #openstack-meeting19:07
*** e0ne has joined #openstack-meeting19:17
*** baojg has quit IRC19:31
*** ociuhandu has joined #openstack-meeting19:31
*** baojg has joined #openstack-meeting19:32
*** ociuhandu_ has joined #openstack-meeting19:34
*** ociuhand_ has joined #openstack-meeting19:35
*** ociuhandu has quit IRC19:36
*** ociuhandu_ has quit IRC19:38
*** diurnalist has quit IRC19:38
*** adrianc_ has joined #openstack-meeting19:44
*** ociuhand_ has quit IRC19:45
*** adrianc has quit IRC19:47
*** e0ne has quit IRC19:47
*** TrevorV has quit IRC19:59
*** e0ne has joined #openstack-meeting20:04
*** e0ne has quit IRC20:12
*** diurnalist has joined #openstack-meeting20:13
*** hemna has quit IRC20:32
*** hemna has joined #openstack-meeting20:39
*** priteau has quit IRC20:47
*** zaitcev has joined #openstack-meeting20:51
*** patchbot has joined #openstack-meeting20:57
timburke#startmeeting swift21:00
openstackMeeting started Wed Jul 22 21:00:06 2020 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: swift)"21:00
openstackThe meeting name has been set to 'swift'21:00
timburkewho's here for the swift meeting?21:00
alecuyero/21:00
rledisezhi o/21:00
seongsoochoo/21:00
claygare we having a 🎉!!!?21:01
timburkeagenda's at https://wiki.openstack.org/wiki/Meetings/Swift21:01
timburke#topic releases21:02
*** openstack changes topic to "releases (Meeting topic: swift)"21:02
timburkeso i'm realizing that i've been pretty bad about doing releases lately. i feel like i'd mentioned wanting to do a release a month or so ago, but just never got around to it21:02
timburkeso now it's been 3 months since 2.25.021:02
* tdasilva is lurking21:03
timburkedoes anyone know of patches we really ought to have before a 2.26.0?21:03
*** bnemec has joined #openstack-meeting21:04
claygnot really, but we should cut 2.26.0 so we can start landing stuff for 2.27.0!!21:04
timburkethe only two that come to mind for me are p 742033 and p 73916421:04
patchbothttps://review.opendev.org/#/c/742033/ - swift - py3: Work with proper native string paths in crypt... - 3 patch sets21:04
patchbothttps://review.opendev.org/#/c/739164/ - swift - ec: Add an option to write fragments with legacy crc - 1 patch set21:04
claygyes, both of those are terrible 😥21:05
timburkebut as clayg points out, we can always have another release :-)21:05
claygi mean the problems they address - the patches are probably 🤩21:05
zaitcevwe're going to have 2.917.0 eventually if this continues21:05
timburkeand that last one doesn't entirely make sense without https://review.opendev.org/#/c/738959/21:05
patchbotpatch 738959 - liberasurecode - Be willing to write fragments with legacy crc - 1 patch set21:05
timburkezaitcev, with luck we'll feel like we can drop py2 in the not *so* distant future -- that seems to warrant a 3.0 ;-)21:06
claygyeah the ec one seems worth some effort to get all the versions to line up21:06
claygthe py3 fix is probably going to be backported to stable releases anyway (timburke is doing great with the backports 👏)21:06
zaitcevso, you went with an environment variable after all.21:07
claygzaitcev: did you have a better idea?!21:07
zaitcevIn that case, PyLibEC does not need a change, right?21:07
timburkezaitcev, or rather, i never got around to trying to do it as a new, supported API21:08
timburkecorrect, pyeclib won't need to change as things stand21:08
timburke(which seems like a point in the env var's favor to me)21:09
clayg21:10
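For context on the env-var approach discussed above, here is a minimal sketch of how an operator might opt the object servers into legacy-CRC writes during a mixed-version window. The variable name LIBERASURECODE_WRITE_LEGACY_CRC is an assumption for illustration -- check patch 738959 for the actual knob. The appeal of the approach is that the C library reads the flag itself, so neither pyeclib nor swift needs a code change.

    import os
    import subprocess

    def start_object_server_with_legacy_crc(conf_path):
        # assumed env var name -- see the liberasurecode review for the real one
        env = dict(os.environ)
        env['LIBERASURECODE_WRITE_LEGACY_CRC'] = '1'
        # the object-server writes EC fragments through liberasurecode,
        # which picks the flag up from its process environment
        return subprocess.Popen(['swift-object-server', conf_path], env=env)

Setting it per-process rather than globally keeps the legacy behaviour scoped to the one service that needs it during the upgrade.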
timburkeso i'm not really hearing much in the way of known issues -- is anyone going to be able to review either of those two threads in the near future? or should we just release without them?21:10
*** bnemec has quit IRC21:11
timburkeclayg's right about backporting the py3 fix -- i'm definitely planning to take it back to train, and if we get it reviewed soon-ish, i'll include it in the next stable release (which i hope to do concurrent with 2.26.0)21:11
claygthe py3 fix is newer; but i'm less clear on how to effectively review the libec patches21:13
timburkeclayg, fwiw, i built a few different versions of libec (i want to say 1.0.9, 1.5.0, 1.6.0, and a new 1.6.1+ that supports the env var) and used LD_PRELOAD to split-brain my proxy and object layers21:15
clayg🤯21:15
timburkethe tricky part with *really* old libec is making sure you've got a pyeclib that supports it. it took a bit to get it right as i recall21:17
claygwell that sounds miserable21:17
mattoliverauo/ (sorry I'm late)21:17
timburkebut i think 1.5.0, 1.6.0, and 1.6.1+ likely all work well enough with master or latest-tag pyeclib21:17
claygI think i'm more likely to review the encryption/py3 one - can someone offer to help with the libec?21:18
*** bnemec has joined #openstack-meeting21:18
timburkethe only reason i went back so far was to confirm that the tool i attached to the bug did the right thing on frags with no CRC21:18
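The LD_PRELOAD split-brain rig timburke describes could be scripted roughly like this -- a sketch only, with SAIO-style paths and config names standing in for whatever his actual test setup used:

    import os
    import subprocess

    # a liberasurecode built from a different tag than the system library
    LOCAL_LIBEC = os.path.abspath('liberasurecode/.libs/liberasurecode.so.1')

    def start_split_brain(object_confs, proxy_conf):
        procs = []
        obj_env = dict(os.environ, LD_PRELOAD=LOCAL_LIBEC)
        for conf in object_confs:  # e.g. /etc/swift/object-server/{1..4}.conf
            procs.append(subprocess.Popen(
                ['swift-object-server', conf], env=obj_env))
        # the proxy keeps the system liberasurecode -- no LD_PRELOAD here
        procs.append(subprocess.Popen(['swift-proxy-server', proxy_conf]))
        return procs

With the proxy and object layers loading different library versions, any CRC mismatch between fragments written on one side and read on the other shows up immediately.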
timburkeclayg, thanks for picking up the py3 patch. i guess we'll continue kicking the can down the road for the ec work, but i think it's important that we get to a resolution by the end of the cycle21:21
timburkeon to updates!21:22
timburke#topic replication networks, saio, and probe tests21:22
*** openstack changes topic to "replication networks, saio, and probe tests (Meeting topic: swift)"21:22
*** baojg has quit IRC21:22
timburkejust before the meeting i pushed up fresh versions of some patches that let me run separate replication servers in my saio21:23
*** baojg has joined #openstack-meeting21:23
timburkei mentioned some concerns about the upgrade path in -swift21:24
claygit's gunna be FINE21:26
clayg... probably21:26
timburkein particular, since replication servers today can't handle GET/PUT/etc. methods, when p 735751 comes in, there's the potential for a lot of 405s during a rolling upgrade21:26
patchbothttps://review.opendev.org/#/c/735751/ - swift - Allow direct and internal clients to use the repli... - 5 patch sets21:26
claygi mean, anyone running EC has already turned off replication_server = True; so it really only affects account/container servers21:26
timburkein talking it through with clayg, i think it's fairly tractable -- you'd just need to make sure that any configs with replication_server=true have that line cleared -- then you get the server-can-respond-to-all-methods behavior21:27
clayg🤞21:27
claygI mean; we can probably test/verify that once we have vsaio ready to run a replication server!  💪21:27
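A sketch of the pre-upgrade sweep being suggested here: scan the server configs for a pinned replication_server = true so the line can be cleared before rolling out the new code. The /etc/swift glob below is just the conventional layout, not anything required by the patch.

    import configparser
    import glob

    def find_pinned_replication_servers(pattern='/etc/swift/*server*.conf'):
        flagged = []
        for path in sorted(glob.glob(pattern)):
            parser = configparser.ConfigParser(strict=False, interpolation=None)
            parser.read(path)
            for section in parser.sections():
                value = parser.get(section, 'replication_server', fallback='')
                if value.strip().lower() in ('true', '1', 'yes', 'on'):
                    flagged.append((path, section))
        return flagged

    if __name__ == '__main__':
        for path, section in find_pinned_replication_servers():
            print('%s [%s] still sets replication_server = true' % (path, section))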
*** raildo has quit IRC21:28
timburkeso i bring this up for two reasons: first to raise awareness of the potential upgrade issue, and second to find out how other people are deploying their swifts -- will it even be much of a concern?21:29
rledisezso, we don't use replication_server=True, so it's not a major concern to me. I assume every operator reads the changelog before upgrading21:30
timburkei *did* discover that nvidia (formerly swiftstack) sets replication_server=true on a/c... so, that's a thing we'll need to fix...21:30
rledisezbut at least an operator will upgrade one node first to check everything is fine, so having a clear error message if the config line still exists is also mandatory I think (I mean, the process does not start and logs an error)21:31
claygrledisez: well the "error" would be with old code and old configs; once everyone is upgraded it's fine 🤔21:32
timburkethe trouble isn't the upgraded node, though -- it's the *other* ones that only respond to REPLICATE :-(21:32
rledisezha… I missed that21:32
claygtimburke: I'm pretty sure we can find precedent where the answer was "push out a config that makes sense before you upgrade" - we just need to verify that works; but I think it's fine21:32
timburkecool, i'll keep pushing on that then ;-)21:33
rledisezI'm totally OK with that, if there is a proper upgrade procedure it seems standard in the software industry21:33
mattoliveraucan a new upgrade fall back to the client-data-path network on a 405 from a server on the replication network?21:33
claygwe'll fix our configs; it doesn't affect rledisez; other people probably don't even use replication networks, or if they do they already learned to turn it off because ssync ... or they get some errors while upgrading!21:34
claygmattoliverau: 🤮 I think timburke was sort of going that path, but with a config option - automatic fallback might have to produce warnings... cruft 🤔21:35
claygmattoliverau: but it's a good point!  if we're worried enough about the errors, fallback to the client network would work and shouldn't be much more painful than what we do now... we could probably clean it up eventually!21:35
timburkeyeah, it'd be doable -- but i'd worry about the extra complexity and knowing when we could feel safe removing it21:35
claygtimburke: I'm not THAT ^ worried about it - YOU?21:36
claygmattoliverau: do you know anyone that deploys with replication_server = true?  (for a/c server; N.B. replication_server = true is currently *broken* for ec/ssync - which is more or less what we're trying to fix 🤷)21:37
timburkeyeah, i think i'm not so worried -- might be worth confirming that we're not going to flood logs with tracebacks or anything, but the upgrade window should be fairly short and it's only disrupting consistency engine work which is used to delays21:38
mattoliverauI wonder if a fallback to the other network is ever really bad... on link failures and congestion, swift will keep going. I guess we don't want to affect user traffic, and detecting when to fall back could get problematic.21:38
claygtimburke: I think we also have "upgradeimpact: everything is going to replication network" on our side?  like you kind of have to really *want* to get in on this?21:38
mattoliverauclayg: I can't think of anyone, but can look into our cloud product to see what we configure (we still have customers running openstack).21:39
timburkei guess another option would be to get the respond-to-all-methods fix in now, then do the move-all-traffic-to-replication-network fix later21:39
claygmattoliverau: if I opt'd into replication networks it's probably for segregation, and I don't think I'd want errors to push over to client network21:39
claygmattoliverau: thanks!21:39
timburke(though i'm also kind of happy that these piled up together to force us to acknowledge that the upgrade is a little tricky)21:40
claygtimburke: i think they go together and we regularly introduce changes that will cause upgraded servers to not talk to old versions until they get upgraded 🤷21:40
mattoliverauclayg: I guess, but self healing and things just working with warnings in the log could be cool :) But sure fair enough21:40
timburkeonly other question i had was whether we should work toward having saio and probe tests *expecting* an isolated replication network21:41
claygi mean, it's not even the container-replicator... it's like the *sharder* and ... updater?  I think it's fine.21:42
timburkereconciler, reaper, container-sync...21:43
timburkecurrently, https://review.opendev.org/#/c/741723/ is a hack and a half -- there's literally a comment like `#  :vomit:` in there at the moment21:44
patchbotpatch 741723 - swift - wip: Allow probe tests to run with separate replic... - 2 patch sets21:44
zaitcevcontainer-sync cannot use the replication network by definition, I think. It talks to the _other_ cluster.21:45
timburkezaitcev, true, it does -- but it uses internal-client to get the data to send21:46
*** ociuhandu has joined #openstack-meeting21:46
clayg💡21:46
timburkemaking probe tests tolerant of either config seems tricky; if we could assume it to be there, i think it could clean up better21:46
*** rh-jlabarre has joined #openstack-meeting21:47
timburkesomething to think about, anyway. i want to make sure clayg gets some time to talk about...21:47
timburke#topic waterfall ec21:48
*** openstack changes topic to "waterfall ec (Meeting topic: swift)"21:48
timburkeclayg, i saw new patches go up! how's it going?21:48
zaitcevCoincidentally I wrote a little function that does not make the assumption node_id == (my_port - 6010) / 1021:48
claygit's kinda stalled out waiting for feedback; the latest patch was just some minor "rename things"21:48
claygtimburke: you had said something like "the per-replica timeout would make more sense if it started at 0 = first parity" or something like that - and that makes sense to me21:49
claygthe fact that the first (in replicated) or first n-data (in ec) replica timeouts are "ignored" is kinda dumb21:50
claygand I can't really imagine a path in the future where they'd be used... so... i just need to figure out how to express something like that in code21:50
*** ociuhandu has quit IRC21:50
*** rh-jelabarre has quit IRC21:51
timburkei think it could also make it so the config won't strictly *need* to be per-policy, though it seems like you'd likely want to tune it differently for different policies21:51
claygunder the hood the `get_timeout(n_replica)` function will probably still grab out of a list... but it'll either be sparse, or have to `+ offset`21:51
claygoh yeah; i hadn't thought about having global default... hrm21:51
claygso maybe that's where the next bit of work will be21:52
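To make the "sparse list or + offset" idea concrete, an illustrative sketch -- the class name, option shape, and exact semantics are guesses for explanation, not what the patch implements:

    class ConcurrentGetTimeouts(object):
        """Per-request extra delays for concurrent GETs.

        offset is the number of requests that always fire immediately:
        n_data for an EC policy, 1 for a replicated policy.
        """

        def __init__(self, timeouts, offset):
            self.timeouts = list(timeouts) or [0.0]
            self.offset = offset

        def get_timeout(self, n_replica):
            if n_replica < self.offset:
                return 0.0  # requests we always start right away
            # later requests pull from the configured list, repeating the
            # last value once the list runs out
            index = min(n_replica - self.offset, len(self.timeouts) - 1)
            return self.timeouts[index]

    # e.g. a 4+2 EC policy: no delay for the 4 data requests, then wait 0.3s
    # before the first extra request and 0.5s before anything after that
    ec_timeouts = ConcurrentGetTimeouts([0.3, 0.5], offset=4)

Starting the numbering so that index 0 is the first parity/handoff request matches the "0 = first parity" framing from earlier, and it means a single global default list could still make sense across policies with different n_data.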
*** ykatabam has joined #openstack-meeting21:52
claygtimburke: can you confirm you're leaning towards *further* decoupling (remove vestigial-ec from GetOrHeadHandler & remove vestigial-replicated from ECFragGetter) as much as possible - or do you see some path where they go back together?21:53
claygbecause that's like a 4th patch I haven't even started on21:53
claygI did go back and look at some of the EC tests that I "made more robust" and they seem to work (or fail in a reasonable way) with the old code; so I'm pretty confident in maintaining the status quo21:54
timburkei see all of this (and the further cleanup that could be done now that they're separated) as a pretty convincing case that they should stay apart21:54
claygthat is I think everything will work and continue to be just as robust post upgrade if you don't turn on the new timeouts (but I think people will want to turn on the timeouts; just like concurrent_gets)21:55
timburkeunless you've gotten a new picture of how they could come back together, anyway21:55
timburkeall right, last few minutes21:55
timburke#topic open discussion21:56
*** openstack changes topic to "open discussion (Meeting topic: swift)"21:56
timburkeanything else we should bring up today?21:56
claygok, so I'll probably leave p 709326 and its pre-req alone; respin 737096 and work on something on top of that with more refactoring21:56
patchbothttps://review.opendev.org/#/c/709326/ - swift - Enable more concurrency for EC GET - 2 patch sets21:56
*** zaitcev has left #openstack-meeting21:56
claygshrinking doesn't really work!  https://review.opendev.org/#/c/741721/21:56
patchbotpatch 741721 - swift - add swift-manage-shard-ranges shrink command - 2 patch sets21:56
claygat least it's not working well for me when I try to do it manually without auto_shard = true (which is how my clusters are)21:56
mattoliverauoh interesting21:57
claygso I really want to merge https://review.opendev.org/#/c/741721/ and get that deployed ASAP (it's mostly stealing all the fixes from tim and then putting a cli on top of it)21:57
patchbotpatch 741721 - swift - add swift-manage-shard-ranges shrink command - 2 patch sets21:57
claygalso I really want to merge https://review.opendev.org/#/c/742535/ because until I have a way to do manual shrinking I have a bunch of overlapping shard ranges and sometimes a bunch of listings get lost21:58
patchbotpatch 742535 - swift - container-sharding: Stable overlap order - 1 patch set21:58
claygi tried to keep both of those neat-and-tight cause I really want these fixes!21:58
mattoliverauI'll fire up a sharding env and have a play with them.21:59
claygwhile you have a sharded db up check out https://review.opendev.org/#/c/737056/ too!21:59
patchbotpatch 737056 - swift - swift-container-info: Show shard ranges summary - 3 patch sets21:59
claygI think timburke and I agreed on that one now 🤞22:00
mattoliveraukk :)22:00
timburkeyeah, that was definitely one of the things i noticed while messing with https://review.opendev.org/#/c/738149/ -- shrinking currently needs to be "online" and actively push new info to the shard that should be shrunk22:00
patchbotpatch 738149 - swift - Have shrinking and sharded shards save all ranges ... - 5 patch sets22:00
timburkeall right, looks like we're out of time22:01
timburkethank you all for coming, and thank you for working on swift!22:01
timburke#endmeeting22:01
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:01
openstackMeeting ended Wed Jul 22 22:01:19 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:01
openstackMinutes:        http://eavesdrop.openstack.org/meetings/swift/2020/swift.2020-07-22-21.00.html22:01
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/swift/2020/swift.2020-07-22-21.00.txt22:01
openstackLog:            http://eavesdrop.openstack.org/meetings/swift/2020/swift.2020-07-22-21.00.log.html22:01
*** vishalmanchanda has quit IRC22:01
*** Lucas_Gray has joined #openstack-meeting22:32
*** rcernin has joined #openstack-meeting22:52
*** rcernin has quit IRC22:58
*** mlavalle has quit IRC23:00
*** rcernin has joined #openstack-meeting23:02
*** tosky has quit IRC23:03
*** rcernin has quit IRC23:04
*** rcernin has joined #openstack-meeting23:05
*** Lucas_Gray has quit IRC23:09
*** diurnalist has quit IRC23:26
*** markvoelker has joined #openstack-meeting23:27
*** markvoelker has quit IRC23:32
*** ociuhandu has joined #openstack-meeting23:47
*** ociuhandu has quit IRC23:52
*** number80 has quit IRC23:55

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!