Wednesday, 2020-08-26

[00:26] *** gyee has quit IRC
[00:45] *** takamatsu has quit IRC
[00:51] *** takamatsu has joined #openstack-swift
[03:25] *** rcernin has quit IRC
[03:26] *** rcernin_ has joined #openstack-swift
[03:35] *** dsariel has quit IRC
[03:35] *** dsariel has joined #openstack-swift
[03:57] *** psachin has joined #openstack-swift
[04:33] *** evrardjp has quit IRC
[04:33] *** evrardjp has joined #openstack-swift
[05:24] *** m75abrams has joined #openstack-swift
[08:17] *** dosaboy has quit IRC
[08:17] *** dosaboy has joined #openstack-swift
[08:33] *** tdasilva has quit IRC
[08:35] *** tdasilva has joined #openstack-swift
[08:35] *** ChanServ sets mode: +v tdasilva
[08:48] *** adriant has quit IRC
[08:48] *** adriant has joined #openstack-swift
[09:39] *** rcernin_ has quit IRC
[09:53] *** baojg has joined #openstack-swift
[14:49] *** m75abrams has quit IRC
[14:57] *** tkajinam has quit IRC
[15:26] *** gyee has joined #openstack-swift
[15:38] <ormandj> timburke: fwiw, we're going to work on doing the servers per port thing, we'll retest after the implementation and let you know how it goes
[15:39] <ormandj> we'll use '2' as you suggested
[15:42] <ormandj> in 748043 you may want to consider giving an example with a setting of '2' if that's a best practice starting point
[16:00] <timburke> i think it can vary pretty widely based on how many cores and how many disks per chassis
[16:00] <ormandj> any good rule of thumb on calculating that, then, based on N drives and Y cores or something?
[16:01] <timburke> not sure offhand -- i feel like clayg probably has a better idea than me, though
[16:03] <clayg> I like 4 with 10-20 disk chassis, and only turn it down when there's more like 25+ disks.  Didn't you say 50+ disks per chassis!?  I think 2 is a great place to start.
[16:06] <ormandj> yeah, 56 disks in these
[16:06] <ormandj> + 4 for account/container
[16:06] <ormandj> (ssd of course)
[16:07] <ormandj> if that's a good starting point then my suggestion would be to add that into the docs on deployment, it would be great to help people get off on at least a better foot, if not perfect
[16:09] <clayg> ormandj: makes sense to me; have you ever done a patch in gerrit before?
[16:09] <ormandj> nope
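
(For readers following along: the servers_per_port setting being discussed lives in the object-server configuration. Below is a minimal, hypothetical object-server.conf sketch using the starting value of 2 suggested above; the sections are trimmed and a real deployment will carry many more options.)

    [DEFAULT]
    # With servers_per_port enabled, the object-server listens on the
    # per-device ports defined in the object ring (see the ring example
    # later in this log) and forks this many server processes per port.
    # 2 is the starting point suggested in the discussion above.
    servers_per_port = 2

    [pipeline:main]
    pipeline = object-server

    [app:object-server]
    use = egg:swift#object
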
[16:19] *** dosaboy has quit IRC
[16:19] *** dosaboy has joined #openstack-swift
[16:21] *** psachin has quit IRC
[16:52] *** dsariel has quit IRC
[17:03] <openstackgerrit> Merged openstack/swift master: docs: Clean up some formatting around using servers_per_port  https://review.opendev.org/748043
[17:13] *** jv_ has quit IRC
[17:27] <openstackgerrit> Clay Gerrard proposed openstack/swift master: swift-init: Don't expose misleading commands  https://review.opendev.org/748281
[17:55] *** cwright has joined #openstack-swift
[17:58] <timburke> clayg, thinking about ^^^ -- how many workers did you have? if it's more than one, shouldn't we still have a listener available for new connections?
[17:59] <clayg> from p 747332 > If I had *more* than one worker killing just one wouldn't interrupt connections at all!
[17:59] <patchbot> https://review.opendev.org/#/c/747332/ - swift - wsgi: Allow workers to gracefully exit - 3 patch sets
[18:00] <clayg> but even if there's 3-4 workers - closing ALL of the sockets is NOT seamless?
[18:01] <clayg> maybe in practice with enough workers-per-port it's unlikely they'd all have someone holding a socket open and the window before the parent respawns one is small? 🤔
[18:01] <clayg> what's the justification for exposing the new command?
[18:02] <timburke> mm -- i get you now. would it be better to space the socket-closes out a little? or i just need to do that second attempt i mentioned, and see about keeping listen sockets around as long as possible ;-)
[18:02] <timburke> i was mainly just dribbing from the seamless work. you're almost certainly right to drop it as a command
[18:03] <timburke> *cribbing
[18:10] <clayg> timburke: I think it's great except when you kill ALL the children
[18:10] <clayg> i probably don't have a clear picture of the future work you're imagining
[18:10] <clayg> maybe drop the SIGUSR1 handling and just make them do HUP - the "graceful" is very clearly what's happening from the worker perspective
[18:15] <timburke> so i had USR1 do the same thing because *otherwise*, the default is to just terminate -- and it seemed likely that someone who got used to sending USR1 to parents might try it on a child expecting similarly graceful behavior
[18:16] <timburke> maybe it'd be better to just ignore USR1? at least, until we find a better use for it in children? i don't think there's any way the child can achieve similar semantics to what USR1 means in the parent
[18:25] <clayg> oh, yeah i assumed it'd default to no op instead of term - my bad
[18:25] <clayg> I think getting used to usr1 to the parent is reasonable - leave it how you wrote it!
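
(A rough sketch of the parent/child signal split being debated above, assuming a plain pre-forking server rather than Swift's actual wsgi code: the parent treats USR1 as "ask the children to finish up", while a child ignores USR1 and handles HUP by closing its listener and exiting once it is done. The function names, port, and toy 204 response are made up for illustration.)

    import os
    import signal
    import socket

    def run_child(listener):
        # Hypothetical worker: SIGHUP means "stop accepting, drain, then exit".
        shutting_down = []
        signal.signal(signal.SIGHUP, lambda *args: shutting_down.append(True))
        # Ignore USR1 in children; the parent-level "seamless reload" semantics
        # (re-exec while keeping listen sockets alive) don't translate to a child.
        signal.signal(signal.SIGUSR1, signal.SIG_IGN)
        listener.settimeout(1.0)  # wake up periodically to notice the flag
        while not shutting_down:
            try:
                conn, _addr = listener.accept()
            except socket.timeout:
                continue
            with conn:
                conn.sendall(b"HTTP/1.1 204 No Content\r\n\r\n")
        listener.close()  # in-flight requests would be drained here
        os._exit(0)

    def run_parent(num_workers=4):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("127.0.0.1", 8080))
        listener.listen(128)

        children = []
        for _ in range(num_workers):
            pid = os.fork()
            if pid == 0:
                run_child(listener)  # never returns
            children.append(pid)

        def graceful_reload(signum, frame):
            # Ask every child to finish its in-flight work and exit; a real
            # parent would then fork fresh workers (or re-exec itself).
            for pid in children:
                os.kill(pid, signal.SIGHUP)

        signal.signal(signal.SIGUSR1, graceful_reload)
        for pid in children:
            os.waitpid(pid, 0)
        listener.close()

    if __name__ == "__main__":
        run_parent()
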
[18:52] *** baojg has quit IRC
[18:53] *** baojg has joined #openstack-swift
[20:10] <openstackgerrit> Tim Burke proposed openstack/swift master: Suppress CryptographyDeprecationWarnings  https://review.opendev.org/748297
[20:15] <timburke> the more i think about it, the more i want our seamless reload to pass listen socket fds over domain sockets... and have the graceful shutdown just turn off wsgi.is_accepting...
[20:51] <timburke> hmm... i wonder if eventlet's monkey-patching negates the "The socket is assumed to be in blocking mode." note at https://docs.python.org/2/library/socket.html#socket.fromfd ...
[20:51] <timburke> almost meeting time! i expect it'll be a short one
[20:55] <clayg> that early in the process life-cycle it probably doesn't matter if it's blocking or not (I'm assuming it means the *returned* socket may not inherit all the parent socket options; like blocking)
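
(The "pass listen socket fds over domain sockets" idea can be sketched with nothing but the stdlib; this is an illustration of the mechanism, not the patch under discussion, and it assumes Python 3.9+ for socket.send_fds/recv_fds. It also sets the rebuilt socket's blocking mode explicitly rather than relying on the socket.fromfd assumption timburke quotes above.)

    import os
    import socket

    def send_listener(uds_path, listener):
        # Hand an already-bound listening socket to another process by passing
        # its file descriptor over a Unix domain socket (SCM_RIGHTS underneath).
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as uds:
            uds.connect(uds_path)
            socket.send_fds(uds, [b"listener"], [listener.fileno()])  # 3.9+

    def recv_listener(uds_path):
        # Accept one connection on the Unix socket and pull the fd out of it.
        if os.path.exists(uds_path):
            os.unlink(uds_path)
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
            server.bind(uds_path)
            server.listen(1)
            conn, _ = server.accept()
            with conn:
                _msg, fds, _flags, _addr = socket.recv_fds(conn, 1024, maxfds=1)
        # fromfd() dup()s the descriptor, and the docs note it is assumed to be
        # in blocking mode, so close the original and set the mode explicitly
        # instead of trusting whatever flags came across.
        listener = socket.fromfd(fds[0], socket.AF_INET, socket.SOCK_STREAM)
        os.close(fds[0])
        listener.setblocking(True)
        return listener
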
[20:55] <clayg> yay meeting!
[20:55] <kota_> good morning
[20:58] <mattoliverau> Morning
[21:17] <ormandj> timburke: another few quick ones: 1) when doing servers_per_port of 2, are we basically setting each device in the ring file to use a unique port, and in effect servers per port is running a few server processes for each of these, allowing requests to come in even when one is blocked (downside of higher servers per port is more memory)  2) what do bind_ip and bind_port look like in the default section
[21:18] <ormandj> of config?
[21:23] <timburke> yeah, typically you'd have each disk get its own port in the ring. if the nodes were more cpu-constrained, i'd maybe say group a few disks together -- it wouldn't be ideal, but at least it limits the blast-radius a bit compared to having them all on a single port
[21:24] <ormandj> servers per port working the way i think then? and the bind_ip and bind_port?
[21:25] <timburke> iirc, bind_ip and bind_port are ignored when using servers_per_port -- all the info we need is coming from the ring (and checking what ips the node has available)
[21:25] <ormandj> ok, so we can just leave them at the 'primary' ip and port or w/e like we have them now, and should be gtg
[21:25] <ormandj> (we'll test in dev, obviously)
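
(To make the "unique port per device in the ring" part concrete, here is a hypothetical swift-ring-builder sequence for one node whose disks each get their own port; the device names, IP, ports, and weights are made up. With servers_per_port set, the object-servers listen on these ring ports rather than on bind_port, per the answer above.)

    # one port per disk on the node, e.g. 6200, 6201, 6202, ...
    swift-ring-builder object.builder add r1z1-192.0.2.10:6200/d0 100
    swift-ring-builder object.builder add r1z1-192.0.2.10:6201/d1 100
    swift-ring-builder object.builder add r1z1-192.0.2.10:6202/d2 100
    # ...repeat for the remaining devices, then rebalance as usual
    swift-ring-builder object.builder rebalance
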
[21:37] <mattoliverau> I'll loop back to shrinking this week, and see if I can progress the audit to something hacky but mostly working as a POC.
[21:52] *** neonpastor has joined #openstack-swift
[22:40] *** rcernin has joined #openstack-swift
[23:01] *** tkajinam has joined #openstack-swift
[23:21] *** baojg has quit IRC
[23:21] *** baojg has joined #openstack-swift

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!