Monday, 2020-05-11

mattoliverauOK so looking at the ethercalc (booking schedule) there are more 9pm - 11pm UTC (meeting and an hour after meeting) times available than in the 1pm - 5pm (UTC) blocks. Seems most people want to meet then, which is where I was also looking.03:55
mattoliverauThere are still some on Monday, because this was supposed to be for SIGs and horizontal projects. But maybe we could take a few hours then, and then 1 or more 2 hour blocks around our meeting time.03:57
mattoliverauThough I know the latter isn't that great for Europe (and it'll be early morning APAC), at least it's something we're more used to03:57
mattoliverauMaybe I could pick a mix, but trying to put them all in the same room over the week:04:12
mattoliverauROOM: Liberty04:12
mattoliverauMon 14 - 16 UTC (2 hours)04:12
mattoliverauTue 13 - 14 UTC (1 hours)04:12
mattoliverauWed 21 - 23 UTC (2 hours) (normal meeting time anyway + an hour)04:12
mattoliverauThu 21 - 23 UTC (2 hours)04:12
mattoliverauThere is room to extend Monday to be more than 2 hours, say if we want time to have a planning session. But maybe we can do that in the 2 hour allotment anyway. Not sure we really need an hour for each topic.04:13
mattoliverauThere was only 1 hour free on the Tuesday, maybe we could make that an 'ops feedback session' as it's a better time. And if no one turns up we can just continue our discussions?04:14
mattoliverauI might put ^ in the ethercalc (we just put the name swift; we can decide ourselves what to talk about when).04:15
mattoliverauAnd I think they mostly match what people have said (or could begrudgingly do) in the poll. But unfortunately we're coming to this very late so we don't really have too many options.04:16
mattoliverauOK put those in04:29
*** evrardjp has quit IRC04:36
*** evrardjp has joined #openstack-swift04:36
*** ccamacho has joined #openstack-swift07:11
*** ccamacho has quit IRC07:12
openstackgerritTim Burke proposed openstack/swift master: object-updater: Ignore ENOENT when trying to unlink stale pending files  https://review.opendev.org/72673807:23
timburkethanks mattoliverau! you're doing a great job, sorry to hand it off so late in the game07:26
*** ccamacho has joined #openstack-swift07:29
*** rpittau|afk is now known as rpittau07:36
*** mikecmpbll has joined #openstack-swift08:02
*** dtantsur|afk is now known as dtantsur08:04
*** kukacz_ has joined #openstack-swift08:11
*** mattoliverau_ has joined #openstack-swift08:15
*** ChanServ sets mode: +v mattoliverau_08:15
*** kukacz has quit IRC08:15
*** mattoliverau has quit IRC08:15
*** mahatic has quit IRC08:19
*** mikecmpbll has quit IRC09:36
*** mikecmpbll has joined #openstack-swift09:37
*** rpittau is now known as rpittau|bbl10:15
*** rpittau|bbl is now known as rpittau11:58
*** mahatic has joined #openstack-swift12:09
*** ChanServ sets mode: +v mahatic12:09
rledisezI've been deploying with sharding recently. One way to find the containers that need to be sharded is to look at the container.recon file, of course. The other is to search for the biggest partition. I found many partitions that were tens of GB, but with only a few objects (like 1500). I ran a VACUUM on them and it saved the, well, tens of GB of space. Did we ever consider doing some kind of auto-VACUUM? I don't know, the auditor maybe?12:16
rledisezs/many partitions/many databases/12:17
*** tkajinam has quit IRC12:31
DHEspeaking as a regular user, doing so would lock the database pretty hard and the replica in question would be effectively offline for the procedure. so if nothing else you need to make sure you stagger those out properly12:48
rledisezSure, actually, I lock the file, vacuum into a temporary file (it's way faster) and then move it to the original place and unlock. It's pretty fast (a few seconds max). I haven't had to try it on db files with a lot of objects (I prefer to shard those first)13:18
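A minimal sketch of the vacuum-into-a-temp-file approach rledisez describes above, assuming SQLite >= 3.27 for VACUUM INTO; the flock-based sidecar lock file and the exclusive_lock()/vacuum_into_place() helpers are placeholders for illustration, not Swift's own broker locking:

    import contextlib
    import fcntl
    import os
    import sqlite3

    @contextlib.contextmanager
    def exclusive_lock(lock_path):
        # hold an exclusive flock on a sidecar lock file for the duration of
        # the block (a stand-in for however access is really serialized)
        with open(lock_path, 'w') as fd:
            fcntl.flock(fd, fcntl.LOCK_EX)
            try:
                yield
            finally:
                fcntl.flock(fd, fcntl.LOCK_UN)

    def vacuum_into_place(db_path):
        # VACUUM INTO a temp file next to the DB, then rename it over the
        # original while still holding the lock
        tmp_path = db_path + '.vacuum.tmp'
        with exclusive_lock(db_path + '.lock'):
            conn = sqlite3.connect(db_path)
            conn.isolation_level = None  # VACUUM cannot run in a transaction
            try:
                conn.execute('VACUUM INTO ?', (tmp_path,))
            finally:
                conn.close()
            os.rename(tmp_path, db_path)  # atomic within the same filesystem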
*** dtantsur is now known as dtantsur|brb14:17
*** dtantsur|brb is now known as dtantsur14:59
-openstackstatus- NOTICE: Our CI mirrors in OVH BHS1 and GRA1 regions were offline between 12:55 and 14:35 UTC, any failures there due to unreachable mirrors can safely be rechecked15:09
*** mikecmpbll has quit IRC15:46
*** mikecmpbll has joined #openstack-swift15:48
*** gyee has joined #openstack-swift16:04
*** rpittau is now known as rpittau|afk16:09
*** ianychoi_ is now known as ianychoi16:09
*** dtantsur is now known as dtantsur|afk16:19
DHErledisez: I meant lock it in a way that only 1 host at a time could possibly be vacuuming their database. thus only 1 server would appear to be down/hung at a time and largely preserves swift's expectations for quorum, etc16:29
timburkegood morning16:35
*** evrardjp has quit IRC16:36
*** evrardjp has joined #openstack-swift16:36
timburkerledisez, i seem to remember some experiments involving vacuuming... i don't quite remember what findings fell out of them, though16:37
*** zaitcev has joined #openstack-swift16:56
*** ChanServ sets mode: +v zaitcev16:56
DHEI can see the benefits, I'm just worried that an extended lock on a database could be unhealthy for swift if it happens on multiple hosts at once16:59
openstackgerritClay Gerrard proposed openstack/swift master: updater: Shuffle suffixes so we don't keep hitting the same failures  https://review.opendev.org/72657017:23
openstackgerritTim Burke proposed openstack/swift master: updater: Shuffle suffixes so we don't keep hitting the same failures  https://review.opendev.org/72657017:33
*** viks____ has quit IRC18:55
*** ccamacho has quit IRC19:25
*** mikecmpbll has quit IRC20:17
*** mikecmpbll has joined #openstack-swift20:21
mattoliverau_I always wondered if we should attempt a vacuum before or during an rsync_then_merge. It'll mean less data to send if we do it before rsyncing to the node... or after sending and before merge, as the rsynced one lives safely in tmp. I thought I wrote some code at some point but it needed testing. I can search for it, it wasn't too much code. I think it was the former version, to save bandwidth.21:09
mattoliverau_*or just in rsync in the case that it doesn't exist (i.e. after rebalances)21:10
mattoliverau_Oh and this is in the container replicator (if that wasn't apparent), I'm not quite awake yet so not sure any of that made sense :p21:11
mattoliverau_Or rather the db_replicator so both accounts and containers get it21:14
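A hypothetical sketch of the idea above: compact the local DB just before the replicator ships it, so rsync_then_merge has fewer bytes to send. The rsync_to_remote callable and replicate_with_vacuum() are invented names for illustration, not the real swift.common.db_replicator code path:

    import sqlite3

    def vacuum_db(db_file):
        # compact the DB in place so the copy that gets rsynced is smaller
        conn = sqlite3.connect(db_file)
        conn.isolation_level = None  # VACUUM cannot run in a transaction
        try:
            conn.execute('VACUUM')
        finally:
            conn.close()

    def replicate_with_vacuum(db_file, node, rsync_to_remote):
        # compact first to save bandwidth, then hand off to whatever
        # actually performs the rsync_then_merge step
        vacuum_db(db_file)
        return rsync_to_remote(db_file, node)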
timburke600M pendings cleared in 4 days! not too shabby!22:52
*** tkajinam has joined #openstack-swift22:55
mattoliverau_nice23:18
*** mattoliverau_ is now known as mattoliverau23:18
DHEI was thinking that if there were some way to do centralized locking for the cluster, a host could take the lock, spot check that its replicas were up, perform a full vacuum of all dbs on all devices, then release the lock23:31
DHEjust to try to ensure availability, as a vacuum would effectively mean the host is down23:31
DHEsomething like that23:32
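A rough sketch of the flow DHE describes, assuming some external coordination service provides the cluster-wide lock; cluster_lock() and replicas_are_up() are placeholders, and none of this is existing Swift machinery:

    import contextlib
    import glob
    import os
    import sqlite3

    @contextlib.contextmanager
    def cluster_lock(name):
        # placeholder: acquire/release a lock in some shared coordination
        # service (ZooKeeper, etcd, ...) so only one host vacuums at a time
        yield

    def replicas_are_up():
        # placeholder: spot-check that the other primaries are reachable
        return True

    def vacuum_all_dbs(devices_root='/srv/node'):
        with cluster_lock('db-vacuum'):
            if not replicas_are_up():
                return
            for db_type in ('accounts', 'containers'):
                # usual on-disk layout:
                # <root>/<device>/<type>/<part>/<suffix>/<hash>/<hash>.db
                pattern = os.path.join(
                    devices_root, '*', db_type, '*', '*', '*', '*.db')
                for db_path in glob.glob(pattern):
                    conn = sqlite3.connect(db_path)
                    conn.isolation_level = None  # no transaction for VACUUM
                    try:
                        conn.execute('VACUUM')
                    finally:
                        conn.close()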
