Wednesday, 2019-01-30

*** mrjk has quit IRC  01:01
*** tkajinam_ has joined #openstack-swift  01:47
*** tkajinam has quit IRC  01:50
<openstackgerrit> Tim Burke proposed openstack/swift master: WIP: symlink-backed versioned_writes  https://review.openstack.org/633857  01:55
<timburke> clayg: ^^^ very much a work in progress, but you seemed curious about where i was going with p 633094  01:56
<patchbot> https://review.openstack.org/#/c/633094/ - swift - Allow "harder" symlinks - 2 patch sets  01:56
<timburke> i made it an option for now, mainly just to make it easier to test things when starting from the old behavior, but i'm pretty sure the new behavior will be strictly better (once we've actually beaten it into shape where we'd feel comfortable landing it)  01:59
<openstackgerrit> Merged openstack/swift master: Python3: Fix test/unit/common/test_container_sync_realms.py  https://review.openstack.org/633644  02:17
*** mikecmpbll has quit IRC  04:23
<mattoliverau> zaitcev: sorry, I am back, just attempting to catch back up with email and everything that I've missed over the last few weeks. I'll take a look at the patch tomorrow (running out of time today).  05:43
*** zaitcev_ has joined #openstack-swift  05:45
*** ChanServ sets mode: +v zaitcev_  05:45
<zaitcev_> mattoliverau: I know that py3 is not your thing, but 1. the sharder is your code, and 2. that patch unfortunately touches py2.  05:46
*** zaitcev has quit IRC  05:48
*** psachin has joined #openstack-swift  05:55
*** gkadam has joined #openstack-swift  07:51
*** ccamacho has joined #openstack-swift  07:52
*** e0ne has joined #openstack-swift  07:56
*** pcaruana has joined #openstack-swift  08:11
*** tkajinam_ has quit IRC  08:15
*** e0ne has quit IRC  08:20
*** mikecmpbll has joined #openstack-swift  08:37
*** gkadam has quit IRC  08:42
*** mikecmpbll has quit IRC  09:06
*** mikecmpbll has joined #openstack-swift  09:16
*** dr_gogeta86 has quit IRC  10:15
*** e0ne has joined #openstack-swift  10:18
*** dr_gogeta86 has joined #openstack-swift  10:20
<openstackgerrit> Merged openstack/swift master: Quiet down a unittest  https://review.openstack.org/633793  10:30
*** hseipp has joined #openstack-swift  10:42
*** mvkr has joined #openstack-swift  11:29
*** mark-mcardle has joined #openstack-swift  11:30
*** mahatic has joined #openstack-swift  11:53
*** ChanServ sets mode: +v mahatic  11:53
*** gkadam has joined #openstack-swift  12:08
*** gkadam is now known as gkadam-bmgr  12:19
*** e0ne has quit IRC  12:48
*** psachin has quit IRC  13:44
*** e0ne has joined #openstack-swift  13:46
*** pcaruana has quit IRC  13:50
*** psachin has joined #openstack-swift  13:53
*** pcaruana has joined #openstack-swift  13:57
*** psachin has quit IRC  13:58
*** psachin has joined #openstack-swift  13:59
<clayg> on p 633671 - I think I don't really understand the key_path :'(  15:18
<patchbot> https://review.openstack.org/#/c/633671/ - swift - Fix decryption for broken objects - 2 patch sets  15:18
*** zaitcev_ is now known as zaitcev  15:22
*** NM has joined #openstack-swift  15:53
*** gkadam-bmgr has quit IRC  15:57
*** pcaruana has quit IRC  16:01
<clayg> Anyone know anyone that uses/hacks on joss?  https://github.com/javaswift/joss/issues/120  16:14
*** ccamacho has quit IRC  16:16
*** pcaruana has joined #openstack-swift  16:17
<clayg> Thank goodness for probe tests.  I thought I was going to get to ignore the proxy's handling of fragment handoffs for now, but I guess not... https://github.com/openstack/swift/blob/master/swift/proxy/controllers/obj.py#L1616  16:29
<zaitcev> I've been lucky enough to ignore encryption up to now.  16:31
*** e0ne has quit IRC  16:31
*** e0ne has joined #openstack-swift  16:31
<clayg> would an "is_handoff" flag be too on the nose?  "is_primary" maybe?  OTOH I could just *fix* the proxy so that it tries to PUT handoffs where they go... but I'd still need to think harder about the case where you run 4+2 with only 8 disks, like our default saio setup  16:36
*** gyee has joined #openstack-swift  16:42
*** pcaruana has quit IRC  16:45
*** psachin has quit IRC  16:48
*** NM has quit IRC  16:53
*** ybunker has joined #openstack-swift  17:28
<ybunker> hi all, quick question... i have to reinstall a data node (because of a kernel panic), and i would like to know if there's a way to 'keep' the data drives with the objects: reinstall the OS, configure the swift packages, rsync, and all the configuration files, then mount the drives with the objects at the same paths as before, and finally start the swift daemons.. could that work? or do i need to remove the node from the ring and then reassign it?  17:30
*** hseipp has quit IRC  17:31
*** e0ne has quit IRC  17:31
<DHE> if the server has the same IP address and the data is in the same directory, sure. however, if the system has been down for more than a week (can't remember the proper name of the setting) then you are probably better off just reformatting the drives.  17:33
<ybunker> the node keeps the same ip addresses, and the same directories  17:34
<ybunker> it was just two days  17:34
<DHE> I think it's reclaim_age in the object server... the idea being that if a server is down longer than this amount of time, deleted objects in the cluster could become undeleted by reintroducing this node  17:35
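The setting DHE is thinking of is reclaim_age; a minimal sketch of the relevant bit of object-server.conf, assuming the stock default (604800 seconds, i.e. one week, which matches the "down more than a week" rule of thumb above):

    [DEFAULT]
    # seconds to keep tombstones before reclaiming them; a node that was
    # down longer than this can resurrect deleted objects when it returns
    reclaim_age = 604800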
<ybunker> the thing is that when i mounted the drives (with the objects), the permissions on the directories were:  dnsmasq lpadmin  17:35
<DHE> oh... oh dear... dynamically assigned system account uids  17:36
*** mikecmpbll has quit IRC  17:39
<ybunker> chown -R swift:swift and wait for a lifetime? :P  17:39
<ybunker> or better to remove the node and add it again?  17:40
<DHE> I considered renumbering the uids, but that affects dnsmasq and lp I guess...  17:44
<DHE> chown is probably the better way, but yeah, it's going to take a while and all that...  17:45
<DHE> you can at least run a copy of chown per-disk and get some throughput going  17:45
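A sketch of the per-disk approach DHE suggests, assuming the drives are mounted under /srv/node (the device names are whatever your ring uses):

    # one chown per disk, all running in parallel, instead of a single
    # serial chown -R over the whole devices root
    for d in /srv/node/*; do
        chown -R swift:swift "$d" &
    done
    wait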
<ybunker> also, i chowned the acct/cont disks (fast), but when I try to start the daemons (acct and cont), i'm getting 507s.. checked the space and it's ok... nothing else in the logs  17:46
<clayg> ybunker: chown -R is a reasonable option (yes, slow)  17:51
<clayg> ybunker: after the chown, maybe restart the proxies to clear error limiting?  17:52
<clayg> i can't really think why the a/c servers would respond 507 if all the devices are mounted at the right paths...  17:52
<clayg> `devices = /srv/node` ?  17:53
<ybunker> http://pasted.co/fe66cc47  17:55
<ybunker> http://pasted.co/0f54b67d  17:55
<clayg> ok, /srv/node is the default - so you have all your disks mounted at /srv/node/<device> where <device> is the name in the ring?  17:56
<ybunker> http://pasted.co/f6b2f989  17:57
<ybunker> yes  17:57
<clayg> yeah, i can't really think of why the a/c nodes would respond 507 then...  17:57
<clayg> basically it's just self.root (which defaults to /srv/node) joined with <device> (from the path of the request) and utils.ismount - if that returns false, 507  17:59
<clayg> so... kind of the ONLY way you get a 507 is if the device in the URL of the request isn't a mount at /srv/node/<device>  17:59
<clayg> you should have the device name in the log line of a 507 request  18:00
<clayg> can you find a 507 resp log line on an account/container node?  18:00
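The check clayg is describing can be reproduced from the shell; a rough equivalent, assuming a hypothetical device name d1 (swift's utils.ismount is a bit smarter than mountpoint, but the idea is the same):

    # the account/container/object server 507s when the device named in
    # the request URL is not a mount point under the configured devices root
    mountpoint -q /srv/node/d1 && echo "mounted: OK" || echo "not a mount: would 507"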
<ybunker> http://pasted.co/8fb9fdcd  18:06
*** mikecmpbll has joined #openstack-swift  18:07
<ybunker> clayg: here are some of the 507 errors:    http://pasted.co/8fb9fdcd  18:08
<ybunker> clayg: ...got it.. it was a chmod 755 permissions thing :), now.. the problem is that it's not storing on the same node  18:13
*** pcaruana has joined #openstack-swift  18:17
<clayg> looks like the device names were just integers?  18:20
<clayg> well, the proxy could be writing to handoffs if it error-limited the node you were working on  18:21
<clayg> I think 507 is cached for like 5m!  18:21
<clayg> oh, maybe it's just 60s  18:23
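The knobs clayg is trying to remember are the proxy's error-limiting settings; a sketch of the relevant proxy-server.conf section, assuming the stock defaults (60 seconds is indeed the default interval):

    [app:proxy-server]
    # after error_suppression_limit errors, the proxy skips a node until
    # error_suppression_interval seconds pass without further errors
    error_suppression_interval = 60
    error_suppression_limit = 10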
<clayg> swift-container-info /real/path/to/hash.db will check the ring and say what the expected locations are  18:23
<clayg> Is the node you're expecting the dbs to be stored on in that list?  18:24
<ybunker> clayg:  http://pasted.co/92864c53  18:36
<clayg> timburke: I think leaving the use_symlinks option in, defaulting to true, and maybe deprecating it later is ok if it doesn't come across too crufty - but maybe I'm just nervous because it's something new  18:48
<clayg> oic, "2" is on .14 & .12 - ok...  18:49
<clayg> ybunker: that looks fine i guess - but I can't correlate the logs to those ips - I don't know which node you're on - do you still see the "not storing on the same node" problem?  18:51
<ybunker> the node with the problem is 192.21.100.12  18:52
*** e0ne has joined #openstack-swift  19:06
*** rchurch has joined #openstack-swift  19:22
<timburke> clayg: on p 633094 -- i'd love more input on (1) whether there ought to *also* be an X-Symlink-Target-Size and whether it should require that you specify an ETag, and (2) whether 412 is the right error, or if it ought to be 409 (or maybe there's something better?)  19:23
<patchbot> https://review.openstack.org/#/c/633094/ - swift - Allow "harder" symlinks - 2 patch sets  19:23
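For context, the shape of the request being debated: X-Symlink-Target is the existing symlink header, X-Symlink-Target-Etag is what the patch adds, and the -Size variant is the open question (the container/object names and the etag value here are placeholders):

    # create a symlink at cont/link pointing at other/target, asserting
    # the target's ETag; 412 vs 409 on a mismatch is question (2) above
    curl -X PUT "$STORAGE_URL/cont/link" \
         -H "X-Auth-Token: $TOKEN" \
         -H "Content-Length: 0" \
         -H "X-Symlink-Target: other/target" \
         -H "X-Symlink-Target-Etag: d41d8cd98f00b204e9800998ecf8427e"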
<clayg> Could a PUT with an Etag do a HEAD?  19:25
<clayg> At least then we'd know it worked at one time...  19:26
<clayg> SLO does that and it mostly works?  19:26
<timburke> ick. currently there are no requirements on the target existing to create a symlink...  19:27
<timburke> and the *real* feature i want out of this tool will have *just written* the object -- it knows *exactly* what the etag and size should be!  19:27
<clayg> Right  19:27
<timburke> and it gets even messier when you start wanting to have symlinks pointing to symlinks pointing to data...  19:28
<clayg> The internal api could write directly into sysmeta - the HEAD request would only be required for the client feature where you wanna do fancy container listings...  19:28
<clayg> 412 is probably wrong; you can't change the request and make it work. 409 makes sense to me.  19:30
*** ybunker has quit IRC  19:33
<timburke> ugh, properly capturing the listing info for a VW symlink to a client-created, etag-validating symlink is gonna be a pain... maybe i don't actually get to re-use as much of the symlink machinery as i thought i could...  19:35
*** e0ne has quit IRC  20:05
<zaitcev> What is a Volkswagen symlink?  20:14
<timburke> versioned_writes :-)  20:15
<timburke> i want to make VW stop copying data all over the place  20:15
<timburke> it's non-atomic and race-prone  20:15
<timburke> plus it just sucks for your IO budget  20:16
*** e0ne has joined #openstack-swift  20:17
*** e0ne has quit IRC  20:19
<mattoliverau> Seeing as notmyname is away, and it seems some people are en route to FOSDEM, I'll assume we're not having a meeting today  20:38
*** e0ne has joined #openstack-swift  20:48
<clayg> oh.. uh  21:03
<clayg> mattoliverau: 👍 two weeks off!  SO awesome!  21:04
<clayg> it's like notmyname takes a vacation and we all get a break  21:04
<mattoliverau> \o/  21:04
*** e0ne has quit IRC  21:18
*** early has quit IRC  21:25
*** early has joined #openstack-swift  21:26
*** e0ne has joined #openstack-swift  21:47
*** openstackgerrit has quit IRC  21:50
*** e0ne has quit IRC  21:58
*** tkajinam has joined #openstack-swift  23:01

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!