Tuesday, 2019-07-16

01:08 <openstackgerrit> Tim Burke proposed openstack/swift master: py3: fix up swift-orphans  https://review.opendev.org/670932
01:08 <openstackgerrit> Tim Burke proposed openstack/swift master: py3: fix object-replicator rsync output parsing  https://review.opendev.org/670933
01:14 *** gyee has quit IRC
01:42 *** baojg has joined #openstack-swift
02:23 *** tkajinam has quit IRC
02:24 *** tkajinam has joined #openstack-swift
03:26 *** psachin has joined #openstack-swift
05:08 *** pcaruana has joined #openstack-swift
06:10 *** new_student1411 has joined #openstack-swift
06:20 <new_student1411> timburke I am unable to remove the `sysmeta` header `X-Remove-Account-Sysmeta-Global-Write-Ratelimit: x` using the same command as used for creating `sysmeta`.
06:29 *** new_student1411 has quit IRC
07:00 *** rcernin has quit IRC
07:17 *** rdejoux has joined #openstack-swift
08:25 *** tkajinam has quit IRC
11:30 *** tesseract has joined #openstack-swift
11:44 *** tdasilva has joined #openstack-swift
11:44 *** ChanServ sets mode: +v tdasilva
12:12 *** tdasilva has quit IRC
12:12 *** tdasilva has joined #openstack-swift
12:12 *** ChanServ sets mode: +v tdasilva
12:53 *** tesseract has quit IRC
12:53 <rledisez> good morning
12:55 *** tesseract has joined #openstack-swift
12:56 <rledisez> we have one user hitting the max_get_time=86400 of slo/dlo. I'm wondering what was the purpose of this limit? is it a "security" feature?
13:10 *** sasregulus has quit IRC
13:24 *** zaitcev has joined #openstack-swift
13:24 *** ChanServ sets mode: +v zaitcev
13:52 *** psachin has quit IRC
14:04 *** zaitcev_ has joined #openstack-swift
14:04 *** ChanServ sets mode: +v zaitcev_
14:08 *** zaitcev has quit IRC
14:15 *** zaitcev_ has quit IRC
14:20 <tdasilva> 24hrs to return an obj?
14:22 <tdasilva> rledisez: I think the idea was just to limit for a reasonable timeout, no? from reading the comments (commit message) didn't sound like a security feature, more like "just don't let this run forever"
14:25 <tdasilva> timburke: noticed swift-multinode-rolling-upgrade tests started passing again
14:28 *** zaitcev_ has joined #openstack-swift
14:28 *** ChanServ sets mode: +v zaitcev_
14:34 <rledisez> tdasilva: it's a 12TB DLO ;)
14:34 <tdasilva> rledisez: heh, that's what i figured
14:37 <tdasilva> IIRC s3api has a nice (hidden) feature to specify the parts when downloading a MPU, which would allow a client to start parallel GETs and then stitch the object back together.
15:09 <clayg> rledisez: I'm pretty sure the option was there to spur exactly this conversation
15:10 <clayg> rledisez: you can either increase the timeout - or talk to the client about maybe using range requests?
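For reference, the timeout clayg mentions is set per middleware in the proxy config; a minimal sketch of raising it, with illustrative (not recommended) values:

```ini
# proxy-server.conf -- illustrative values only
[filter:slo]
use = egg:swift#slo
# default is 86400 (24h); raise for very large objects like the 12TB DLO above
max_get_time = 259200

[filter:dlo]
use = egg:swift#dlo
max_get_time = 259200
```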
15:13 *** zaitcev_ has quit IRC
15:25 *** zaitcev_ has joined #openstack-swift
15:25 *** ChanServ sets mode: +v zaitcev_
15:32 *** e0ne has joined #openstack-swift
15:39 *** gyee has joined #openstack-swift
16:12 <clayg> so i'm cleaning up some tests that want to support container servers that don't handle reverse=on correctly - which timburke originally fixed back in ebf0b220127b14bec7c05f1bc0286728f27f39d1
16:13 <clayg> i don't think i'm really sure how I broke it 🤔
16:27 <timburke> tdasilva, must've been https://github.com/openstack/requirements/commit/7ae7c4238a9d0bd8386825fac8284dc69e1b2097
16:30 <tdasilva> timburke: ack
16:33 <timburke> tdasilva, rledisez: there's also the *published* feature of just doing Range requests -- ask for the first 10 or 100MB of the object; something large enough that small-enough objects are served immediately. if it's bigger than that, the Content-Range header will tell you the full size and you can spin up parallel range requests for the remainder
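timburke's scheme is mostly arithmetic once the first 206 response comes back; here's a minimal sketch with the header parsing and range math factored out (the function names are made up for illustration, not anything in swift or swiftclient):

```python
import re

def parse_content_range(value):
    """Parse a Content-Range header such as 'bytes 0-104857599/13194139533312'
    and return (first_byte, last_byte, total_size) as ints."""
    m = re.match(r"bytes (\d+)-(\d+)/(\d+)$", value)
    if not m:
        raise ValueError("unexpected Content-Range: %r" % value)
    return tuple(int(g) for g in m.groups())

def remaining_ranges(total_size, already_fetched, chunk_size):
    """Yield Range header values covering [already_fetched, total_size),
    chunk_size bytes at a time, ready to hand to parallel GETs."""
    for start in range(already_fetched, total_size, chunk_size):
        end = min(start + chunk_size, total_size) - 1
        yield "bytes=%d-%d" % (start, end)

# First request: GET with "Range: bytes=0-104857599" (the first 100MB).
# If the object was bigger, the 206's Content-Range reveals the full size
# (here, a 12TiB object) and we can fan out over the rest in 5GiB chunks:
first, last, total = parse_content_range("bytes 0-104857599/13194139533312")
headers = list(remaining_ranges(total, last + 1, 5 * 1024 ** 3))
```

Each header value can then drive its own concurrent range request, with the pieces written back at the matching offsets.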
16:33 *** tesseract has quit IRC
16:35 <tdasilva> we would be cool to build into python-swiftclient the ability to download objects using parallel range requests
16:35 <tdasilva> s/we/it
16:35 <timburke> the ?partNumber=N api would be nice to have, though; would let the client be smart about not hammering the same disks over and over as they try to grab the same 5GB segment 10MB at a time
16:36 <timburke> see https://bugs.launchpad.net/swift/+bug/1735284
16:36 <openstack> Launchpad bug 1735284 in OpenStack Object Storage (swift) "Support partNumber query parameter on object GET" [Medium,Confirmed]
16:37 <timburke> tdasilva, i know at least one customer has done something similar, where they pull down the slo manifest so they can request the segments in parallel directly
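That manifest-driven trick can be sketched with little more than a thread pool. The manifest shape below matches the JSON that `GET <manifest>?multipart-manifest=get` returns, but `fetch_segment` is a stand-in for a real authenticated GET, and the paths are invented for the example:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def download_segments(manifest_json, fetch_segment, max_workers=8):
    """Fetch every segment named in an SLO manifest in parallel and
    stitch the bodies back together in manifest order.

    manifest_json: JSON body from GET <manifest>?multipart-manifest=get
    fetch_segment: callable taking a segment path and returning its bytes
                   (in real life, an HTTP GET against the cluster)
    """
    segments = json.loads(manifest_json)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() yields results in input order, so concatenation is safe
        # even though the fetches themselves run concurrently
        bodies = pool.map(fetch_segment, (seg["name"] for seg in segments))
        return b"".join(bodies)

# Toy run with an in-memory dict standing in for the cluster:
manifest = json.dumps([
    {"name": "/segs/obj/000001", "bytes": 3},
    {"name": "/segs/obj/000002", "bytes": 3},
])
store = {"/segs/obj/000001": b"foo", "/segs/obj/000002": b"bar"}
data = download_segments(manifest, store.__getitem__)
```

A production version would also want retries per segment and to verify each segment's "hash" entry against the fetched body.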
16:40 <tdasilva> yeah, makes sense for large objects, can't imagine the frustration of starting a 12TB download only to see it fail at 11.5TB...
16:44 <rledisez> tdasilva: hopefully, our user only has a 100Mb/s, it was failing at 1TB ;)
16:44 <rledisez> for this case, i'll increase the limit
16:44 <rledisez> but yeah, parallel download would be awesome
16:44 <openstackgerrit> Thiago da Silva proposed openstack/swift master: Add rolling upgrade tests  https://review.opendev.org/626663
16:50 <timburke> ugh. i'd actually be more ok with the timeout popping at 11TB -- if the client's smart, it'll probably still have some retries left for the last bit. at 1 of 12... it's unlikely we'd have enough retries
17:00 <clayg> there's some tests verifying behavior of DELETE in stack mode when some of the objects in the versioned container are expired
17:01 <clayg> i'm not sure if the use-case was more "don't get a 503 trying to DELETE when the COPY operation fails on the GET" or if the "skip through the list of old versions by doing HEADs before we write a pointer to one that's potentially expired" is worth the trouble 🤔
17:03 <clayg> OTOH, it does have me thinking about building hardlinks from container listings in general - it's entirely possible for middleware to make a hardlink to an object that will currently (or will soon) 404 - while we go out of our way to prevent that when clients do it
17:06 *** aluria has quit IRC
17:20 *** e0ne has quit IRC
18:01 *** zaitcev_ has quit IRC
18:04 <openstackgerrit> Tim Burke proposed openstack/swift master: py3: fix object-replicator rsync output parsing  https://review.opendev.org/670933
18:06 <timburke> hmm... the experimental probe test failure on https://review.opendev.org/#/c/668990/ looks spurious, but the rolling-upgrade tests are making me nervous...
18:06 <patchbot> patch 668990 - swift - Authors/changelog for 2.22.0 - 4 patch sets
18:08 *** rdejoux has quit IRC
18:13 *** zaitcev_ has joined #openstack-swift
18:13 *** ChanServ sets mode: +v zaitcev_
18:53 <rledisez> i'm trying to write some code about the task queue and I'm coming to a question: do we consider the object server as dumb? I mean, should the proxy pass through two header parameters, type=expirer and ring=object, or directly pass account=.task-expirer-object? how dumb do we want the object-server to be?
18:58 <timburke> i'm generally a fan of making it pretty damn dumb -- the fact that it needs to know about swift_bytes for SLOs (for example) was probably a mistake
19:06 *** tdasilva has quit IRC
19:10 <rledisez> proxy is the brain, got it :)
19:10 *** zaitcev_ has quit IRC
19:23 *** zaitcev_ has joined #openstack-swift
19:23 *** ChanServ sets mode: +v zaitcev_
19:44 <timburke> client
19:45 <timburke> client's the brain ;-) proxy only becomes the brain when the client can't or won't. and the object server only takes it on when it really *has to*
20:26 *** zaitcev_ has quit IRC
20:38 *** zaitcev_ has joined #openstack-swift
20:38 *** ChanServ sets mode: +v zaitcev_
20:45 *** dasp has quit IRC
20:47 *** dasp has joined #openstack-swift
21:01 *** pcaruana has quit IRC
22:36 *** rcernin has joined #openstack-swift
22:53 *** tkajinam has joined #openstack-swift
23:25 *** zaitcev_ has quit IRC
23:36 *** zaitcev_ has joined #openstack-swift
23:36 *** ChanServ sets mode: +v zaitcev_
23:38 <mattoliverau> morning
23:53 *** zaitcev_ has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!