Tuesday, 2019-01-08

00:11 *** joeljwright has joined #openstack-swift
00:11 *** ChanServ sets mode: +v joeljwright
00:25 *** klamath has joined #openstack-swift
00:25 <klamath> anyone have experience with swiftly?
00:26 <notmyname> a very long time ago
00:26 <klamath> trying to figure out how to purge an account; documentation around nuking an auth target seems non-existent
00:27 <notmyname> you want to delete a whole account and everything in it?
00:28 <klamath> yes, struggling with cleanup. when a user gets removed from keystone, the reaper won't run against the account because the delete flag is never toggled
00:28 <klamath> i found swift-account-caretaker but it seems the cleanup is done in another utility
00:29 <notmyname> in general, you need to use a superuser token to delete an account. in most clients, that looks like specifying the superuser creds and explicitly specifying the storage url as something different than what's returned from auth (i.e. specify the one to delete)
00:32 <klamath> understood, I think this might be a syntax error in how swiftly is doing its thing; figured I'd stop in and ask here before diving further down the rabbit hole.
00:37 <notmyname> yeah, I can't really offer anything more than a general "here's how it's supposed to work" for swiftly
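notmyname's recipe above can be sketched with curl; everything here is a placeholder (host, token, account suffix), and the final account DELETE is only accepted when the proxy has allow_account_management enabled:

```shell
# Placeholders only -- substitute your own proxy URL and superuser token.
STORAGE_URL="https://swift.example.com/v1/AUTH_target"   # account to purge
TOKEN="superuser-token"

# Empty the account first (every object, then every container), then
# delete the account itself:
#   curl -X DELETE -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/cont/obj"
#   curl -X DELETE -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/cont"
#   curl -X DELETE -H "X-Auth-Token: $TOKEN" "$STORAGE_URL"
echo "target: $STORAGE_URL"
```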
01:03 <timburke> notmyname: you reminded me of https://bugs.launchpad.net/swift/+bug/1740326 again
01:03 <openstack> Launchpad bug 1740326 in OpenStack Object Storage (swift) "tempauth: Account ACLs allow users to delete their own accounts" [Undecided,New]
01:08 *** itlinux has joined #openstack-swift
01:29 *** two_tired has joined #openstack-swift
02:35 *** mikecmpbll has quit IRC
02:41 *** psachin has joined #openstack-swift
02:44 <klamath> turns out this is the command: swiftly --verbose --direct=/v1/AUTH_########### --direct-object-ring=/etc/swift/object.ring.gz delete --until-empty --recursive --yes-i-mean-delete-the-account --yes-i-mean-empty-the-account
02:53 <mattoliverau> lol
03:03 *** two_tired has quit IRC
03:55 <zaitcev> I saw something similar in other commands, too. Even longer.
04:22 *** spsurya has joined #openstack-swift
05:10 *** spsurya has quit IRC
05:13 *** spsurya has joined #openstack-swift
06:24 *** gyee has quit IRC
06:42 <kota_> tdasilva: happy new year to you too!
06:58 *** rcernin has quit IRC
07:04 *** [diablo] has quit IRC
07:42 *** pcaruana has joined #openstack-swift
07:48 *** hseipp has joined #openstack-swift
07:48 *** gkadam has joined #openstack-swift
07:50 *** gkadam is now known as gkadam-afk
08:06 *** jungleboyj has quit IRC
08:06 *** jungleboyj has joined #openstack-swift
08:09 *** ccamacho has joined #openstack-swift
08:16 *** e0ne has joined #openstack-swift
08:24 *** gkadam-afk is now known as gkadam
09:08 *** mikecmpbll has joined #openstack-swift
09:52 *** [diablo] has joined #openstack-swift
11:03 *** ccamacho has quit IRC
11:33 *** ccamacho has joined #openstack-swift
12:20 *** ccamacho has quit IRC
12:26 *** hseipp has quit IRC
12:29 *** gkadam has quit IRC
12:53 *** ccamacho has joined #openstack-swift
12:54 *** ccamacho has quit IRC
12:54 *** ccamacho has joined #openstack-swift
13:08 *** szaher has joined #openstack-swift
13:28 *** zigo has joined #openstack-swift
13:33 *** psachin has quit IRC
13:47 *** szaher has quit IRC
13:52 *** szaher has joined #openstack-swift
15:21 *** itlinux has quit IRC
15:56 *** e0ne has quit IRC
16:08 *** szaher has quit IRC
16:09 *** baojg has joined #openstack-swift
16:09 *** szaher has joined #openstack-swift
16:10 *** baojg has quit IRC
16:11 *** baojg has joined #openstack-swift
16:20 *** pcaruana has quit IRC
16:20 *** itlinux has joined #openstack-swift
16:22 *** baojg has quit IRC
16:22 *** baojg has joined #openstack-swift
16:25 *** e0ne has joined #openstack-swift
16:34 *** ybunker has joined #openstack-swift
16:35 <ybunker> Hi all, quick question: is it possible to migrate from the juno version to mitaka directly? are there any special considerations?
16:36 <ybunker> we have keystone and swift on the juno release and we need to end up on queens
16:38 *** ccamacho has quit IRC
16:38 *** gyee has joined #openstack-swift
16:39 *** hseipp has joined #openstack-swift
16:40 <zaitcev> I don't see why not. Even the storage policies could pick up old data in the cluster.
16:41 <zaitcev> However
16:42 <zaitcev> Once in a while you hit these annoying upgrades with special instructions. Like the one where we went from pickles to JSON. They have an ordering, like update storage nodes first, proxies next.
16:42 <zaitcev> So
16:42 <zaitcev> When you jump versions, you need to look at all the release notes in the middle, and adhere to those.
16:43 <zaitcev> Although, if the cluster has a maintenance window, it's much easier. Just shut down user access, then upgrade and reboot everything, restore user access.
16:45 <zaitcev> There's also special-casing, where things get deprecated for a release or two but still work. I think log formats were like that. Jumping Juno to Mitaka means that you get all of that at once without a grace period.
16:45 <zaitcev> I don't remember specifics, it was a while...
16:55 <DHE> ybunker: direct upgrades of swift are actually pretty well supported, though the onus is on you to not use any new features until the whole cluster is upgraded. I'd say start at the object servers and work your way backwards to the proxy servers
16:56 <DHE> ah, you beat me to it
16:58 <ybunker> thanks a lot for the notes, will take a deep look at the release notes and procedure. also, is there any known bug on juno around data nodes consuming space? we expanded the cluster with two new nodes, and the ring rebalanced the data, but instead of using less space the other nodes keep growing.. :S
16:59 <DHE> well, there will be a phase where the replication process consumes additional space as data is "moved" via the "copy and delete" method.
17:01 <ybunker> the thing is that we don't have the object-replication service running all the time, because when it does it spikes the latency dramatically and the clients complain, so we have a cron where the obj-repl service runs in a specific window
17:04 <notmyname> good morning
17:08 <DHE> sounds like you need QoS of sorts...
17:08 *** e0ne has quit IRC
17:09 <DHE> I'm actually going to do that using the replication network feature and just mark the whole network as low priority. let the network deal with it.
17:10 <ybunker> i changed some params in the object-server.conf file to "limit" the bandwidth but it seems that it's not using those params
17:10 <notmyname> yeah, I'd look at tuning parameters before simply letting the network QoS it
17:14 <ybunker> http://pasted.co/e64802f6
17:14 <ybunker> here is the object-server conf file with some params, any ideas?
17:19 <notmyname> ybunker: how many drives do you have in each server?
17:21 <ybunker> 12 disks: the first 3 for account/container, and 9 drives for data (obj)
17:21 <ybunker> and a total of 8 data nodes
17:24 *** hseipp has quit IRC
17:25 <notmyname> a couple of things stand out to me. first, you've only got a concurrency of 1. I'd suggest setting replicator_workers to 6 and concurrency to 2 (or 4). that should allow you to process a replication cycle *much* faster. also make sure you're using servers-per-port in the object server, and that you have an rsync module per disk (i.e. combined with that last option)
17:25 <notmyname> basically, all of that should help dramatically reduce the length of replication cycles and reduce a single slow drive's impact on performance everywhere else
17:26 <notmyname> notice that in answer to your issue of "replication is causing contention for client requests", my initial recommendation is to tune things so replication works faster (or at least much more efficiently). replication is critical, so getting it done quickly and efficiently is generally the best way to remove issues facing client requests
17:27 <notmyname> (to a point of course. obviously there are situations where there is enough contention in hardware that replication must be restricted in favor of client requests)
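A sketch of what the tuning above might look like in config; the numbers follow notmyname's suggestion, but the device names, paths, and exact module layout are placeholders for this deployment:

```ini
; object-server.conf (fragment)
[DEFAULT]
; one listening port per disk, so a slow drive only stalls its own port
servers_per_port = 1

[object-replicator]
replicator_workers = 6
concurrency = 2

; /etc/rsyncd.conf (fragment) -- one rsync module per disk
; (device names sdb/sdc are placeholders)
[object_sdb]
path = /srv/node/sdb
read only = false
lock file = /var/lock/object_sdb.lock

[object_sdc]
path = /srv/node/sdc
read only = false
lock file = /var/lock/object_sdc.lock
```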
17:28 <ybunker> i have defined an object-replicator section on each device
17:29 <ybunker> in that case do i also need to set replicator_workers to 6? or leave it at the default of 0?
17:29 <notmyname> the sample config file (https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample) is well-documented, and the docs at https://docs.openstack.org/swift/latest/deployment_guide.html and https://docs.openstack.org/swift/latest/admin_guide.html can provide good guidance
17:29 <notmyname> yes, you should set replicator workers
17:31 <notmyname> note these options were introduced in 2.18.0
17:31 <notmyname> (I'd strongly recommend upgrading to the latest: 2.20.0. you can upgrade directly to this version from any previous version without having to go through some midpoint or have any cluster downtime)
17:32 <ybunker> i see, but i have version 2.2.0
17:32 <ybunker> from 2.2.0 directly to 2.20.0?
17:33 <notmyname> yep
17:33 <notmyname> read the https://github.com/openstack/swift/blob/master/CHANGELOG first
17:33 <notmyname> upgrade impacts are listed there, along with other changes
17:34 <notmyname> the things you'll need to specifically note are things that have been deprecated. there are likely a couple of things we've removed since 2014. (although it's rare we remove stuff)
17:34 <notmyname> so you'll want to update configs to use new options, if necessary, before you upgrade. you can run old code with new config options with no problem. swift won't complain if you have "extra" stuff in the config file
17:36 <notmyname> ah! you just reminded me to update https://wiki.openstack.org/wiki/Swift/version_map :-)
17:36 <notmyname> ybunker: have you done rolling upgrades on your swift cluster before?
17:37 *** mikecmpbll has quit IRC
17:40 <ybunker> no :(
18:07 <notmyname> ybunker: no worries. a long time ago I wrote https://www.swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ on my company blog. it's general enough to still be correct
18:07 <notmyname> but note that since it's very general, it also means that it won't account for subtleties of your own deployment
18:08 <ybunker> thanks a lot :) really appreciated, will take a deep look
18:08 <notmyname> e.g. how you've deployed services on hardware, or how you've done load balancing, or any other networking, etc.
18:08 <notmyname> and as always, please stay around in here and ask if you have questions.
18:09 <notmyname> sometimes it gets quiet in here, but there's a lot of ops expertise. don't be shy :-)
18:22 <timburke> good morning
18:22 <ybunker> we have a separate replication network, and a public network for client access
18:23 <ybunker> also, the balancing on the proxy nodes is running on an F5 vserver pool
18:23 *** e0ne has joined #openstack-swift
18:23 <ybunker> got some errors on the replicator: object-replicator: STDOUT: error: [Errno 105] No buffer space available
18:38 *** ybunker has quit IRC
19:24 <openstackgerrit> Tim Burke proposed openstack/swift master: s3api: Look for more indications of aws-chunked uploads  https://review.openstack.org/621055
19:39 *** e0ne has quit IRC
19:43 *** e0ne has joined #openstack-swift
19:54 *** ccamacho has joined #openstack-swift
19:55 <openstackgerrit> Tim Burke proposed openstack/swift master: Verify client input for v4 signatures  https://review.openstack.org/629301
19:58 *** ccamacho has quit IRC
20:41 *** spsurya has quit IRC
21:20 *** mikecmpbll has joined #openstack-swift
21:40 <zaitcev> I'm looking at https://review.openstack.org/547969 and in particular the function native_str_keys()
21:40 <patchbot> patch 547969 - swift - py3: Port more CLI tools (MERGED) - 5 patch sets
21:43 <zaitcev> It is always invoked after json.loads(), but can that ever produce a dictionary with bytes? I don't think JSON has a concept of that. In fact... Even on py2, json.loads('{"a": "0"}') returns {u'a': u'0'}.
22:27 <zaitcev> My main question was whether we ever allow binary metadata. It appears that our metadata is in "native strings" now, on both py2 and py3. On py2 this would also allow binary. Just checking if I determined the consensus correctly.
22:27 *** e0ne has quit IRC
22:30 <mattoliverau> morning
22:38 <timburke> zaitcev: as i recall, when we used simplejson it might return unicode *or* bytes, depending on whether it was all ascii
22:38 <zaitcev> oh good lord
22:39 <zaitcev> The key is whether we permit binary metadata or not
22:39 <timburke> so it's probably fine when we're deserializing from JSON... but if there's anything that's been pickled...
22:39 <zaitcev> X-Meta-Foo: <binary, but not EOL>\r\n
22:39 <timburke> on account/container, we don't. on object, we do (!)
22:39 <zaitcev> How unfortunate
22:40 <timburke> yeah :-(
22:40 <timburke> so moving away from pickle for object metadata will be... ugly...
22:42 <zaitcev> But
22:42 <zaitcev> >>> json.dumps({'aaa':'\xff'})  ends in  UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: invalid start byte
22:42 <timburke> oh hey, i even called it out in the commit message: "even swift-object-info when there's non-utf8 metadata on the data/meta file"
22:44 <timburke> yeah, i don't actually remember where all native_str_keys is getting used -- i'm just thinking about https://github.com/openstack/swift/blob/2.20.0/swift/obj/diskfile.py#L228
22:48 *** itlinux has quit IRC
22:53 *** rcernin has joined #openstack-swift
23:28 <zaitcev> timburke: Sorry if I confused this simple issue, but where does it allow binary metadata values?
23:29 <zaitcev> The serialized metadata is binary, sure... Well, not so much if JSON.
23:30 <zaitcev> I need to make an executive decision here: https://github.com/openstack/swift/blob/2.20.0/test/unit/obj/test_server.py#L7872
23:31 <zaitcev> Let's say keys are native strings in the memory of the Python interpreter. Values, then, what? always bytes? Or Unicode with surrogates?
23:53 <timburke> i could've sworn that we had tests that *inadvertently* verified it... but now i'm having a hard time finding it. i *did* at least find a comment i made on https://review.openstack.org/#/c/452112/ ... and it looks like i consciously *avoided* checking object metadata in https://review.openstack.org/#/c/285754/ (pretty sure out of fear of a compat break)
23:53 <patchbot> patch 452112 - swift - Fix encoding issue in ssync_sender.send_put() (MERGED) - 6 patch sets
23:53 <patchbot> patch 285754 - swift - Require account/container metadata be UTF-8 (MERGED) - 2 patch sets
23:56 <zaitcev> Yes, but as you said, object is different.
23:58 <zaitcev> hmm
23:58 <zaitcev> "the diskfile read_metadata() function is also changed so that all returned unicode metadata keys and values are utf8 encoded"
23:58 <zaitcev> that's not raw binary though
23:59 <zaitcev> Great, thanks a lot.
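The py2/py3 distinction discussed above can be sketched on py3; the native_str_keys() here is a simplified stand-in for swift's helper, not its actual implementation:

```python
import json

def native_str_keys(metadata):
    # Simplified stand-in for swift's helper: rewrite any bytes keys
    # (e.g. left over from py2 pickles or old simplejson output) as
    # native str, leaving values untouched.
    for k in list(metadata):
        if isinstance(k, bytes):
            metadata[k.decode('utf-8', 'surrogateescape')] = metadata.pop(k)

# json.loads() itself cannot produce bytes keys -- JSON strings are text:
meta = json.loads('{"a": "0"}')
print(all(isinstance(k, str) for k in meta))  # True

# ...but dicts deserialized elsewhere (e.g. pickled py2 metadata) can:
legacy = {b'X-Object-Meta-Foo': 'bar'}
native_str_keys(legacy)
print(list(legacy))  # ['X-Object-Meta-Foo']
```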

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!