Monday, 2020-08-31

01:50 *** djhankb has quit IRC
01:55 *** djhankb has joined #openstack-swift
02:20 *** baojg has quit IRC
02:21 *** baojg has joined #openstack-swift
02:58 *** rcernin has quit IRC
03:19 *** rcernin has joined #openstack-swift
04:00 *** neonpastor has quit IRC
04:12 *** rcernin has quit IRC
04:14 *** rcernin has joined #openstack-swift
04:20 *** m75abrams has joined #openstack-swift
04:33 *** evrardjp has quit IRC
04:33 *** evrardjp has joined #openstack-swift
04:37 *** baojg has quit IRC
04:38 *** baojg has joined #openstack-swift
05:48 *** rcernin has quit IRC
05:53 *** baojg has quit IRC
05:53 *** baojg has joined #openstack-swift
06:03 *** rcernin has joined #openstack-swift
07:24 *** djhankb has quit IRC
07:25 *** djhankb has joined #openstack-swift
07:46 *** zigo has joined #openstack-swift
07:51 *** rcernin has quit IRC
07:51 <zigo> timburke: Hi there! When reading your patch here:
07:51 <zigo> https://opendev.org/openstack/swift/commit/7d429318ddb854a23cdecfe35721b1ecbe8bcccc
07:51 <zigo> I am wondering what the implication of the last sentence is:
07:51 <zigo> "When switching from Python 2 to Python 3, first upgrade Swift while on Python 2, then upgrade to Python 3."
07:51 <zigo> How can one do that? That's not how distro packages work... How should we do it?
07:53 *** baojg has quit IRC
07:54 *** baojg has joined #openstack-swift
08:00 *** aluria has quit IRC
08:00 *** DHE has quit IRC
08:02 *** irclogbot_3 has quit IRC
08:05 *** aluria has joined #openstack-swift
08:05 *** DHE has joined #openstack-swift
08:08 *** irclogbot_1 has joined #openstack-swift
09:17 -openstackstatus- NOTICE: due to a new release of setuptools (50.0.0), a lot of jobs are currently broken, please do not recheck blindly. see http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016905.html
09:39 *** baojg has quit IRC
13:10 *** baojg has joined #openstack-swift
13:49 *** baojg has quit IRC
13:49 *** baojg has joined #openstack-swift
14:37 *** openstackgerrit has quit IRC
14:37 *** TViernion has quit IRC
14:40 *** josephillips has joined #openstack-swift
14:43 *** TViernion has joined #openstack-swift
15:05 *** josephillips has quit IRC
15:20 *** m75abrams has quit IRC
16:03 *** josephillips has joined #openstack-swift
16:04 *** josephillips has quit IRC
16:10 *** josephillips has joined #openstack-swift
16:11 *** baojg has quit IRC
16:12 *** baojg has joined #openstack-swift
17:10 *** djhankb has quit IRC
17:11 *** djhankb has joined #openstack-swift
17:19 <ormandj> on a rebalance, when does the corresponding purge happen from the source as stuff is shuffled around? let's say a reasonably full cluster, and you were adding a new node with a bunch of drives, not doing the stepping-stone approach since you just wanted to bring capacity online ASAP - does the rebalance have to fully complete before the corresponding 'emptying' happens on the nodes the data migrated from?
17:20 <ormandj> we're testing this in a dev cluster with a bunch of data, and it does not appear that the drives are draining on the original servers, while the new server is most definitely filling up
17:20 <ormandj> (just added the new server with full eventual weight to see what would happen)
17:28 <timburke> zigo, that's part of why i'm working on backporting the fix -- the plan is to have new tags on ussuri and train (at least; should i go back further?) and tell operators to upgrade to the latest tag for their stable release before attempting a rolling upgrade that would change the major python version used for swift
17:30 <timburke> if you can tolerate a stop-the-world upgrade, that's fine, too -- but there's currently no way to have new swift on py3 write down encryption metadata (for paths with any non-ascii characters) that will be readable on old swift running on py2
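The upgrade order timburke describes might look roughly like this on a pip-managed node (a sketch only; the version pin is a placeholder rather than the actual backport tag, and distro-package installs would follow the same order with their own tooling):

    # while every node still runs Python 2, upgrade Swift itself to the
    # patched stable release first (the pin below is only a placeholder)
    python2 -m pip install --upgrade 'swift==2.25.*'
    swift-init all restart

    # only after the whole cluster runs the patched code, switch nodes to
    # Python 3 one at a time (rolling), restarting services on each node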
17:35 <timburke> ormandj, for each moved partition, the former primary needs to get acks that the data's durable on all three current primaries before it'll be willing to delete the data. meanwhile, the former-and-current primaries will *also* want to push data to the new node
17:35 <timburke> (with default configs)
17:36 <timburke> you'll want to look at a couple of config options to make your rebalances go faster (and free space off of disks more quickly): https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample#L287-L304
17:37 <timburke> handoffs_first tells those former-and-current primaries to cool their heels so the former primary has more of a chance to replicate
17:39 <timburke> handoff_delete tells the former primary that it's OK to delete even when the data is less than fully replicated
17:41 <timburke> i'd start with handoffs_first and see how well that gets things moving. if you've been running a nearly-full cluster (>90% avg fill?) for a while, you might need handoff_delete as well
17:42 <timburke> of the two, handoff_delete is the more dangerous option, since you're willingly sacrificing durability to free space faster
17:43 *** gyee has joined #openstack-swift
17:47 <timburke> handoffs_first is mainly about how we schedule the work to be done, not how much work to do. there's a similar option for EC -- handoffs_only -- that *does* affect how much work we do in a given cycle; i'd put that somewhere in between. it's good to use on occasion, but after the expansion settles you'll want at least *some* time with it turned off to ensure you're still fully durable
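As a sketch, the knobs timburke is pointing at live in the object server config; the values below are illustrative (the sample config's defaults are handoffs_first = False, handoff_delete = auto, and handoffs_only = False):

    [object-replicator]
    # let former primaries push their partitions off before the current
    # primaries start filling the new node
    handoffs_first = True
    # uncomment to let a former primary delete once this many replicas have
    # acked, instead of waiting for all current primaries (trades durability
    # for faster space reclamation)
    # handoff_delete = 2

    [object-reconstructor]
    # EC counterpart; limits a cycle to handoff work only -- turn it back
    # off once the expansion settles so full durability gets re-verified
    # handoffs_only = True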
18:21 *** renich has joined #openstack-swift
18:23 <renich> Good $tod, swift-minded people! o/
18:23 <renich> I am trying to figure out SSL certs with swift, using letsencrypt certs. At first, swift-proxy couldn't read them due to permissions, so I used a post-hook to put them in /etc/swift with owner/group swift and permissions 400. It reads them now, I presume. The thing is, I am getting an empty response when I try: openstack container list
18:24 <renich> Unable to establish connection to http://os.sof.cloudsigma.com:8080/v1/AUTH_8ac555e42913493e95808b305e628474: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
18:24 <renich> and I can't find anything in the logs.
18:25 <renich> BTW, the config file I edited was /etc/swift/proxy-server.conf
18:25 <renich> Just the cert_file and key_file settings. I am using the exact same endpoints and everything else.
18:26 <renich> Just to add context: other openstack commands work fine, so I am 70% sure this is a swift issue.
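For reference, the change renich describes presumably amounts to something like this in /etc/swift/proxy-server.conf (the paths are illustrative, and the sample config treats eventlet-terminated SSL as a testing convenience, with a separate TLS terminator usually recommended in production):

    [DEFAULT]
    bind_port = 8080
    # certs copied into /etc/swift by the letsencrypt post-hook,
    # owner/group swift, mode 0400
    cert_file = /etc/swift/fullchain.pem
    key_file = /etc/swift/privkey.pem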
18:27 <timburke> renich, looks like it's still trying to connect over http (not https) -- maybe you need to update the endpoint URL in your keystone catalog?
18:27 <renich> timburke: so, I need to change the endpoint to https, then?
18:27 <renich> OK
18:27 <renich> let me try that
18:28 <renich> I can use openstack endpoint set --url whatever some-id
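Concretely, that might look like the following (the URL template and <endpoint-id> are placeholders; the entry to change is the object-store one in the keystone catalog):

    # find the object-store endpoint IDs
    openstack endpoint list --service object-store

    # repoint the public endpoint at https
    openstack endpoint set \
        --url 'https://os.sof.cloudsigma.com:8080/v1/AUTH_%(project_id)s' \
        <endpoint-id>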
18:31 <renich> https://paste.centos.org/view/fabc7d3a
18:31 <renich> Does that seem correct?
18:32 <renich> got this error now: https://paste.centos.org/view/ac27c98c
18:44 <timburke> renich, do you see anything in the proxy server logs?
18:45 <renich> timburke: no. I am gonna turn log_level to DEBUG or something
18:55 <renich> timburke: nothing in the logs, not even with DEBUG log level. I'll try to revert back to http
18:58 <renich> It's strange because curl doesn't want to use port 8080 for https...
18:58 <renich> root@keystone0:~# curl https://os.sof.cloudsigma.com:8080/v1/
18:58 <renich> curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to os.sof.cloudsigma.com:8080
18:59 <renich> maybe I should change the port or something?
19:00 <renich> Or, maybe, keystone should be under https as well?
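A couple of checks that might narrow down whether the proxy is actually terminating TLS on 8080 (a sketch; /info assumes swift's unauthenticated capabilities endpoint is still enabled):

    # verbose handshake against the proxy
    curl -v https://os.sof.cloudsigma.com:8080/info

    # inspect whatever is answering on port 8080 directly
    openssl s_client -connect os.sof.cloudsigma.com:8080 </dev/null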
19:21 <ormandj> timburke: awesome info tim, will look into that.
19:21 <ormandj> thank you
19:22 <ormandj> timburke: only at 75% fill or so, so no worries on that bit
19:25 <ormandj> we just had to drop the number of replication workers from 1x the drive count to 1/8th the drive count so the cluster didn't self-immolate, so it's going to take ~10 days to finish up based on the average data across drives. it was beating the cluster to death with one worker per disk. not sure if servers_per_port would help; that's our next step once we've finished expanding
19:25 <ormandj> the cluster was 500ing out the wazoo with the full 56 replication workers :)
19:26 <ormandj> (even set ionice_class to idle, but it seems to have made 0 difference - the rsync appears to pick a bunch of source dirs at once, so even though it's working as a single process, it drives iops up, and I'm not sure the ionice idle class handles that kind of scheduling well in cfq)
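The tuning ormandj describes presumably maps onto the [object-replicator] section along these lines (the concurrency value is illustrative for a 56-drive node, and the idle class only helps if the disks' I/O scheduler actually honours it):

    [object-replicator]
    # roughly 1/8th of the drive count instead of one worker per drive
    concurrency = 7
    # give replication I/O the idle class so client requests win
    ionice_class = IOPRIO_CLASS_IDLE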
19:39 *** dsariel has joined #openstack-swift
20:26 *** dsariel has quit IRC
21:18 *** baojg has quit IRC
21:19 *** baojg has joined #openstack-swift
22:18 *** openstackgerrit has joined #openstack-swift
22:18 <openstackgerrit> Tim Burke proposed openstack/swift master: Add sorting_method=none to get a consistent node order  https://review.opendev.org/747310
22:46 *** baojg has quit IRC
22:47 *** baojg has joined #openstack-swift
22:51 *** rcernin has joined #openstack-swift
23:29 <timburke> made a doodle for the PTG meeting slots: https://doodle.com/poll/ukx6r9mxugfn7sed
23:36 <seongsoocho> done!

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!