Thursday, 2018-09-20

*** gyee has quit IRC00:42
tdasilvaanybody know/remember why we pop policy_type from info? https://github.com/openstack/swift/blob/master/swift/common/storage_policy.py#L28700:47
*** two_tired2 has joined #openstack-swift00:51
claygfatema_: yes, after that patch x-timestamp became relevant and useful on all proxy->object requests00:56
*** cloudnull has quit IRC01:00
*** mathiasb has quit IRC01:00
*** Guest58757 has joined #openstack-swift01:03
notmynametdasilva: we've historically hidden the details of the durability from the api clients. I think the argument is that an operator can expose that via the storage policy name, if they choose.01:44
notmynamein general, an API client cannot make a smart decision based on the value of that field. there's nothing they can do differently if it's "replicated", "ec", or "fancy_flash_optimized" or anything else we choose to have in the future01:45
notmynameso therefore we hide it01:45
notmyname(at least, that's why I think we hide it, and that argument makes sense to me)01:46
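To make the rationale above concrete, here is a minimal, hypothetical Python sketch of the idea (this is not the actual swift.common.storage_policy code; attribute names are illustrative): the per-policy info dict is built with everything the server knows, and policy_type is popped before it is exposed to API clients via /info.

    def public_policy_info(policy):
        # Build the per-policy entry that will be exposed through /info.
        info = {
            'name': policy.name,                 # e.g. "gold"
            'default': policy.is_default,
            'policy_type': policy.policy_type,   # "replication" or "erasure_coding"
        }
        # Hide how durability is achieved: clients cannot act on this field,
        # and operators who want to advertise it can encode it in the name.
        info.pop('policy_type', None)
        return info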
*** hoonetorg has quit IRC02:04
*** hoonetorg has joined #openstack-swift02:17
*** AJaeger has left #openstack-swift03:17
notmynamemattoliverau: kota_: maybe one final test update follow-on from last week is https://review.openstack.org/#/c/603870/. I just left a comment there showing why the patch is good (i.e. the lower-constraints job no longer skips ~1400 tests)04:23
patchbotpatch 603870 - swift - set up a lower constraints job that uses an XFS tm... - 2 patch sets04:23
notmynamezaitcev: ^04:23
* notmyname is now off to bed04:24
*** two_tired2 has quit IRC05:05
*** Guest58757 is now known as cloudnull05:32
*** gkadam has joined #openstack-swift06:46
*** rcernin has quit IRC07:02
*** e0ne has joined #openstack-swift07:16
alecuyerseongsoocho: We also have 64GB of memory on the object servers. I don't think we saw performance issues before hitting 200m+ files (over 36 disks), but maybe your workload is different, or you have finer monitoring :) I didn't ask how much degradation you're seeing? More RAM would probably help (an in-memory inode uses about 300 bytes). Also, we are working on a way to store small objects in large files on disk.07:57
alecuyerIt's not yet ready, but if you'd like to try it there is a dev environment available here: https://github.com/alecuyer/vagrant-swift-all-in-one/tree/losf-v2.18.007:57
seongsoochoalecuyer:   In my case, 100m+ files per disk (and there are 300 disks in the cluster). Most of the workload is very small objects (<100kb). Thank you for sharing your experience.  I will try to get more memory.08:01
alecuyerseongsoocho: ouch, I assumed per-server, not per disk. We've never been over 70 million inodes per disk. So I don't know how many disks per server you have, but yes more RAM will help. Let us know how that goes08:47
seongsoochoOk . I have 11 disks per server and there are 29 object-servers.  Thanks!08:50
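A rough back-of-the-envelope check, using alecuyer's ~300-byte per in-memory inode estimate from above (treat the figure as an approximation):

    100,000,000 files/disk x 11 disks/server ≈ 1.1 billion inodes per server
    1.1e9 inodes x ~300 bytes                ≈ 330 GB of inode cache per server

so a 64 GB object server can hold only a small fraction of its inodes in memory, which is why more RAM (and fewer inodes per disk) helps here.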
*** jlvillal has quit IRC09:54
*** jlvillal has joined #openstack-swift09:54
*** pcaruana has joined #openstack-swift10:04
*** jlvillal has quit IRC10:50
*** jlvillal has joined #openstack-swift10:53
*** pcaruana has quit IRC11:15
*** pcaruana has joined #openstack-swift11:20
onovyalecuyer: seongsoocho: fyi, we have 6m inodes / disk, 23 disks / store, 128G RAM11:23
*** pcaruana has quit IRC11:32
*** pcaruana has joined #openstack-swift11:39
*** pcaruana has quit IRC11:50
*** gkadam has quit IRC12:46
*** gkadam has joined #openstack-swift12:48
*** gkadam has quit IRC12:48
*** arete0xff has joined #openstack-swift12:56
seongsoochoonovy:  oh thanks.  Do you have any trouble with rebalance?  (when adding a new disk or new node, there is a problem that degrades object server performance)13:05
onovyseongsoocho: no13:08
onovybut we are adding weight by 10% per round13:08
onovyor maybe 15% - 20% :)13:08
seongsoochowhat does 'round' mean? a period of time?13:09
onovyround=waiting for rebalance finish13:09
onovywe are using swift-dispersion to check whether the cluster is rebalanced yet13:11
onovyso if we want to add a new server, we add it with weight '1' then check it. Then we set the weight to ~10% of the final value, rebalance, and wait for the rebalance to end13:11
onovyand then add another 10%, and so on13:11
onovythe docs describe why it's a good idea to do it this way13:12
onovyafk13:12
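A sketch of the gradual-weight procedure onovy describes, using hypothetical device names, IPs, ports, and weights (the final weight is assumed to be 100; see the ring management docs for the authoritative workflow):

    # add the new device with a token weight first, just to verify it works
    swift-ring-builder object.builder add r1z1-10.0.0.50:6200/sdb 1
    swift-ring-builder object.builder rebalance
    # push the new object.ring.gz to all nodes, then check the cluster

    # once it looks healthy, raise the weight by ~10% of the target per round,
    # rebalancing and waiting for replication to settle between rounds
    swift-ring-builder object.builder set_weight r1z1-10.0.0.50:6200/sdb 10
    swift-ring-builder object.builder rebalance
    # ...wait for the rebalance to finish, then 20, 30, ... up to 100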
seongsoochoa ha...  I added weight by 5% with 11 disks, and there was a huge performance degradation. It frequently takes about 10~50 seconds to upload a 50kb file.13:13
seongsoochoswift-dispersion..  I have never used it. thanks.13:14
seongsoochoSo the way to know whether the rebalance is finished or not is to use that tool (swift-dispersion)?13:18
DHEyou'd have to do log parsing to be sure it's done, or run the replicator/reconstructor synchronously13:20
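For reference, hedged examples of the two checks mentioned here (the dispersion tools need a populated dispersion.conf, and the exact log path and wording vary by setup):

    # dispersion approach: populate once, then report coverage after ring changes
    swift-dispersion-populate
    swift-dispersion-report

    # or run a single replication pass in the foreground on a storage node
    # and watch the logs for it to complete
    swift-init object-replicator once
    grep -i 'replication complete' /var/log/syslog    # path/message may differ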
seongsoochook. Actually, it is the first time I have added a new node. I built a huge cluster (1PB) at first.13:22
DHEraw storage or useful storage ?13:23
seongsoochoFor public storage (like aws s3)13:23
*** jistr is now known as jistr|call13:32
*** arete0xff has left #openstack-swift13:36
*** e0ne has quit IRC13:48
onovyseongsoocho: are you 'shaping' rsync+replicator?14:17
onovyit's a good idea to limit the number of replicator threads and incoming rsync connections14:18
onovyafter this, you will not notice replicator at all :)14:18
*** nguyenhai has quit IRC14:18
*** nguyenhai has joined #openstack-swift14:19
seongsoochoYes .14:19
onovywe limit max connections=4 per disk14:19
onovyin rsync14:19
seongsoochoconcurrency of replicator is 1 and bwlimit is 819214:20
onovybwlimit is useless :). you need to limit request concurrency14:20
seongsoochooh.. ok i will check it. ..14:20
onovyhttps://github.com/openstack/swift/blob/master/etc/rsyncd.conf-sample#L2514:21
onovyare you using this?14:21
onovyrsync_module per disk?14:21
seongsoochoyes yes.14:21
seongsoochonot the same, but similar to it14:22
onovycool14:22
onovy7k or 10k disks?14:22
seongsoocho10k disk14:22
seongsoochoand rsync per disk14:22
onovysame as we14:22
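For readers following along, a sketch of the per-disk rsync module setup being discussed, modeled on the rsyncd.conf-sample linked above; module names and paths are illustrative:

    # rsyncd.conf: one module per device, each capped at 4 incoming connections
    [object_sdb]
    max connections = 4
    path = /srv/node
    read only = false
    lock file = /var/lock/object_sdb.lock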
onovydo you have dedicated con/acc servers and/or disk for them?14:22
seongsoochoyes, I separate the nodes for con/acc.14:23
onovycool. we are using same servers, dedicated disk (SSD)14:23
onovyand concurrency: 2 for object-replicator14:24
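Putting onovy's replicator-side settings together, a hedged object-server.conf sketch (values taken from this conversation; rsync_module must match the per-device rsyncd.conf modules shown above):

    [object-replicator]
    concurrency = 2
    rsync_module = {replication_ip}::object_{device}
    # per onovy, bwlimit alone does not help much; the effective throttle is
    # the per-device "max connections" on the rsync side plus low concurrency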
seongsoochoAnd  I have a 10G network replication, but the traffic is not that much..14:24
onovywe are using 1x 10G with trunk (two vlans). one for traffic and one for replicator14:25
onovyand in old stores 2+2 1G14:25
onovydid you tune vm.vfs_cache_pressure?14:26
onovynet.ipv4.tcp_tw_recycle and net.ipv4.tcp_tw_reuse and net.ipv4.tcp_syncookies ?14:26
seongsoochocool.  Do you use apache for running object server? or just running with swift-init ?14:26
onovyno apache, no swift-init :)14:26
onovyusing Debian packages, which runs daemons directly14:26
seongsoochovfs_cache_pressure is default. I think 100?14:27
onovyso almost same as swift-init, but without swift-init14:27
onovytry vfs_cache_pressure 7514:27
onovyworks much better for us14:27
seongsoochook I will try.14:27
onovyit will keep more inodes in cache and less data14:27
onovywhich is what you want14:27
onovyyou really want/need to have all inodes in memory14:27
seongsoochoIs there any good example of kernel configuration for object server?14:28
seongsoochoYes, I want to have all inodes in memory, so I will try 512GB RAM for all object servers14:29
onovyvm.vfs_cache_pressure=75, net.ipv4.tcp_tw_recycle=1, net.ipv4.tcp_tw_reuse=1, net.ipv4.tcp_syncookies=014:29
onovythis is our "good example" :)14:29
onovy(for stores)14:29
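The values onovy lists, expressed as a sysctl drop-in one could apply (the filename is arbitrary; note that net.ipv4.tcp_tw_recycle is known to break clients behind NAT and was removed in Linux 4.12, so treat this as one deployment's example rather than a general recommendation):

    # /etc/sysctl.d/60-swift-object.conf
    vm.vfs_cache_pressure = 75
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_syncookies = 0

    # apply without rebooting
    sysctl --system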
alecuyer"you really want/need to have all inodes in memory" <- this :-)14:30
seongsoocho:-)  Cool14:31
seongsoochoonovy:  thank you for your help! It's time to go to bed here. I will tell you more about my swift cluster later. thanks.14:33
tdasilvaseongsoocho: please do share, when you have a chance, it's always good to know how people are using swift :)14:34
*** jistr|call is now known as jistr14:34
seongsoochotdasilva: Ok. I really want to share my experience with all of our community.14:34
*** e0ne has joined #openstack-swift14:56
*** silor has joined #openstack-swift15:48
*** SkyRocknRoll has joined #openstack-swift15:58
*** gyee has joined #openstack-swift15:58
notmynamegood morning16:26
notmynamecheck the ML! all* the openstack mailing lists are combining into one new list16:36
notmyname*not really "all", but probably all the ones you care about16:36
notmynamehttp://lists.openstack.org/pipermail/openstack/2018-September/047005.html16:37
notmynametdasilva: next outreachy round will kick off after the new year, so if you're still interested in being a mentor, there are a few months to figure out some project options16:39
*** silor has quit IRC16:40
tdasilvanotmyname: thanks for the heads up, i'm assuming i'd need to sign up before that?16:41
*** e0ne has quit IRC16:41
notmynametdasilva: no, not really. from what I remember, it's just important to write down the project(s) in the right place, let the organizers (ie mahatic) know you're interested, and then if someone signs up as an intern, it's only then that you need to get registered on the outreachy site16:42
notmynamesharding state is dumped into recon files, right? so I can use swift-recon (or the /recon endpoint) to get that info from a particular storage node?16:49
openstackgerritTim Burke proposed openstack/swift master: s3 secret caching  https://review.openstack.org/60352917:11
*** gkadam has joined #openstack-swift17:14
timburkenotmyname: yeah, as i recall. i think when i was testing i just went straight to the json file17:40
notmynametimburke: thanks. turns out I didn't have a new-enough swift, which is why I wasn't seeing anything :-)17:40
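For anyone looking for the sharding stats being discussed: a hedged sketch with hypothetical host, port, and paths, assuming a Swift new enough to include container sharding (2.18+). The sharder dumps its stats into the container recon cache; depending on the version they may also be reachable through the recon middleware:

    # read the recon cache file directly on a container server, as timburke did
    python -m json.tool /var/cache/swift/container.recon

    # or, if your version exposes it via the recon middleware on the container server
    curl -s http://container-server.example.com:6201/recon/sharding | python -m json.tool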
*** e0ne has joined #openstack-swift18:10
*** SkyRocknRoll has quit IRC18:29
*** e0ne has quit IRC19:03
*** e0ne has joined #openstack-swift19:30
*** e0ne has quit IRC19:34
*** e0ne has joined #openstack-swift19:43
*** gkadam has quit IRC20:10
*** zaitcev has quit IRC20:28
*** zaitcev has joined #openstack-swift20:40
*** ChanServ sets mode: +v zaitcev20:40
openstackgerritTim Burke proposed openstack/swift master: s3api: Increase max body size for Delete Multiple Objects requests  https://review.openstack.org/60420820:41
*** mrjk has quit IRC21:05
*** mrjk has joined #openstack-swift21:06
*** e0ne has quit IRC21:36
*** timss has quit IRC22:05
*** timss has joined #openstack-swift22:33
*** spsurya has quit IRC22:48
*** spsurya has joined #openstack-swift22:50
*** rcernin has joined #openstack-swift22:53
openstackgerritMerged openstack/swift master: Use templates for cover and lower-constraints  https://review.openstack.org/60073223:19
openstackgerritTim Burke proposed openstack/swift master: s3api: Increase max body size for Delete Multiple Objects requests  https://review.openstack.org/60420823:31
*** rcernin has quit IRC23:36
*** rcernin has joined #openstack-swift23:36
*** gyee has quit IRC23:41

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!