Monday, 2017-03-27

00:35 *** vint_bra has joined #openstack-swift
00:44 *** Renich__ has joined #openstack-swift
00:45 *** Renich__ is now known as Renich
01:01 *** jamielennox is now known as jamielennox|away
01:19 *** jamielennox|away is now known as jamielennox
01:50 <kota_> good morning
01:51 <kota_> onovy: so cute! congrats!
01:52 *** vint_bra has quit IRC
02:35 <mattoliverau> kota_: morning
02:36 <mattoliverau> onovy: \o/
02:36 <kota_> mattoliverau: o/
02:57 <mahatic> good morning
02:57 <mahatic> onovy: congratulations!
03:09 *** links has joined #openstack-swift
03:20 *** sams-gleb has joined #openstack-swift
03:25 *** sams-gleb has quit IRC
03:28 *** ChanServ sets mode: +v mahatic
03:32 *** chosafine has joined #openstack-swift
03:34 <mattoliverau> mahatic: morning
03:34 <mahatic> mattoliverau: o/
03:36 *** chosafine has quit IRC
03:36 *** chosafine has joined #openstack-swift
04:04 *** psachin has joined #openstack-swift
04:14 *** jamielennox is now known as jamielennox|away
04:15 *** gkadam has joined #openstack-swift
04:22 *** jamielennox|away is now known as jamielennox
05:22 *** sams-gleb has joined #openstack-swift
05:26 *** rcernin has joined #openstack-swift
05:27 *** sams-gleb has quit IRC
05:30 *** tone_zrt has joined #openstack-swift
05:32 *** ChubYann has quit IRC
05:44 *** chosafine has quit IRC
05:47 *** oshritf_ has joined #openstack-swift
05:59 *** oshritf_ has quit IRC
06:10 *** oshritf_ has joined #openstack-swift
06:13 *** jaosorior has joined #openstack-swift
06:33 *** psachin has quit IRC
06:38 *** oshritf_ has quit IRC
06:39 *** bkopilov_ has joined #openstack-swift
06:45 *** oshritf_ has joined #openstack-swift
06:50 *** oshritf_ has quit IRC
06:54 *** oshritf_ has joined #openstack-swift
06:58 *** jaosorior has quit IRC
07:03 *** oshritf_ has quit IRC
07:12 *** sams-gleb has joined #openstack-swift
07:16 *** sams-gleb has quit IRC
07:18 *** jaosorior has joined #openstack-swift
07:20 *** pcaruana has joined #openstack-swift
07:24 *** tanee is now known as tanee_away
07:25 *** tanee_away is now known as tanee
07:44 *** tesseract has joined #openstack-swift
07:55 *** amoralej|off is now known as amoralej
08:00 *** ujjain has quit IRC
08:11 *** cbartz has joined #openstack-swift
08:12 *** psachin has joined #openstack-swift
08:12 *** ujjain has joined #openstack-swift
08:12 *** ujjain has joined #openstack-swift
08:20 <acoles> good morning
08:21 <acoles> onovy: congratulations! enjoy
08:26 *** PavelK has joined #openstack-swift
08:33 *** openstackgerrit has quit IRC
08:41 <mahatic> acoles: good morning
08:41 <acoles> mahatic: o/
08:51 *** gabor_antal_ has quit IRC
08:52 *** gabor_antal_ has joined #openstack-swift
09:44 *** oshritf has joined #openstack-swift
10:08 *** geaaru has joined #openstack-swift
10:12 <acoles> hi kota_
10:18 <acoles> mahatic: thanks for looking at patch 449310, that's great. it really is WIP though so expect some changes
10:18 <patchbot> https://review.openstack.org/#/c/449310/ - swift - WIP use composite ring metadata to prevent bad things
10:19 <acoles> mahatic: I'm not yet convinced that the UUID idea is going to be the answer we need
10:22 <mahatic> acoles: sure, np. I would like to test it over, will do that
10:22 *** jordanP has joined #openstack-swift
10:23 <jordanP> hi
10:23 <jordanP> according to http://logs.openstack.org/70/449270/1/check/gate-swift-dsvm-functional-ubuntu-xenial/1cdabc2/console.html
10:23 <jordanP> the tox -e func command is run twice for no reason
10:23 <mahatic> acoles: oic, but md5 definitely seems not to be an option. You could think of conflicts with UUID as well? Per comments from kota_ he seems to be expecting a UUID patch
10:23 <jordanP> Ran: 484 tests in 188.0000 sec. and Ran: 484 tests in 176.0000 sec.
10:24 <jordanP> that's because of https://github.com/openstack-infra/project-config/blob/11eb3a36419411b56230ed843e95bb51a4dfcffc/jenkins/jobs/swift.yaml#L33
10:25 <acoles> jordanP: that's correct
10:25 <jordanP> but it seems weird that the exact same number of tests is run
10:25 <mahatic> hmm, come to think of it, UUID may also face similar issues as md5? I gotta look over some prior comments
10:26 <acoles> jordanP: the two runs use different configs (via the different SWIFT_TEST_CONFIG_FILE values in the yaml you linked to)
10:27 <acoles> jordanP: you'll see a different number of skipped tests for each run
10:28 <jordanP> acoles, yeah you are right, thanks
10:28 <jordanP> I thought it was a misconfig, but all is well
10:28 *** silor has joined #openstack-swift
10:29 <jordanP> so I am working on not running every swift daemon in the default gate jobs. Currently devstack does "swift-init all start" and I'd like to selectively enable only PACO + container-sync
10:29 *** mvk has quit IRC
10:30 <jordanP> there's no job (no tests: neither functional tests nor tempest tests) that requires the entire swift daemon collection
10:30 <jordanP> so I thought maybe we could save some RAM if we just don't run those services
10:31 <acoles> mahatic: potential problem with UUID is (clayg pointed this out to me...) that swift-ring-builder provides a write_builder command to "recover" a builder file from a ring file - so we can't assume that a builder file (with UUID burnt into it) will never be lost :/
10:32 <acoles> mahatic: plus, someone could screw up by copying a builder (with its UUID) to use as a "template" for another (copy builder file, remove all devs, add new devs)
10:32 <acoles> mahatic: so I'm still pondering it
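
The "template" failure mode described above is easy to see in miniature: copy a builder that has a UUID burnt into it, swap out the devices, and two different rings now share one identity. A toy sketch (`FakeBuilder` is a stand-in invented for this example, not Swift's actual RingBuilder class):

```python
import copy
import uuid

class FakeBuilder:
    """Toy stand-in for a ring builder with a UUID burnt into it."""
    def __init__(self):
        self.id = uuid.uuid4().hex   # assigned once, at creation
        self.devs = []

original = FakeBuilder()

# "copy builder file, remove all devs, add new devs":
template = copy.deepcopy(original)
template.devs = [{'device': 'sdb', 'weight': 100}]

# The two builders now describe different device sets but share one
# identity, defeating the use of the UUID to tell component rings apart.
print(original.id == template.id)   # True
```

The same collision happens on disk when a `.builder` file is copied as a starting point, which is why a burnt-in UUID alone cannot guarantee uniqueness.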
10:32 <acoles> jordanP: yeah that job definition can be confusing when you first look at it
10:33 <acoles> jordanP: but there is a reason behind it :)
10:33 <jordanP> yep, it's good, makes sense and it's clever
10:34 <acoles> jordanP: the two runs are for keystone auth and v1/tempauth
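
The puzzle above (two runs, identical "Ran: 484 tests" lines) comes down to the runner counting skipped tests in its total: the suite size is fixed, and only which tests get skipped differs between the keystone and tempauth configs. A toy sketch of that reasoning (the skip counts below are invented for illustration, not taken from the actual gate run):

```python
# The test runner's "Ran:" total includes skipped tests, so both
# configs report the same total; only the skip counts differ.
def summarize(total_tests, skipped):
    """Mimic the shape of a test-runner summary (hypothetical numbers)."""
    return {'ran': total_tests, 'skipped': skipped,
            'executed': total_tests - skipped}

keystone_run = summarize(484, skipped=40)   # invented skip count
tempauth_run = summarize(484, skipped=25)   # invented skip count

print(keystone_run['ran'] == tempauth_run['ran'])          # True
print(keystone_run['skipped'] == tempauth_run['skipped'])  # False
```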
10:37 <mahatic> acoles: ohh, this "so we can't assume that a builder file will never be lost" seems like a potential blocker. thanks for the info - food for thought :)
10:39 <mahatic> getting my feet wet in ring land ;)
10:40 *** silor1 has joined #openstack-swift
10:40 *** silor has quit IRC
10:40 *** silor1 is now known as silor
10:59 *** oshritf has quit IRC
11:31 *** zhurong has joined #openstack-swift
11:32 *** kei_yama has quit IRC
11:44 *** mvk has joined #openstack-swift
11:55 *** sams-gleb has joined #openstack-swift
11:58 *** oshritf has joined #openstack-swift
12:08 *** catintheroof has joined #openstack-swift
12:12 *** zhurong has quit IRC
12:15 *** tone_z has joined #openstack-swift
12:16 *** oshritf has quit IRC
12:16 *** tone_zrt has quit IRC
12:16 *** amoralej is now known as amoralej|lunch
12:19 *** oshritf has joined #openstack-swift
12:25 *** gkadam has quit IRC
12:40 *** links has quit IRC
12:44 *** klamath has joined #openstack-swift
12:51 *** chlong has joined #openstack-swift
13:00 *** amoralej|lunch is now known as amoralej
13:03 *** silor1 has joined #openstack-swift
13:06 *** silor has quit IRC
13:06 *** silor1 is now known as silor
13:19 *** openstackgerrit has joined #openstack-swift
13:19 <openstackgerrit> Alistair Coles proposed openstack/swift master: WIP Use uuid to differentiate component rings in composite  https://review.openstack.org/449310
13:19 <openstackgerrit> Alistair Coles proposed openstack/swift master: Add Composite Ring Functionality  https://review.openstack.org/441921
13:33 *** silor1 has joined #openstack-swift
13:35 *** silor has quit IRC
13:35 *** silor1 is now known as silor
13:38 <jrichli> onovy: congratulations!
13:40 *** vint_bra has joined #openstack-swift
13:48 <openstackgerrit> Merged openstack/swift master: Factor out a bunch of common testing setup  https://review.openstack.org/446185
13:55 *** chosafine has joined #openstack-swift
14:00 *** silor1 has joined #openstack-swift
14:00 *** gkadam has joined #openstack-swift
14:01 *** silor has quit IRC
14:01 *** silor1 is now known as silor
14:09 *** chlong has quit IRC
14:11 *** zhurong has joined #openstack-swift
14:13 *** gyee has joined #openstack-swift
14:13 *** gyee has quit IRC
14:29 *** _JZ_ has joined #openstack-swift
14:30 *** silor1 has joined #openstack-swift
14:32 *** gkadam has quit IRC
14:32 *** silor has quit IRC
14:32 *** silor1 is now known as silor
14:33 <openstackgerrit> Alexandre Lécuyer proposed openstack/swift master: Remove unused returned value object_path from yield_hashes()  https://review.openstack.org/450269
14:36 *** jordanP has quit IRC
14:36 *** jordanP has joined #openstack-swift
14:41 *** zhurong has quit IRC
14:45 *** szaher has quit IRC
14:46 *** szaher has joined #openstack-swift
14:50 *** furlongm has quit IRC
14:54 *** oshritf has quit IRC
15:07 *** rcernin has quit IRC
15:19 <openstackgerrit> Alexandre Lécuyer proposed openstack/swift master: Remove unused returned value object_path from yield_hashes()  https://review.openstack.org/450269
15:20 *** silor has quit IRC
15:20 *** silor1 has joined #openstack-swift
15:22 *** silor1 is now known as silor
15:25 *** chlong has joined #openstack-swift
15:33 *** cbartz has left #openstack-swift
15:42 *** chosafine has quit IRC
15:47 *** chsc has joined #openstack-swift
15:47 *** chsc has joined #openstack-swift
15:54 <openstackgerrit> Karen Chan proposed openstack/swift master: Store version id if restoring object from archive  https://review.openstack.org/437523
16:05 <jrichli> Sounds interesting. I plan to review this ^^ - but no promises
16:16 *** SkyRocknRoll has joined #openstack-swift
16:19 *** chsc has quit IRC
16:22 *** JimCheung has joined #openstack-swift
16:29 *** karenc has joined #openstack-swift
16:29 *** mvk has quit IRC
16:32 *** psachin has quit IRC
16:33 *** jordanP has quit IRC
16:34 <openstackgerrit> OpenStack Proposal Bot proposed openstack/swift master: Updated from global requirements  https://review.openstack.org/88736
16:34 *** jordanP has joined #openstack-swift
16:36 *** zaitcev has joined #openstack-swift
16:36 *** ChanServ sets mode: +v zaitcev
16:36 *** sams-gleb has quit IRC
16:36 <jordanP> notmyname, what do you think of https://review.openstack.org/#/c/450207/ ?
16:36 *** sams-gleb has joined #openstack-swift
16:37 *** rcernin has joined #openstack-swift
16:40 *** sams-gleb has quit IRC
16:41 *** jaosorior has quit IRC
16:54 *** gyee has joined #openstack-swift
16:59 *** sams-gleb has joined #openstack-swift
16:59 *** tesseract has quit IRC
17:08 *** rcernin has quit IRC
17:10 *** chsc has joined #openstack-swift
17:10 *** chsc has joined #openstack-swift
17:12 *** mvk has joined #openstack-swift
17:12 *** pcaruana has quit IRC
17:16 *** chsc has quit IRC
17:17 <openstackgerrit> Tim Burke proposed openstack/swift master: Version DLOs, just like every other type of object  https://review.openstack.org/446142
17:22 *** chlong has quit IRC
17:25 *** Renich__ has joined #openstack-swift
17:27 *** amoralej is now known as amoralej|off
17:29 *** Renich has quit IRC
17:31 *** Renich__ is now known as Renich
17:32 *** Renich has quit IRC
17:35 <timburke> good morning! i'm gonna keep chugging away at https://review.openstack.org/#/c/423906/ -- trying to get through the meaty changes in container/backend.py and container/sharder.py!
17:35 <timburke> no patchbot? no... we just haven't fixed the regex yet
17:35 <timburke> https://review.openstack.org/#/c/423906/
17:35 <patchbot> patch 423906 - swift - Add container sharding to Swift containers
17:36 <jrichli> timburke: it's ok, container/sharder.py sort of gave it away :-)
17:37 *** chlong has joined #openstack-swift
17:44 *** tonanhngo has joined #openstack-swift
17:47 *** tonanhngo_ has joined #openstack-swift
17:48 *** tonanhngo has quit IRC
17:50 *** tonanhngo has joined #openstack-swift
17:51 *** tonanhngo_ has quit IRC
17:57 *** ChubYann has joined #openstack-swift
18:02 *** jordanP has quit IRC
18:15 *** catintheroof has quit IRC
18:15 *** chsc has joined #openstack-swift
18:31 *** d0ugal has quit IRC
18:37 *** Renich has joined #openstack-swift
18:43 <notmyname> good morning
18:43 <notmyname> (barely)
18:47 *** d0ugal has joined #openstack-swift
18:50 <notmyname> https://review.openstack.org/#/c/450207/ is interesting
18:50 <patchbot> patch 450207 - openstack-dev/devstack - Swift: only start the necessary services
18:50 <notmyname> on the one hand, it's probably right. all those services don't need to be run in a 1-replica policy, and tempest and swift func tests aren't testing them anyway
18:51 <notmyname> on the other, how much of a "real" cluster do we want in the gate
18:51 <notmyname> oh, FYI both clayg and timur are out this week
19:07 *** PavelK has quit IRC
19:22 <zaitcev> http://zaitcev.livejournal.com/238656.html VMware OpenStack? I'm curious if they have Swift. VMware owns EMC or vice versa, so that's a ready-made storage business.
19:23 *** samueldmq has quit IRC
19:24 *** samueldmq has joined #openstack-swift
19:24 *** silor has quit IRC
19:24 <notmyname> zaitcev: from what I've seen, most big packaged openstack products (like vmware and even red hat to an extent) have had "(most of the rest of) OpenStack! plus our own in-house storage system with a Swift API"
19:25 <openstackgerrit> Nelson Marcos de Almeida proposed openstack/python-swiftclient master: Removing duplicated doc from client-api  https://review.openstack.org/450426
19:26 *** NM has joined #openstack-swift
19:29 *** jordanP has joined #openstack-swift
19:34 <notmyname> jordanP: oh, I totally agree that 350MB is something. just seems like it's unlikely to be a large percentage of what's being used. of course, a bunch of small things can really matter, too
19:35 <jordanP> notmyname, we've been chasing memory consumption for a month now, and we kinda lack fresh ideas. I get your point, and I agree, but 350 MB out of 8 GB is something. If you have any ideas where to save some more MB, that would be great
19:36 <notmyname> jordanP: to be clear, I'm agreeing with you
19:36 <jordanP> we tried mysql here: https://review.openstack.org/#/c/438668/ but it was reverted
19:36 <jordanP> because of some side effects
19:38 <jordanP> we have a nice tool that regularly prints which process is consuming what amount of memory. for instance: http://logs.openstack.org/68/438668/5/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/a3230f9/logs/screen-peakmem_tracker.txt.gz
19:38 <jordanP> fwiw
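
For the curious, the idea behind such a tracker can be sketched in a few lines of Python. This is a hypothetical illustration of sampling per-process memory by reading `VmRSS` out of `/proc`, not the actual peakmem_tracker script used in the gate:

```python
import os

def top_rss(n=5, proc='/proc'):
    """Return the n largest (rss_kb, process_name) pairs, biggest first."""
    if not os.path.isdir(proc):   # e.g. non-Linux hosts
        return []
    procs = []
    for pid in filter(str.isdigit, os.listdir(proc)):
        try:
            with open(os.path.join(proc, pid, 'status')) as f:
                fields = dict(line.split(':', 1) for line in f if ':' in line)
            rss_kb = int(fields['VmRSS'].split()[0])   # "VmRSS:  1234 kB"
            procs.append((rss_kb, fields['Name'].strip()))
        except (OSError, IOError, KeyError, ValueError):
            continue   # process exited, or kernel thread with no VmRSS
    return sorted(procs, reverse=True)[:n]

for rss_kb, name in top_rss():
    print('%8d kB  %s' % (rss_kb, name))
```

Run periodically and diffed over time, output like this is enough to spot which service's resident set is growing.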
19:39 <notmyname> interesting
19:41 *** zacksh has quit IRC
19:42 *** zacksh has joined #openstack-swift
19:43 <notmyname> if any of us in our products were getting OOM errors and only had 8GB of memory, we'd simply buy more RAM. it's by far cheaper ($ and opportunity cost) than having a team of devs looking for every scrap of memory usage to trim down
19:43 <notmyname> however, when the hardware is donated (a la infra cloud) and the cost is to an open source community, the incentives aren't aligned to solve it with more hardware
19:44 <notmyname> so we end up getting a global community to spend months (cumulatively, if not literally) debugging OOM errors and finding places to save
19:44 <jordanP> yeah, honestly, we've been running with 8GB of RAM since 2015, so maybe we could ask for an increase now
19:44 <jordanP> but we have more and more projects, and more and more jobs to run
19:44 <notmyname> yep
19:45 <notmyname> "run every cloud infrastructure service in a VM simulating a cloud test environment, and do it in 8GB of RAM" seems a little aggressive to me :-)
19:46 <jordanP> yeah, I think the same. But I don't know what else to do.
19:46 <notmyname> containers!
19:46 <notmyname> (of course)
19:46 <jordanP> many projects suffer from a high rate of false negatives
19:46 <jordanP> it's really hurting
19:47 <notmyname> I'm only half joking about the containers thing
19:47 <notmyname> mostly about the "containers" part of it
19:48 <jordanP> I am not sure I see how containers would help here. every container will run on the same host
19:48 <notmyname> but running different services on different nodes seems to be much closer to reality (and significantly raises the amount of HW available per service)
19:54 <jordanP> devstack is not super equipped to do multinode testing yet, and it will mean more test VMs
20:04 *** SkyRocknRoll has quit IRC
20:13 *** jamielennox is now known as jamielennox|away
20:23 *** catintheroof has joined #openstack-swift
20:24 *** jamielennox|away is now known as jamielennox
20:39 *** catintheroof has quit IRC
20:47 <clarkb> I mean we do have multinode testing
20:48 <clarkb> (and I'd like to delete our single node base jobs and replace them with multinode, but I think that scares some people due to complexity)
20:49 <notmyname> clarkb: ironic that openstack devs would be scared of testing complexity for openstack ;-)
20:53 <clarkb> in their defense, a lot of the complexity is "external" to openstack in that you have to set up networking and things in clouds that don't let you control that directly
20:54 <clarkb> but definitely things like live migration etc have had openstack-specific complexity issues there (it turns out that we were enforcing cpu flags on qemu-booted VMs for a long time when we didn't need to, because it's qemu and not kvm)
21:22 *** jordanP has quit IRC
21:25 *** lcurtis has joined #openstack-swift
21:26 <lcurtis> hello everyone...looking at changing the standard drive size in our swift object servers from 4T to 10T...concerned that these higher-weighted drives will get 2x the I/O. Is anyone currently running with mixed drive sizes?
21:29 *** geaaru has quit IRC
21:31 <mattoliverau> Morning
21:41 <mattoliverau> timburke: thanks for looking at sharding btw! It's great to have your eyes over it cause I find myself somewhat shard blind to the code now :p
21:41 <timburke> mattoliverau: yeah, happy to! we all want it, most of us need it, and you've been working on it a while. let's get it landed!
21:43 <notmyname> mattoliverau: yep I'm just now talking to orion in the office about it :-)
21:43 <mattoliverau> \o/
21:44 *** NM has quit IRC
21:45 <mattoliverau> I know there is some way to go, but glad it's finally not a POC anymore.. but that also means there are definitely places we can clean up or do better. I put a question on the trello regarding how to send new pivot ranges (after sharding).. that might bypass .pending for changes to the pivot ranges which could be good. Just FYI
21:49 *** vint_bra has quit IRC
22:02 *** sams-gleb has quit IRC
22:03 *** chlong has quit IRC
22:15 <notmyname> lcurtis: hey. sorry for the delay
22:15 <notmyname> lcurtis: you've just explained why I'm so scared about all these bigger drives hitting the market. they're a *lot* bigger, but the bus isn't any faster and they have essentially the same IOPS. terrifying!
22:16 <lcurtis> notmyname: agreed!
22:16 <notmyname> lcurtis: most clusters will end up with mixed drive sizes over time. basically, you end up adding whatever drive size is the best $/GB at the moment
22:17 <notmyname> lcurtis: but I've always seen one drive size per box (but different in different boxes)
22:17 <lcurtis> do you think it's a problem to have disparate drive sizes? it would seem to generate hotspots in the cluster
22:17 <lcurtis> ie, a rack of servers with 10T drives, higher weight
22:17 <lcurtis> more I/O on those servers?
22:18 <notmyname> no, I don't think it's a problem in general. larger drives (8-10TB) make dense nodes more scary to me (those 60, 80 bay chassis)
22:18 <notmyname> regardless of drive sizes, there are other things we're working on in swift to fix hot spot issues :-)
22:18 <lcurtis> awesome
22:18 <lcurtis> pure magic
22:18 <notmyname> that's the impetus behind a lot of the replication/reconstruction and golang work you may have heard about
22:19 <notmyname> but it's a lot of work, and a long-term project
22:19 <notmyname> but back to your current question... big drives
22:20 <notmyname> I personally like smaller nodes as drives get bigger. but then clayg gripes at me for solving problems with hardware and I feel bad. (but I still like less dense nodes myself)
22:21 <notmyname> lcurtis: it depends on your workload (lots of small files vs lots of big files vs high reads vs high writes vs etc), but the biggest problems you'll see with big drives are with background processes, not client performance
22:22 <notmyname> lcurtis: but the specific answer to your question is that a cluster with mixed drive sizes is totally normal, expected, and supported
22:22 <lcurtis> like the auditor, etc?
22:22 <notmyname> auditing, replication, reconstruction, etc
22:22 <lcurtis> makes sense
22:23 <lcurtis> but wouldn't a device with higher weight see more I/O than peers with lower weight?
22:23 <notmyname> yes
22:23 <lcurtis> we are pretty maxed out on I/O at the moment on the object tier
22:23 <notmyname> add more ;-)
22:23 <lcurtis> okay...i just wanted to make sure my thinking was in line
22:23 <notmyname> to clarify (especially for the searchable channel logs) the "issues" or "problems" here are relative. swift is still awesome at big clusters, lots of data, and lots of traffic
22:24 <lcurtis> yes..thinking was to add more 10t nodes, backfill rest of cluster
22:24 <lcurtis> of 4ts
22:24 <lcurtis> but that seems like a time consuming endeavor
22:24 <notmyname> and although the drives are 5x the size of when the project started (2TB vs 10TB), that's a relatively small multiple. we're not talking orders of magnitude differences like you'd get with flash
22:24 <lcurtis> true
22:25 <notmyname> where is the disk IO going today? is it in the client requests or in stuff like rsync or the background daemons?
22:25 <lcurtis> just sheer puts
22:26 <lcurtis> and rebalancing
22:26 *** gkadam has joined #openstack-swift
22:26 <notmyname> cool (and congrats) :-)
22:26 <lcurtis> at the moment
22:26 <notmyname> yeah, the rebalancing is the major io issue that's being addressed for dense nodes
22:27 <notmyname> rebalancing = either EC reconstruction or replication or both (just "rebalancing" is a lot shorter to type)
22:27 <notmyname> the good news for the PUT load is that it should go up pretty smoothly with more spindles
22:28 <notmyname> you'll need more tuning to get rebalancing to go faster
22:34 <lcurtis> good stuff as always, notmyname
22:34 <lcurtis> thank you
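
A footnote on the weight question above: the ring assigns partitions to devices in proportion to their weights, so with capacity-based weights a 10T drive should expect roughly 2.5x (not 2x) the partitions of a 4T peer, and, for a roughly uniform workload, roughly 2.5x the I/O. A back-of-the-envelope sketch (plain arithmetic, not Swift's actual ring-builder placement code; the device names are made up):

```python
def expected_partition_share(weights, total_partitions):
    """Ideal partition count per device, proportional to its weight."""
    total_weight = float(sum(weights.values()))
    return {dev: round(total_partitions * w / total_weight)
            for dev, w in weights.items()}

# One 4TB and one 10TB device, weights set to capacity in GB,
# in a ring with part_power 14 (2**14 partitions):
shares = expected_partition_share({'d1-4TB': 4000, 'd2-10TB': 10000}, 2 ** 14)

ratio = shares['d2-10TB'] / float(shares['d1-4TB'])
print(shares)
print(round(ratio, 2))   # prints 2.5
```

So mixed sizes work fine for capacity, but the larger drives do proportionally more work, which is exactly the hotspot concern raised above.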
23:02 *** sams-gleb has joined #openstack-swift
23:08 *** sams-gleb has quit IRC
23:21 <openstackgerrit> Merged openstack/python-swiftclient master: Removing duplicated doc from client-api  https://review.openstack.org/450426
23:24 *** lcurtis has quit IRC
23:29 *** kei_yama has joined #openstack-swift
23:45 <notmyname> I have added all of the session topics mentioned on https://etherpad.openstack.org/p/BOS-Swift-brainstorming to the formal submission session
23:45 <notmyname> (I think) you can see all the submitted stuff at http://forumtopics.openstack.org
23:45 <notmyname> at least I can
23:46 <notmyname> there's an opportunity to comment on any of the suggested sessions. please don't hesitate to add comments
23:46 <notmyname> I suspect we'
23:46 *** chsc has quit IRC
23:46 <notmyname> I suspect we'll need to advocate for and show community support for the swift sessions
23:47 <notmyname> and if there's another session you want to see submitted, please let me know.
23:49 <notmyname> I love swob https://review.openstack.org/#/c/423366/
23:50 <notmyname> https://review.openstack.org/#/c/423366/
23:50 <patchbot> patch 423366 - glance - Fix incompatibilities with WebOb 1.7
23:57 *** klamath has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!