Wednesday, 2015-04-08

*** thumpba_ has quit IRC00:04
*** km has joined #openstack-swift00:15
*** tgohad has quit IRC00:15
*** zhill has quit IRC00:25
*** dmorita has joined #openstack-swift00:29
hogood morning!00:36
*** erlon has quit IRC00:41
mattoliverauho: morning01:03
*** mjfork has joined #openstack-swift01:07
homattoliverau: morning!01:13
*** jrichli has joined #openstack-swift01:27
*** kota_ has joined #openstack-swift01:44
*** tdasilva has quit IRC01:49
*** nottrobin has quit IRC02:00
*** tdasilva has joined #openstack-swift02:00
*** jdaggett has quit IRC02:00
*** donagh has quit IRC02:00
*** fanyaohong has quit IRC02:01
*** matt__ has joined #openstack-swift02:01
*** donagh has joined #openstack-swift02:01
*** Trozz_ has joined #openstack-swift02:01
*** goodes has quit IRC02:02
*** zacksh has quit IRC02:02
*** mattoliverau has quit IRC02:02
*** sudorandom has quit IRC02:02
*** Trozz has quit IRC02:02
*** CrackerJackMack has quit IRC02:02
*** jroll has quit IRC02:02
*** matt__ is now known as mattoliverau02:02
*** ChanServ sets mode: +v mattoliverau02:03
*** zacksh has joined #openstack-swift02:03
*** goodes has joined #openstack-swift02:04
*** jdaggett has joined #openstack-swift02:04
*** sudorandom has joined #openstack-swift02:07
*** CrackerJackMack has joined #openstack-swift02:07
*** jroll has joined #openstack-swift02:07
*** jroll has quit IRC02:08
*** jroll has joined #openstack-swift02:08
*** fanyaohong has joined #openstack-swift02:14
*** nottrobin has joined #openstack-swift02:15
*** thumpba has joined #openstack-swift02:52
*** jrichli has quit IRC02:54
*** mjfork has quit IRC03:26
*** thumpba has quit IRC03:56
*** km_ has joined #openstack-swift04:01
*** km has quit IRC04:02
*** kota_ has quit IRC04:05
*** ppai has joined #openstack-swift04:25
*** annegentle has joined #openstack-swift04:49
*** torgomatic has quit IRC04:51
*** torgomatic has joined #openstack-swift04:52
*** ChanServ sets mode: +v torgomatic04:52
*** annegentle has quit IRC04:54
*** km_ has quit IRC05:00
*** cdelatte has quit IRC05:10
*** km has joined #openstack-swift05:11
*** nshaikh has joined #openstack-swift05:13
*** tsg has joined #openstack-swift05:20
*** gyee has quit IRC05:23
*** SkyRocknRoll has joined #openstack-swift05:34
*** SkyRocknRoll has joined #openstack-swift05:34
*** thumpba has joined #openstack-swift05:39
*** thumpba has quit IRC05:41
*** cdelatte has joined #openstack-swift05:47
*** delattec has joined #openstack-swift05:47
*** silor has joined #openstack-swift05:52
*** thomaschaaf has joined #openstack-swift06:27
thomaschaafHello is there any way to decrease the partition size? Or is there a good tutorial on how to do it?06:29
thomaschaafI think my 18 part partition is to large for our project and causing the system to do way to much io for idle usage06:30
*** krykowski has joined #openstack-swift06:37
*** silor has quit IRC06:41
*** welldannit has quit IRC06:47
hothomaschaaf: i think we don't have a function without any downtime so far so you need to re-create a builder files with new part_power then over-write them.06:47
thomaschaafwould I have to move over all files or would they just stay in their current folders?06:48
hothomaschaaf: but I'm not sure that replicators copy replica to approprete place or you need to copy by yourself now.06:48
thomaschaafas in export via web interface and then reimport via web interface?06:48
thomaschaafby webinterface i mean api..06:49
thomaschaafdecreasing the part count would reduce io workload correctly?06:49
hothomaschaaf: as for the workload, I think so and there is a info https://swiftstack.com/blog/2012/05/14/part-power-performance/.06:51
hothomaschaaf: as for the export/import apis, I think swift doesn't have it.06:52
*** tsg has quit IRC06:52
thomaschaafokay hmm maybe thats not my problem then..06:52
thomaschaafshouldn't replication die down and basically end up at doing nothing if no new files are added to the system?06:53
*** kota_ has joined #openstack-swift06:56
hothomaschaaf: a partition is created when object is uploaded. replication works for each partions. i think replicator checks all partitions but copy is only for new partions.06:59
thomaschaafokay so if I decrease the partition count it would actually lower the replication time for object.. but if it is done quicker it doesn't save io but just does it more often correct?07:00
thomaschaafdoes this: https://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0901.png look normal to you for a system that is not getting files added?07:01
thomaschaafhttps://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0902.png07:02
hothomaschaaf: wait a minute.07:06
*** foexle has joined #openstack-swift07:07
hothomaschaaf: your understanding is same as me. we can configure interval time b/w replications.07:10
hothomaschaaf: do you have big server (around 18 cores)?07:10
thomaschaaf8 ocres07:11
*** joeljwright has joined #openstack-swift07:15
hothomaschaaf: you configure the number of object-server. right? uuun... each io (read) looks not high07:17
thomaschaafits actually the container server eating all the cpu07:18
thomaschaafwhich I think is weird07:18
thomaschaafhttps://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0918.png07:19
hothomaschaaf: is your concern cpu usage of container-replicator? yeah, it looks high.07:19
thomaschaafjust not happy with the performance and looking for any possible bottle necks..07:20
hothomaschaaf: first bottle neck should be the cpu usage of container-server/replicator.07:21
thomaschaafhttps://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0923.png07:23
thomaschaafthe syslog just doesnt look like it should use that much cpu07:23
*** geaaru has joined #openstack-swift07:27
*** nshaikh has quit IRC07:28
hothomaschaaf: the interval seems to be 30s and each replication takes 16s.07:30
thomaschaafI guess I can increase that.. still don't quiet understand why it would be so "expensive" cpu wise. Is it doing computation? I thought it was pretty much just network07:31
hothomaschaaf: it looks like no replication here (no_change value) means not network. maybe sql related cpu usage???07:33
*** mmcardle has joined #openstack-swift07:35
thomaschaafmight have to do with: https://bugs.launchpad.net/swift/+bug/126046007:38
openstackLaunchpad bug 1260460 in OpenStack Object Storage (swift) "Container replication does not scale for very large container databases" [Wishlist,New]07:38
hothomaschaaf: I will check it. btw  what is your value of concurrency of [container-replication] in container-server.conf?07:39
hothomaschaaf: Does size of your container db look like this big?07:53
thomaschaaf92 M07:54
thomaschaafand 159 M07:54
thomaschaafhttps://dl.dropboxusercontent.com/u/5910/Jing/2015-04-08_0955.png07:55
*** jistr has joined #openstack-swift08:02
hothomaschaaf: thanks! I don't have access to our cloud so i'm not sure whether it's big08:03
hothomaschaaf: if this meets the bug, i think the interval change is effective.08:07
thomaschaafbut that would cause the files to take longer to arrive at other computers08:08
thomaschaafas in replication would take longer08:09
thomaschaafit seems really what I should do is create more containers so that it is balanced better08:09
hothomaschaaf: yeah. it's trade off problem.08:09
thomaschaafwhat I could do is create a container for each file name beginning such so for abcdef.jpg it would be containername_ab08:10
thomaschaafand then I have way smaller containers08:10
hothomaschaaf: could be. I think it is better to know whether your problem is same as the bug (maybe current spec) first.08:13
*** jordanP has joined #openstack-swift08:14
*** cppforlife_ has quit IRC08:15
*** cppforlife_ has joined #openstack-swift08:16
thomaschaafhow can I reduce the number of containers? there should only be 8 but there seem to be 5048..08:23
hothomaschaaf:  i read the response of notmyname and he said "use many containers". sorry I mis-understood something.08:26
*** acoles_away is now known as acoles08:29
hothomaschaaf: You only have 3 containers which are over M bytes. I'm not sure your situation meets the bug report. Therefore it might be a good idea to tunning your configuration to reduce the container-replicator overload. My idea for this is changing the interval of replications and reducing the number of the concurrency (container-replicator) and check the stats.08:33
hothomaschaaf: i have to leave office now. see you!08:33
*** ho has quit IRC08:34
thomaschaaf:)08:34
thomaschaafthanks08:34
*** km has quit IRC08:43
*** thomaschaaf has quit IRC08:57
*** theanalyst has quit IRC09:04
*** theanalyst has joined #openstack-swift09:07
*** maniOS has joined #openstack-swift09:11
*** maniOS has quit IRC09:12
*** bkopilov has quit IRC09:52
*** jordanP has quit IRC09:55
*** jordanP has joined #openstack-swift10:20
*** jordanP has quit IRC10:35
*** jordanP has joined #openstack-swift10:38
*** annegentle has joined #openstack-swift10:53
*** annegentle has quit IRC10:58
*** mmcardle has quit IRC11:03
*** mahatic has joined #openstack-swift11:30
*** mahatic has quit IRC11:35
*** delatte has joined #openstack-swift11:40
*** aix has joined #openstack-swift11:42
*** cdelatte has quit IRC11:43
*** delattec has quit IRC11:43
*** bkopilov has joined #openstack-swift11:50
*** bkopilov has quit IRC11:52
*** bkopilov has joined #openstack-swift11:53
*** annegentle has joined #openstack-swift11:54
*** bkopilov has quit IRC11:57
*** bkopilov has joined #openstack-swift11:57
*** mahatic has joined #openstack-swift11:57
*** otoolee has quit IRC11:58
*** annegentle has quit IRC11:59
*** bkopilov has quit IRC12:01
*** bkopilov has joined #openstack-swift12:01
*** otoolee has joined #openstack-swift12:02
*** bkopilov has quit IRC12:05
*** nshaikh has joined #openstack-swift12:08
*** nshaikh has left #openstack-swift12:12
*** bkopilov has joined #openstack-swift12:12
openstackgerritMike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test  https://review.openstack.org/17159312:13
*** dmorita has quit IRC12:15
*** bkopilov has quit IRC12:15
*** bkopilov has joined #openstack-swift12:17
*** mmcardle has joined #openstack-swift12:19
*** delattec has joined #openstack-swift12:22
*** bkopilov has quit IRC12:22
*** delatte has quit IRC12:22
*** bkopilov has joined #openstack-swift12:24
*** bkopilov has quit IRC12:28
*** winggundamth has joined #openstack-swift12:34
winggundamthhi. I have problem with object expirer. the files has been found by object expirer and in log said objects expired but in dashboard or swift list command still random shows the file12:39
winggundamthanyone successful on doing object expirer before? did it work correctly and the file is completely deleted or not?12:40
ppaiexpired objects get cleaned up by object-expirer daemon12:47
ppaimake sure the daemon is running12:47
*** kota_ has quit IRC12:52
*** ppai has quit IRC12:53
*** annegentle has joined #openstack-swift12:55
*** jkugel has joined #openstack-swift12:59
winggundamthyes. I'm sure that daemon already running. and already check in log file that it do its job13:02
winggundamthI'm trying to download expired object too but it always showing 404 not found13:03
mandarinewinggundamth: do you have several expirer running?13:03
mandarinealso, did you set the "processes" and "process" variables ?13:04
cschwedewinggundamth: the 404 for an expired object is expected?13:08
*** bkopilov has joined #openstack-swift13:19
*** bkopilov has quit IRC13:19
*** zul has quit IRC13:22
*** zul has joined #openstack-swift13:22
winggundamthmandarine: only 1 object expirer running on proxy node. I have 1 proxy node and 3 storage nodes13:24
winggundamthmandarine: I did not configure processes and process variables.13:24
*** bkopilov has joined #openstack-swift13:24
winggundamthcschwede: yes 404 is expected but I just wonder why swift list still random showing object that already expired. even I wait for a night but it still there13:25
winggundamthwhen I listed with both swift list and horizon dashboard. sometimes it shows but sometimes not13:26
cschwedewinggundamth: granted, a night is a bit long. your updaters and replicators are all running? sounds like one of the container servers has an old DB, and because of that the listing sometimes includes this object and sometimes not13:26
winggundamthyes I suspected that too so I check the storage node log to see when I'm doing swift list so maybe some clue13:27
*** bkopilov has quit IRC13:28
winggundamthbut I found that when I list it random get list from each node but the expired object can show on every node that list13:28
winggundamthfor example I list many times and check the logs on storage node A. it has request but it still random shows and not shows the expired object13:29
*** bkopilov has joined #openstack-swift13:30
winggundamtheven it request to same storage node A13:30
winggundamththat's really make me hit the wall now13:30
*** bkopilov has quit IRC13:32
*** bkopilov has joined #openstack-swift13:33
*** bkopilov has quit IRC13:36
*** SkyRocknRoll has quit IRC13:38
winggundamthanyone has any thought?13:44
ctenniswinggundamth: I think you need to show what you're seeing in a gist13:49
ctenniswinggundamth: if you only have one proxy, the any "list" you are doing is hitting that proxy.  if you're getting inconsistent results back from doing a "swift list" then you have inconsistent object or container data, which means your replicators and updaters aren't talking between your storage nodes.13:50
*** joeljwright has quit IRC13:50
*** joeljwright has joined #openstack-swift13:53
winggundamthI posted detail in here http://lists.openstack.org/pipermail/openstack/2015-April/012195.html13:54
*** jistr has quit IRC13:54
*** jistr has joined #openstack-swift13:54
winggundamthlet me know if the data to solve the problem is not enough and I'll get it for you13:55
*** bkopilov has joined #openstack-swift13:56
*** aix has quit IRC13:56
*** chlong has quit IRC13:58
winggundamthctennis: please see the output from swift-recon here http://pastebin.com/0P7QUZ2J. Do you think it's not consistent between each node?13:59
ctenniswinggundamth: look at the process list on your nodes.  are there replicator and updater processes running?14:00
winggundamthctennis: here process list http://pastebin.com/yFk8mYRp14:03
ctenniswinggundamth: "swift list" ultimately gets information back from a container server.  It sounds like 1 of your 3 container servers has inconsistent data and can't reconcile with the others.14:03
ctennisis that list the same on all three of your storage nodes?14:04
winggundamthchecking right now14:04
winggundamthyes. every nodes are the same14:07
winggundamthanyway can I troubleshoot to check inconsistent between node?14:08
ctennisyeah..14:08
ctennisyou can run "swift-get-nodes /etc/swift/container.ring.gz ACCOUNTNAME CONTAINERNAME"14:09
ctennisand it will give you back a list of info about where that container is stored...more interestingly it will give you back a list of curl commands14:09
ctennisyou can run those curl commands within the cluster to get info about the container db on each machine14:09
*** jistr has quit IRC14:10
ctennisI'm changing locations..but I would recommend in seeing if you find out if/why between your three container databases one is different.14:11
*** jistr has joined #openstack-swift14:12
winggundamthok thank you so much. I'll try it14:14
openstackgerritMike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test  https://review.openstack.org/17159314:16
winggundamthctennis: very nice command. never find this command in any document before...14:18
*** jrichli has joined #openstack-swift14:23
*** khivin has quit IRC14:24
*** aix has joined #openstack-swift14:24
*** vinsh has quit IRC14:27
tdasilvagood morning! just noticed some comments on the ec docs...I'm planning push fixes unless somebody else is already working on it...14:38
tdasilvaI'll add a comment on the patch too...14:38
acolestdasilva: just adding some comments14:39
tdasilvaacoles: hey! welcome back, hope you had a good time14:39
acolestdasilva: yes thanks!14:40
*** vinsh has joined #openstack-swift14:44
openstackgerritKamil Rykowski proposed openstack/swift: More user-friendly output for object metadata  https://review.openstack.org/16401914:44
*** jistr has quit IRC14:45
*** lpabon has joined #openstack-swift14:47
*** jistr has joined #openstack-swift14:51
acolestdasilva: i didn't get far but posted a few doc comments on gerrit14:54
*** erlon has joined #openstack-swift14:55
tdasilvaacoles: cool, thanks! just noticed your +1 on the ec_m, ec_k discussion14:55
tdasilvasounds good to me, I'll go with that14:56
cschwedetdasilva: don’t take my comments too serious, i’m not a native speaker and might be wrong…14:57
tdasilvacschwede: hehe...english is not first language either, so hopefully I'm not writing anything too broken...counting on native speakers to pick up any mistakes14:58
tdasilva*not my first*14:59
*** Akshat has joined #openstack-swift15:10
AkshatHi clayg15:10
AkshatHi ctennis15:10
AkshatHi acoles15:11
AkshatI am tuning my swift cluster for better tps15:11
Akshatcan you please provide me some pointers/best practices to get good numbers15:11
*** jistr is now known as jistr|mtg15:16
openstackgerritJoel Wright proposed openstack/python-swiftclient: Log and report trace on service operation fails  https://review.openstack.org/17169215:16
openstackgerritJoel Wright proposed openstack/python-swiftclient: Log and report trace on service operation fails  https://review.openstack.org/17169215:21
*** takoTuesday has joined #openstack-swift15:26
ctennisAkshat: http://shop.oreilly.com/product/0636920033288.do15:28
takoTuesdayHey guys, Im trying to modify a django project that currently uses django.views.static.serve to serve static html files, and I am wanting to serve documents from swift15:29
*** jistr|mtg is now known as jistr15:30
takoTuesdaydoes any one know of any open source projects I could look at that do this?15:30
ctennistakoTuesday: can you explain in more detail?15:31
ctennistakoTuesday: you can easily just serve static content from swift, I don't think you need an open source project to do anything15:32
takoTuesdayctennis: oh I just meant as an example to see how they serve static content from swift15:32
takoTuesdayctennis: thats all I wanted to look at an open source project for.15:32
ctennistakoTuesday: everything in swift is already a URL, as long as it's publicly readable you can just directly point to the swift object15:33
*** setmason has joined #openstack-swift15:33
cschwedetakoTuesday: have a look at https://github.com/blacktorn/django-storage-swift15:36
takoTuesdaythanks guys Im going to check it out15:36
*** tsg has joined #openstack-swift15:37
*** gvernik has joined #openstack-swift15:39
*** peluse_ has joined #openstack-swift15:42
*** ozialien has joined #openstack-swift15:44
*** peluse has quit IRC15:46
*** ozialien has quit IRC15:47
*** pberis has joined #openstack-swift15:53
gverniknotmyname: you here?15:53
notmynamegvernik: yup. I was just checking emails/IRC buffers15:54
notmynamegvernik: what's up?15:54
gverniknotmyname: in your PTL email you wrote  "There's a company that has written a tape library connector for Swift and is open-sourcing it". Can you give some info on it?15:54
egonWow, that's kind of awesome.15:54
notmynamegvernik: ya. they're working on open-sourcing it, but it's going slow (since it's a new thing for them to do). but I did get permission from them to talk about it. just haven't too much since they haven't actually opened it yet15:55
notmynamebut the company is http://www.bdt.de15:55
notmynamegvernik: and I know your company is talking about it at the summit, too ;-)15:55
notmyname(IBM)15:55
*** jistr has quit IRC15:56
takoTuesdaycschwede: "Once installed and configured, use of django-storage-swift should be automatic and seamless." Does this mean that use of django.views.static.serve will now serve files from swift seamlessly?15:57
gverniknotmyname,: my company is huge, i was not aware someone speak about it actually... thanks for info :) and what is the other drive vendor that make sure that Swift support their media?15:57
notmynamegvernik: but the overall point is that all the different storage media people getting involved in swift is really exciting to me :-)15:57
notmynamegvernik: drive vendor? are you talking about the SMR drives?15:58
mtreinishnotmyname: could you use ltfs to use swift on tape today?15:58
gverniknotmyname,: you wrote "many different media vendors are coming to the Swift community asking how they can ensure that Swift natively supports their media." and than mentioned Kinnetic, SMR...15:59
*** ozialien has joined #openstack-swift15:59
notmynamemtreinish: sort of, maybe. the hard part of swift+tape is the auditing and replication where swift is churning and walking all the data15:59
eikkewouldn't on LTFS be more for a single-copy scenario?16:00
notmynamegvernik: ya, so the seagate kinetic stuff has been talked about for a while. and seagate is working with other drive vendors to ensure that it's not a seagate-only tech. and swift can talk to kinetic devices today16:00
eikkeor even mixed: 2 copies on HD, 1 on tape, prefer the HD copies for retrieval if available16:00
cschwedewohoo, that are some cool news (that swift on tape stuff)16:01
gverniknotmyname,: right...i now recall we even spoke about it in Paris16:01
notmynamegvernik: smr is shingled magentic recording. it's a trick that spinning drives are using to get more density, but it comes with some "interesting" performance considerations16:01
*** ozialien has quit IRC16:01
mtreinishnotmyname: sure, I could see that. I was just thinking about that, it's might not be a good idea16:01
takoTuesdaycschwede: Im confused as to how to actually use django-storage swift16:01
notmynamegvernik: and every drive vendor (both of them!) are working on that tech.16:01
notmynamemtreinish: my thoughts exactly, and unfortunately I'm still waiting to see code rather than just hear about "there is a thing..."16:01
gverniknotmyname,: thanks16:02
notmynamegvernik: with SMR there are some considerations about how to write data, and eventually SMR drives will require some knowledge outside of the drive (ie the kernel, filesystem, or even the app). and that's the kind of stuff I'm looking at playing with this year in swift16:03
openstackgerritMike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test  https://review.openstack.org/17159316:03
notmynamegvernik: not to mention all the cool stuff around flash16:03
*** ultgeek has joined #openstack-swift16:03
notmynamegvernik: the overall summary is "make swift work better for the specific media it's running on"16:03
gverniknotmyname,: is there some session or talk in Vancouver about Swift and specific media?16:04
*** theanalyst has quit IRC16:04
notmynameand I'm not sure exactly what that looks like in all cases. but it is important as we look at improvements in speed and density in swift clusters16:04
notmynamegvernik: conference talk or swift tech sessions?16:04
gverniknotmyname,: swift tech sessions16:05
notmynamegvernik: you can search the conference sessions for the IMB tape ones. and the swift tech sessions haven't been set yet. in fact that's an agenda item for the meeting today :-)16:05
*** ozialien has joined #openstack-swift16:05
ultgeekGreetings all, for the upcoming Kilo release, what version of Swift will be at release? (2.2.2, 2.3.0, other?)16:06
gverniknotmyname,: great... thanks16:06
*** ozialien has quit IRC16:06
notmynameultgeek: 2.3.016:06
ultgeekty16:06
*** Gu_______ has joined #openstack-swift16:06
notmynameultgeek: we're working on getting an RC for it very soon (friday or this weekend or early next week)16:06
*** theanalyst has joined #openstack-swift16:07
*** peluse_ is now known as peluse16:08
*** ChanServ sets mode: +v peluse16:08
Akshatctennis,clayg: What is significance of backlog and max_clients param in config files16:09
ultgeeknotmyname: I saw on launchpad that the 2.3.0-rc1 only has 1 bug fix targeted and no blueprints listed, is there another source for finding what the new features will be?16:09
Akshathow can they impach perf16:09
notmynameultgeek: ya, I haven't done my LP duties yet.16:10
peluseacoles, welcome back from your very short time away....16:10
notmynameultgeek: the biggest thing that will be in 2.3.0 is a beta of erasure codes. but there are several other things. there will be a full change log by the time of the RC16:10
notmynameultgeek: I'm currently working on that (or rather, it's in progress)16:11
ultgeeknotmyname: excellent! thanks. Look forward to seeing it.16:11
notmynameultgeek: me too ;-)16:11
peluseme three :)16:12
notmynameultgeek: I'll likely have it up on gerrit later this week16:12
*** dencaval has joined #openstack-swift16:12
*** foexle has quit IRC16:15
notmynameok, I'm going to finish getting ready and go to the office16:16
*** tsg has quit IRC16:17
cschwedetakoTuesday: i’ll have a look later after dinner, will get back to you16:17
*** ozialien has joined #openstack-swift16:19
*** ultgeek has quit IRC16:20
*** gyee has joined #openstack-swift16:20
*** welldannit has joined #openstack-swift16:21
*** ozialien has quit IRC16:23
*** krykowski has quit IRC16:23
*** aerwin has joined #openstack-swift16:24
gvernikI am writting some middleware, that runs in proxy and object servers. Is there a way inside middleware's code to know if it run in proxy or storage nodes? i guess i can check path for device and partition, but i wonder if there is something else16:25
pelusenotmyname, you there?16:27
*** krykowski has joined #openstack-swift16:30
*** setmason has quit IRC16:30
tdasilvagvernik: are you writing a middleware that could be run on either the proxy or the object servers? or one for each?16:30
*** setmason has joined #openstack-swift16:31
gverniktdasilva,:  same middleware, i activate it all servers, but i want to have some "if" inside code...like "if running in proxy" then... else "in object" ....16:31
openstackgerritMike Fedosin proposed openstack/swift: Fix broken test_policy_IO_override test  https://review.openstack.org/17159316:33
gverniktdasilva,: my question is , is this enough "device, partition, account, container, obj = split_path(req.path_info, 5, 5, True)" to figure out where the middleware runs ( in proxy or other nodes ), but i wonder if there is somethign better to use16:36
tdasilvagvernik: got it...haven't run into this before...trying to look around in the code16:37
gverniktdasilva,: thanks16:37
tdasilvagvernik: would looking at the instance of the app help?16:38
gverniktdasilva,: you mean it's different instance on each server... right?16:39
acolesgvernik: idk, maybe your middleware could pass a /info request down the pipeline, only a proxy will return 2xx?16:40
acolespeluse: hi! yeah, it was too short16:40
tdasilvagvernik: like isinstance(self.app, BaseStorageServer) ??16:41
gverniktdasilva,acoles: thanks... i will check if this will work for me...16:42
*** zhill has joined #openstack-swift16:47
notmynamepeluse: about to get on my bike. what's up?16:52
pelusecschwede, had asked for clarrification the other day and I might have missed it in the scrollback, on the review branch should we go ahead and +A something on 2nd +2 right away?16:53
notmynamepeluse: cschwede: yes. if you're teh 2nd +2 on an ec_review patch, go ahead and +A it16:53
pelusecool16:53
cschwedepeluse: notmyname: thx - i did this already, and nothing exploded :)16:53
notmyname:-_16:54
notmyname:-)16:54
notmynameclayg has put a -2 on the first one, so that prevents the whole chain from landing until it gets removed16:54
winggundamthI try swift-get-nodes but got this Attention! mismatch between ring and item detected!. what does it means?16:54
notmynamewe'll get everything landed to ec_review, then one merge commit patch to master at the end16:55
winggundamthis Account = Tenant in Swift?16:55
notmynameok, I'm getting on my bike now16:55
ctenniswinggundamth: can you post the error?16:57
Akshatctennis : What is significance of backlog and max_clients param in config files17:00
Akshatctennis : can they impact performance17:00
winggundamthctennis: http://pastebin.com/yQpUWrM3 here you go17:01
winggundamthI use Account name is my Tenant name17:01
ctenniswinggundamth: I don't see the "Attention!" piece you were noting17:01
winggundamthwhoops let me recheck again17:03
winggundamthctennis: but do I understand correct about Account = Tenant?17:05
*** Gu_______ has quit IRC17:05
ctenniswinggundamth: Tenant is a Keystone concept,if you aren't using keystone then it's not a swift term.17:05
winggundamthI'm using Keystone17:06
ctenniswinggundamth: then there is usually a correlation between your tenants and swift accounts yes17:06
winggundamth:)17:06
winggundamthctennis: I got it now. if I specify container.ring.gz than you have to combine Account and Container together or else it will show that error17:07
timburkeklrmn: feature-icr-ui-improvements (pr is https://github.com/swiftstack/SwiftStack/pull/673)17:08
winggundamthctennis: just wonder even I put the Container name that isn't really exist but why it still show information without any error?17:08
ctenniswinggundamth: it's just a lookup table, it doesn't verify existence17:09
*** mmcardle has quit IRC17:10
*** annegentle has quit IRC17:10
winggundamthctennis: I check with the Account and Backup that really exist. But when I go to directory that it shown I couldn't find that directory. What do you think?17:11
winggundamthctennis: If I using curl. it will show 404 not found17:11
*** Gu_______ has joined #openstack-swift17:12
*** cutforth has joined #openstack-swift17:13
winggundamth*check with Account and Container that really exist17:16
*** acoles is now known as acoles_away17:16
winggundamthctennis: so it turns out that Account = AUTH_somerandomhash that I get from swift stat -v17:18
*** lcurtis has joined #openstack-swift17:20
*** ozialien has joined #openstack-swift17:22
winggundamthctennis: yes you are right. http://pastebin.com/j3YqDMJ9 there is inconsistent between each storage container. How can I fix this?17:28
*** pberis has quit IRC17:34
notmynamehello17:35
notmynameso this looks interesting https://www.stellar.org/blog/stellar-consensus-protocol-proof-code/17:40
*** jordanP has quit IRC17:44
cschwedetakoTuesday: here you go, a small quickstart guide: https://github.com/cschwede/django-storage-swift#quickstart17:44
*** Gu_______ has quit IRC17:47
*** setmason has quit IRC17:48
torgomaticgvernik: probably the easiest way for a middleware to discover where it's running is to tell it via configuration, i.e. put a line in its config like "running_in = object(/container/account/proxy/whatever)"17:49
*** setmason has joined #openstack-swift17:49
torgomaticfailing that, I believe you could make an OPTIONS request for /, and account/container/object servers will tell you what they are via a header17:49
torgomatic(the Server header, maybe? dunno)17:50
*** foexle has joined #openstack-swift17:50
*** Gu_______ has joined #openstack-swift17:51
gverniktorgomatic,: thanks17:55
*** tsg_ has joined #openstack-swift17:59
*** delatte has joined #openstack-swift18:00
*** annegentle has joined #openstack-swift18:01
*** geaaru has quit IRC18:01
*** krykowski has quit IRC18:01
*** Akshat has quit IRC18:02
*** delattec has quit IRC18:03
*** delattec has joined #openstack-swift18:12
*** delatte has quit IRC18:15
pelusewow, some really good comments on the EC review chain!18:17
claygmorning!18:17
claygwhat'd I miss?18:17
pelusehopefully nothing :)18:18
claygi'm going to start addressing all the great comments on ec_review so you guys can get a clean slate in the am18:19
peluseand good morning!18:19
claygbut please keep looking at them - and assume everything currently marked today will be fixed tomorrow18:19
pelusethat'd be awesome, most at this point are nits (most not all) which is way cool18:19
claygI want to say someone saw something in the async_update/container update stuff about policy vs. int(policy) - but I also think it was one of those things where the policy instance was *more* correct - but the int works because the only thing we do with it is convert it to an int for the headers or pass it to get_async_dir (which can take either an int or policy instance)18:20
*** Akshat has joined #openstack-swift18:20
clayg... maybe there's a real brokeness - but I bet it was just one of those works anyway kind of bugs18:21
pelusecschwede made that comment18:21
claygtorgomatic's fixup on the proxy PUT commit message was great too18:21
pelusein a delete at update call or something like that18:21
claygacoles_away: are you back?  I'm still leaning towards making purge leave .durables and having hash_cleanup_listdir clean them out on the next suffix rehash (vs. trying to sneak fragment indexes into the .durable file names)18:22
peluseclayg, he was here earlier but I'm cool with that.  I like the benefit of the hard linked .durable though18:22
pelusebut I think I'm more in favor at this point in time with going "as is" for beta in that area....18:23
cschwedeclayg: peluse: i think it works because somewhere down the path in async_update it falls back to policy index 018:23
*** aix has quit IRC18:24
claygcschwede: oh, well that sounds less good :P18:25
cschwedeclayg: yeah, that’s what i thought. and it’s not catched in the test because of this fallback (the code itself is covered)18:26
*** delatte has joined #openstack-swift18:26
*** Gu_______ has quit IRC18:26
peluseahhh18:26
claygcschwede: well it must be a fairly week assertion - unless it's only calling it with policy-018:27
claygi *though* i had some memory of wrting a test to hit the async_update code with a non-zero policy that checked the outgoing headers18:27
pelusenot looking at the code now but yeah seems like we can tighten that up18:27
*** annegentle has quit IRC18:28
*** Gu_______ has joined #openstack-swift18:29
*** Akshat has quit IRC18:29
*** Akshat has joined #openstack-swift18:30
*** annegentle has joined #openstack-swift18:30
*** delattec has quit IRC18:30
cschwedeclayg: peluse: i might be wrong though, but either way line 445 or 449 in https://review.openstack.org/#/c/169988/4/swift/obj/server.py is not consistent18:32
claygcschwede: yeah i think it should use policy vs. index - i was just questioning if we're dealing with a lack of testing - or if the tests are doing the right thing and it just works either way18:32
claygcschwede: maybe try to foce it to a literal 0 and see if some test called "make sure async knows how to policy" fails with a 1 != 0 kind of assestion in the headers or the async dir or something?18:33
claygcschwede: anyway - i'll look at it - if we need another test we'll write it ;)18:33
*** lpabon has quit IRC18:34
*** lcurtis has quit IRC18:34
*** delattec has joined #openstack-swift18:35
*** delatte has quit IRC18:38
*** ho has joined #openstack-swift18:50
*** mahatic has quit IRC18:53
*** annegentle has quit IRC18:53
notmynameyou know what time it is?18:55
notmyname5 minutes until the best time of your week: the swift team meeting!18:56
peluseoh yeah18:56
tdasilvanotmyname: the lack of water in your state may be making you guys drink too much tequila18:57
notmynamelol18:57
*** kota_ has joined #openstack-swift18:58
*** acoles_away is now known as acoles18:59
mattoliveraumorning18:59
kota_morning18:59
hogood morning18:59
cschwedeyou guys rock!19:00
notmynameyes thay do!19:00
acolesclayg: i'm back19:01
*** Akshat has quit IRC19:02
acolesclayg: i didn't yet think much more about the FI.durable other than that it might be possible to save the .durable inode using *sym*links, which negates the 2nd of my reasons to do #FI.durable19:06
*** Gu_______ has quit IRC19:07
acolesclayg: may have been my comment about policy being a controller attribute but the i think i could see how that could go wrong when handling COPY etc19:09
*** takoTuesday has quit IRC19:09
*** Gu_______ has joined #openstack-swift19:10
*** petertr7 has joined #openstack-swift19:12
*** annegentle has joined #openstack-swift19:15
claygacoles: well my current plan is to have purge just clean up the .data and make hash_cleanup_listdir remove a hanging .durable if it's the only file in the dir - same as it does with a single expired .ts19:16
acolesclayg: so yeah if thats my comment ignore it i think i figured out the reasons not to, the d[policy] was jus tantalising19:16
claygacoles: yes it's very sexy!  we'll do it once the middleware extractions get done19:17
acolesclayg: yup. i could buy that.19:17
claygacoles: I kept thinking that we don't want per-fi fragments for the same reason we dont' want per-fi tombstones19:18
claygthere's no such thing as a tombstone for a single fragment - either the object is deleted or not; there's no such thing as a durable for a single fragment - either the object is durable or it's not.19:18
claygin the hash suffix syncing we'd have to parse the fragment index out of the durable when we hash to make sure that a durable fi#X is in sync with a durable fi#X+1 - but on the otherhand if the fi#X+1 only has a durable for "the wrong" fi# it's not really "stable" - so I worry about either having to create a .durable for the fi's you're leaving behind - or well i'm just worried :P19:20
acolesclayg: hmm, yeah would have to compare the timestamps of what is going into the suffix hash. yuk.19:22
*** Gu_______ has quit IRC19:23
acolesclayg: you want me to work up a diff for that i.e. purge only .data, HCL takes out stray .durables?19:24
claygacoles: I want to move the part cleanup "if not suffixes" into the start of the "next" run anyway - if that plus "have purge just remove the .data" works out I'll go with that for tomorrow and we can think more about if we have the ondisk file format right - of if we need to make another run at it19:25
claygacoles: maybe?  i've got a good chunk of the reconstructor pulled apart to clean up some stuff in process job and build_job_info19:25
claygbut I technically haven't gotten to ripping the revert job cleanup out of ssync and moving the part cleanup to the top of "the next run"19:26
acolesclayg: moving the revert cleanup out of ssync was also on my list but i'd shelved it in favor of reviewing other stuff19:27
claygI think I can get it all tonight - but any code you write is better than any code I write - so I'm sure anything you throw out I can make use of - I'm just not sure when I'll need it - or if any tests you write would still apply19:27
acoles^^ that ain't true !19:27
claygacoles: yeah np, that's for the better!19:27
claygI'll publish my wip branch - if you're itching to write some reconstructor code today you can play with it19:27
claygsince I'm currently addressing comments (and will be for a couple hours) - anything you do would be fresh and clean to me when I go to pull it later19:28
acolesclayg: no it will be my tomorrow before i can do anything (parents in law visiting!), i can check status of the review tomorrow and start on anything you haven't got round to, or just leave me a note19:29
claygacoles: perfect!19:30
clayghave a good one - thanks19:30
acolesclayg: so were you thinking to wait for reclaim age before clearing out stray .durables, or clear them out immediately they are found?19:32
claygif there's only a .durable - I think we can rip it out?19:33
claygwe only need one for it work work in th eend19:34
claygand they don't do much for us if they're not with a .data file19:34
claygreconstruction will always "commit" the fragment on the remote end before it calls purge on the local - so any .durable that doesn't have a .data with it should have been moved to another node and the .durable should be there.19:35
claygacoles: do you think anyone tested that the ECDiskFileManger's yield hashes doesn't yeild files unless they're durable?19:35
acolesclayg: whoa! does it?19:37
acoles:/19:37
claygacoles: anyway - here's the wip-fix-ec-recon branch - currently has failing tests -> https://github.com/clayg/swift/tree/fix-ec-recon19:38
mattoliverauk, I'm going back to bed, see y'all in a few hours.19:38
kota_mattoliverau: me, too.19:39
*** kota_ has quit IRC19:39
claygacoles: I *think* it skipps files unless they have a .durable - I think that's the correct behavior - i just don't know if it got an explicit test for it - there's a bunch of reconstructor tests that were on my todo19:39
claygw/e it's just beta :P19:39
* clayg is moving to the next patch!19:40
acolesclayg: it *should* only yield objects that have a valid fileset, idk if there is an explicit test tho19:42
claygcschwede: thanks for http://paste.openstack.org/show/199247/ - acoles says he liked it so i'm going to apply it now19:43
claygcschwede: just running tests...19:43
acolescschwede: see clayg shifting blame around there :P19:43
cschwedeclayg: you’re welcome, hope it helps a little bit!19:44
claygacoles: i'm just a figure head - i take credit for none of this19:46
clayggit briancline19:47
clayg^ that is what git bra<tab> get's you in irc19:47
claygi was on fix-ec-recon btw if anyone was wondering :P19:47
acolesclayg: TestEcDiskFileManager.test_yield_hashes_ignores_bad_ondisk_filesets kind of covers it19:50
*** gvernik has quit IRC19:51
*** thumpba has joined #openstack-swift19:52
*** acoles is now known as acoles_away19:55
*** ozialien has quit IRC19:59
*** joeljwright has quit IRC20:01
*** joeljwright1 has joined #openstack-swift20:04
*** Gu_______ has joined #openstack-swift20:06
*** bkopilov has quit IRC20:14
*** annegentle has quit IRC20:17
*** annegentle has joined #openstack-swift20:23
*** bkopilov has joined #openstack-swift20:24
*** tsg_ has quit IRC20:26
*** bkopilov has quit IRC20:32
*** dencaval has quit IRC20:35
*** annegentle has quit IRC20:38
*** annegentle has joined #openstack-swift20:39
*** winggundamth has quit IRC20:42
*** annegentle has quit IRC20:45
*** Gu_______ has quit IRC20:58
*** annegentle has joined #openstack-swift21:00
*** cutforth has quit IRC21:01
*** annegentle has quit IRC21:09
*** Gu_______ has joined #openstack-swift21:09
*** annegentle has joined #openstack-swift21:10
*** Gu_______ has quit IRC21:10
*** setmason has quit IRC21:26
hotdasilva: around?21:26
*** Gues_____ has joined #openstack-swift21:29
*** jrichli has quit IRC21:33
hoI tried to upload a patch with removing a space for "Erasure Code Docs". I got a message "Do you really want to submit the above commits?" and listed up all patches on ec-review. when I executed git review.21:41
hofirst time to see it. so i think it's better to not do this by me. (i'm really nervous about it)21:42
*** Gues_____ has quit IRC21:44
*** Gues_____ has joined #openstack-swift21:47
*** Gues_____ has quit IRC21:48
*** Gues_____ has joined #openstack-swift21:51
mattoliverauMorning.. Again21:56
homattoliverau: morning again :-)21:57
*** jkugel has quit IRC22:01
*** chlong has joined #openstack-swift22:01
*** sandywalsh has quit IRC22:04
claygho: was it just doc changes?22:05
*** sandywalsh has joined #openstack-swift22:05
claygho: you want it to not change any of the dependent patches sha's - but as long as it's the last patch in the chain everything will be automatically rebased - but nothing *should* have changed that would have caused that22:05
claygho: so it's probably fine22:05
claygho: but if you were really unsure you could paste the the output where it's asking you to type yes - and anyone in channel can probably confirm it's safe to push22:06
mattoliverauho: if your going to push a new docs patchset have you addressed all the comments on the patchset?22:07
*** annegentle has quit IRC22:09
*** jamielennox is now known as jamielennox|away22:11
*** annegentle has joined #openstack-swift22:20
claygmattoliverau: even if a fixup doesn't address everyone comments I'll try to audit and make sure any lingering comments since the last time I pushed are marked done.22:20
claygmattoliverau: I'd like to avoid discouraging folks for pushing a fix because they don't feel like they can address *all* the commnets22:21
mattoliverauclayg: cool, good point :)22:21
claygalso why is anyone making comments to the doc patch - just push - if someone thinks they can word it better - then *they* can push ;)22:21
claygmattoliverau: that being said - I thought I saw a comment in email from peluse where I'd apparently dropped an earlier comment in a subsequent revision - the system is not without risk of human failure22:22
mattoliverauclayg: although your a machine, your also still human.. Wrote that sentence makes no sense :p BTW you've been doing an amazing job!22:25
mattoliveraus/wrote/wow22:26
clayglol - i had to read it twice to parse it as nonsensical - i was like "yeah, robot humans, sounds awesome"22:27
*** bkopilov has joined #openstack-swift22:30
vinshSmall question here... in swift.. will a value of "r01z01" for read/write affinity be interpreted the same as "r1z1"  ?22:31
claygvinsh: I'd have to go read the parser - it's probably in proxy.server or proxy.controller.base22:32
claygtorgomatic: ^ do you know off hand?22:32
hoclayg: i'm back. really sorry. I send your time (I don't want to). http://paste.openstack.org/show/200756/22:32
vinshI'm wondering if I can be a bit lazy in some puppet code here :)22:32
torgomaticclayg: vinsh: I'd guess it would; it all gets run through int() before getting jammed into the ring22:32
vinshRight on.  I'll test it out then.22:33
vinshThanks.22:33
claygho: looks good - just type yes - should only effect the docs change (that's the only one where the sha changed)22:35
hoclayg: thanks :-)22:35
*** aerwin has quit IRC22:36
*** annegentle has quit IRC22:36
*** annegentle has joined #openstack-swift22:47
*** Gues_____ has quit IRC22:53
*** annegentle has quit IRC23:03
*** zhill has quit IRC23:04
*** annegentle has joined #openstack-swift23:05
*** chlong has quit IRC23:05
*** bkopilov has quit IRC23:06
*** bkopilov has joined #openstack-swift23:06
homattoliverau: thanks for the advise. i was really nervous about it23:10
*** jamielennox|away is now known as jamielennox23:12
*** annegentle has quit IRC23:16
*** kei_yama has joined #openstack-swift23:32
claygpeluse: that get_object_ring/load_object_ring is a rabbit hole23:39
claygpeluse: but in fairness I think you're right - account.reaper container.sync and obj.reconstructor should probably all be updated to go with use policy instead of policy_index, and load_object_ring instead of get_object_ring23:39
claygpeluse: proxy.server is a special case I think because we don't want to go with policy everywhere until we're ready to use a controller policy attribute - which we should wait to do until after versioned writes and COPY middleware23:40
*** km has joined #openstack-swift23:50
vinshtorgomatic: clayg: Testing has confirmed.  r01z01... works same as r1z1 for read/write affinity in the conf files.23:52
*** vinsh has quit IRC23:55
*** jrichli has joined #openstack-swift23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!