Thursday, 2014-09-04

*** annegent_ has joined #openstack-swift00:03
*** jdag__ has quit IRC00:03
*** jdag___ has joined #openstack-swift00:06
*** annegent_ has quit IRC00:08
*** dmsimard_away is now known as dmsimard00:09
*** dmsimard is now known as dmsimard_away00:10
taras_redbo: are you there?00:11
taras_i did some more testing00:11
*** ekarlso- has joined #openstack-swift00:11
*** swat30_ has joined #openstack-swift00:11
taras_on smaller dbs index doesnt really help00:11
taras_on big ones it helps a lot00:11
taras_but it uses up 30% of the db here00:11
*** stenchlarge has quit IRC00:12
*** swat30 has quit IRC00:12
*** ekarlso has quit IRC00:12
*** thurloat has quit IRC00:12
*** esmute has quit IRC00:12
*** esmute has joined #openstack-swift00:12
*** swat30_ is now known as swat3000:12
*** annegent_ has joined #openstack-swift00:12
*** annegent_ has quit IRC00:12
*** thurloat has joined #openstack-swift00:13
taras_redbo: however one could simulate a hash index with a trigger, a custom hash func and another table00:14
tab_how does multi-site data replication work within a Swift storage policy? If I put 2 replicas of data in a US-region data center and 2 replicas of the same data into an EU-region data center, are both geo regions writable, or is the second ("backup") region in the EU just for read purposes?00:19
taras_though that only cuts ~10%00:19
mattoliverautab_: both regions are writable00:22
taras_i think the proper thing to do is to implement hash indexes in sqlite and get it over with :(00:22
taras_redbo: the other thing to look into is to have a bimodal access pattern00:25
taras_not use the index until some threshold is crossed by db00:25
taras_fadvise for dbs under a few mb00:25
taras_and use indexes for bigger ones00:25
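(A minimal sketch of the ideas taras_ floats above: simulate a hash index with a side table, a trigger, and an application-defined hash function, and only bother once the DB passes some size threshold. The schema, table names and hash function here are illustrative assumptions, not Swift's actual container schema.)

    import sqlite3
    import zlib

    conn = sqlite3.connect("container.db")

    # The hash function lives in the application, so it must be registered on
    # every connection that fires the trigger.
    conn.create_function("name_hash", 1,
                         lambda s: zlib.crc32(s.encode("utf-8")) & 0xffffffff)

    conn.executescript("""
    CREATE TABLE IF NOT EXISTS object (name TEXT, created_at TEXT, deleted INTEGER);
    CREATE TABLE IF NOT EXISTS object_hash (hash INTEGER, obj_rowid INTEGER);
    CREATE INDEX IF NOT EXISTS ix_object_hash ON object_hash (hash);

    -- keep the side table in sync on inserts
    CREATE TRIGGER IF NOT EXISTS object_hash_ins AFTER INSERT ON object
    BEGIN
        INSERT INTO object_hash (hash, obj_rowid)
        VALUES (name_hash(new.name), new.rowid);
    END;
    """)

    def lookup(name):
        # Probe the hashed side table first, then re-check the name itself,
        # since different names can collide on the same 32-bit hash.
        return conn.execute(
            "SELECT o.* FROM object o JOIN object_hash h ON h.obj_rowid = o.rowid"
            " WHERE h.hash = name_hash(?) AND o.name = ?", (name, name)).fetchall()

The "bimodal" variant would only create the extra index (or hash table) once the DB file crosses a size threshold, and rely on fadvise for the small ones.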
*** dmorita has joined #openstack-swift00:27
mattoliverautab_: If you have a 2 region cluster (us and eu) swift will place replicas in both regions, as it will try and keep the replicas as separated as it can, first by region, then zone, then drive, etc. You can tweak this by giving weights to different regions etc.00:28
taras_err nm00:28
taras_now i cant reproduce index-less scans being slower00:29
taras_nm, was using wrong file00:30
torgomatictab_: regions, zones, servers... it's all sort of the same thing, just getting higher latency as you go00:31
openstackgerritBilly Olsen proposed a change to openstack/swift: Fix getaddrinfo if dnspython is installed.  https://review.openstack.org/11661800:31
redboeh.. on SSDs, using the index is way faster than not using the index.00:35
redbohttp://paste.openstack.org/show/105499/00:37
tab_torgomatic, mattoliverau: thx for answers.00:37
openstackgerritBilly Olsen proposed a change to openstack/swift: Fix getaddrinfo if dnspython is installed.  https://review.openstack.org/11661800:39
taras_redbo: depends00:46
taras_it's faster for large db sizes00:46
taras_it's identical for db sizes of a few mb00:46
taras_while being 30% smaller and more likely to fit into ram00:46
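(A rough way to reproduce the kind of comparison in redbo's paste: time the same lookup against copies of a container DB with and without an index on name. The table name, query and file names are assumptions for illustration.)

    import sqlite3
    import time

    def timed_lookup(db_path, name, runs=100):
        conn = sqlite3.connect(db_path)
        start = time.time()
        for _ in range(runs):
            conn.execute("SELECT * FROM object WHERE name = ?", (name,)).fetchall()
        return (time.time() - start) / runs

    # Compare copies of the same DB, one with CREATE INDEX ix_name ON object (name)
    # and one without; EXPLAIN QUERY PLAN shows whether SQLite actually uses it.
    print(timed_lookup("container-noindex.db", "some/object/name"))
    print(timed_lookup("container-indexed.db", "some/object/name"))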
*** esmute has quit IRC00:50
*** esmute has joined #openstack-swift00:50
tab_let's say I have 10 disks in the system, and only 6 of them in use (formatted xfs and ready to go). In case I suddenly lose 2 or 3 disks, will the remaining 4 disks (that are not yet formatted and in use) automatically become available for further swift replication - or can this be solved only manually or by some shell script?00:57
claygtab_: if they're not in the rings they're not getting used00:58
claygtab_: if you have unformatted drives in the rings they'll get error limited pretty fast :\00:59
*** tgohad has joined #openstack-swift01:00
claygtab_: I think the short answer is that swift doesn't do anything like that; you'd format and add capacity in the rings as it comes online and swift will start replicating and evenly distributing data to use the new capacity fairly01:01
*** tgohad__ has joined #openstack-swift01:03
*** tgohad has quit IRC01:03
*** tsg has quit IRC01:03
tab_clayg: thx. maybe also this question for you. Is Swift write speed mostly dependent on the way sqlite works? -> http://www.sqlite.org/wal.html, it says "Write transactions are very fast since they only involve writing the content once (versus twice for rollback-journal transactions) and because the writes are all sequential. "01:05
tab_But there is a problem on read when this sqlite database file is getting bigger: "On the other hand, read performance deteriorates as the WAL file grows in size" - how is this solved in Swift?01:07
redboWe don't use WAL journaling.  We should look at it, though.  The answer to that problem is to force wal checkpoints before the wal file gets too big.01:08
*** addnull has joined #openstack-swift01:09
tab_What is than in use in Swift?01:09
tab_default automatic commit?01:10
redboyeah, it just uses delete journaling, which I think is the default.  I don't think we've really benchmarked any others very well.01:11
claygwell, there's no easy answer for what the "write bottleneck" in swift is - it depends totally on your deployment and workload - did I miss some context where we're specifically looking at how fast a container server can eat object PUTs?01:12
*** bill_az_ has quit IRC01:12
redboWe tried WAL out a couple of years ago when it was really new, and we did have problems with the WAL file getting big under heavily concurrent writes.  We couldn't seem to force it to checkpoint.  Not sure if that's been changed or not.01:12
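(For reference, the WAL knobs being discussed, sketched with Python's sqlite3; Swift itself sticks with the default rollback/delete journal, as redbo says above.)

    import sqlite3

    conn = sqlite3.connect("container.db")

    # Switch to write-ahead logging (the setting persists in the DB file).
    conn.execute("PRAGMA journal_mode=WAL")

    # Checkpoint automatically once the WAL reaches ~1000 pages...
    conn.execute("PRAGMA wal_autocheckpoint=1000")

    # ...or force one explicitly, copying WAL content back into the main file
    # so the -wal file stops growing under sustained writes.
    conn.execute("PRAGMA wal_checkpoint(FULL)")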
*** tgohad__ has quit IRC01:13
tab_clayg: No, you are not missing the context. I would just like to understand how Swift has an advantage over strongly consistent SDS systems. Reading Joe Arnold's book, it says: "To focus on availability over consistency, Swift has no transaction or locking delays.", meaning that Swift should somehow be faster .....01:19
*** tsg has joined #openstack-swift01:32
*** tsg has quit IRC01:41
*** tsg has joined #openstack-swift01:41
*** pitterfumped has joined #openstack-swift01:44
*** pitterfumped has quit IRC01:45
claygtab_: well, idk, you might have to ask joearnold - i don't want to put words in his mouth.  But if you're talking about the tradeoffs of leaning on the A; it's almost always in the context of a failure scenario of some kind.01:47
clayghere it seems like we're talking about how fast a single container db can store object updates - and that's not necessarily representative of the aggregate scaling story for the storage system as a whole01:48
*** tab_ has quit IRC01:50
*** HenryG has quit IRC01:51
*** nosnos has joined #openstack-swift01:54
*** dmsimard_away is now known as dmsimard01:57
*** addnull has quit IRC02:07
*** dmsimard is now known as dmsimard_away02:09
*** theanalyst has quit IRC02:13
*** theanalyst has joined #openstack-swift02:15
openstackgerritMatthew Oliver proposed a change to openstack/swift: Add concurrent reads option to proxy  https://review.openstack.org/11771002:33
*** HenryG has joined #openstack-swift02:40
*** addnull has joined #openstack-swift02:42
zaitcevhttps://ask.openstack.org/en/questions/scope:all/sort:activity-desc/tags:swift/page:1/ - this is all kinds of horrible02:44
claygzaitcev: hey you've been active!  i was doing that for awhile...02:46
*** addnull has quit IRC02:52
*** addnull has joined #openstack-swift02:53
claygzaitcev: wait... what part is all kinds of horrible?02:54
zaitcevclayg: the desperation of people who look for information02:55
*** addnull has quit IRC02:58
*** tsg has quit IRC02:58
*** addnull has joined #openstack-swift03:04
*** tongli has quit IRC03:05
*** Sanchit has joined #openstack-swift04:12
SanchitHi. I am trying to put a container in an account. Just wanted to get some understanding about the X-timestamp (created_at) field and put_timestamp field in account_stat info db file.04:15
SanchitWhich timestamp remains intact and which timestamp changes when performing operations on an account?04:15
*** addnull has quit IRC04:16
*** tkay has joined #openstack-swift04:16
claygzaitcev: sounds mean when you say it like that04:20
claygSanchit: you could poke around at it with swift-[account|container]-info; but as a general rule if it's not exposed in the API it's subject to change (and in fact recently did, at least for containers, wrt storage policies)04:23
claygSanchit: that is to say - why do you ask :D04:23
*** morganfainberg_Z is now known as morganfainberg04:29
claygSanchit: maybe you were just asking who fills in the X-Timestamp header on a container GET/HEAD and that would be at the top of container/server.py - it comes from the created_at field in the db04:29
Sanchitclayg: I am very new to this development environment. I am accessing account_stat, container_stat using sqlite3 from the db file. While all the other fields have updated, the Put-Timestamp hasn't changed even after adding containers and objects within.04:29
claygportante: k, i got my lwn article all set up04:30
claygSanchit: ok04:30
Sanchitclayg: So I just wanted to know what do these timestamps actually depict?04:30
claygSanchit: when the account was PUT probably04:30
claygat least I wouldn't guess it to get updated with every update from the container-updater - but who knows04:32
Sanchitclayg: Although I have figured out that X-Timestamp is the timestamp when the account was actually created, I am still unclear about the put_timestamp field.04:33
claygwell... that's not exposed in the API04:35
claygbut it happens to get updated when a PUT request for an account hits the account server FWIW04:35
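(One way to look at the fields Sanchit is asking about: open the account DB directly and read the account_stat row, which is the same data swift-account-info reports. The path below is a placeholder, and the comments just restate what clayg says above rather than any guaranteed behaviour.)

    import sqlite3

    # Placeholder path - point it at a real account .db file on a storage node.
    db_path = "/srv/node/sdb1/accounts/1024/abc/deadbeef/deadbeef.db"

    conn = sqlite3.connect(db_path)
    cols = ["created_at", "put_timestamp", "delete_timestamp", "status_changed_at"]
    row = conn.execute("SELECT %s FROM account_stat" % ", ".join(cols)).fetchone()

    # created_at:    set when the broker first creates the DB
    # put_timestamp: bumped when an account PUT hits the account server,
    #                per clayg's comment above
    print(dict(zip(cols, row)))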
claygSanchit: I'm about to head out; I might be more helpful if I understand what you were actually trying to get at - rather than chipping around at the edges04:37
claygSanchit: but undirected learning is good too!  just saying I'm about to have to bolt04:37
clayg;)04:37
Sanchitclayg: I am myself unclear at the moment but still Thanks for replying and helping in the best way possible04:38
claygheh - good luck!  feel free to keep posting in channel here while you look around at it - maybe someone will be on later, or some non US folks come online, or we'll see it and respond in the morning04:39
claygg'night04:39
Sanchitclayg: Sure thing. Have a good Sleep!04:39
*** tkay has quit IRC04:53
*** echevemaster has quit IRC05:02
*** ppai has joined #openstack-swift05:05
*** zaitcev has quit IRC05:22
*** addnull has joined #openstack-swift05:28
*** nshaikh has joined #openstack-swift05:31
*** ppai has quit IRC05:42
*** ppai has joined #openstack-swift05:57
*** ttrumm has joined #openstack-swift06:08
*** ppai has quit IRC06:08
*** ttrumm_ has joined #openstack-swift06:16
*** ttrumm has quit IRC06:18
*** nshaikh has quit IRC06:19
*** ppai has joined #openstack-swift06:21
*** bvandenh has joined #openstack-swift06:33
*** kopparam has joined #openstack-swift06:34
*** ttrumm has joined #openstack-swift06:46
*** ttrumm_ has quit IRC06:50
*** nshaikh has joined #openstack-swift06:54
*** joeljwright has joined #openstack-swift06:58
openstackgerritMatthew Oliver proposed a change to openstack/swift: Add concurrent reads option to proxy  https://review.openstack.org/11771007:04
*** ttrumm_ has joined #openstack-swift07:13
*** ttrumm has quit IRC07:16
*** addnull has quit IRC07:23
*** homegrown has joined #openstack-swift07:32
*** geaaru has joined #openstack-swift07:48
*** Sanchit has quit IRC08:25
*** addnull has joined #openstack-swift08:34
*** addnull has quit IRC08:38
*** addnull has joined #openstack-swift08:57
*** foexle has joined #openstack-swift08:58
*** mkollaro has joined #openstack-swift09:10
mkollarodoes anyone know how to debug the S3 tests in tempest? it's just giving me the cryptic message "S3ResponseError: 412 Precondition Failed\nNone" in create_bucket09:14
mkollaroand I have no clue as to what to do with it09:14
*** jdag___ has quit IRC09:51
*** jdag___ has joined #openstack-swift09:53
claygmkollaro: can you read the body of the response?09:55
mkollaroclayg: this is all I'm getting http://pastebin.com/zAyNE0HG09:57
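(Not a fix, but a way to get more than "412 Precondition Failed\nNone" out of boto: catch the exception and print the status, error code and raw response body the S3 endpoint sent back. Host, port, credentials and bucket name below are placeholders.)

    from boto.exception import S3ResponseError
    from boto.s3.connection import S3Connection, OrdinaryCallingFormat

    conn = S3Connection(aws_access_key_id="KEY", aws_secret_access_key="SECRET",
                        host="127.0.0.1", port=8080, is_secure=False,
                        calling_format=OrdinaryCallingFormat())
    try:
        conn.create_bucket("tempest-debug-bucket")
    except S3ResponseError as err:
        print(err.status, err.reason)
        print(err.error_code)   # e.g. "PreconditionFailed"
        print(err.body)         # the XML error document, if the server sent one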
*** dmsimard_away is now known as dmsimard10:03
*** addnull has quit IRC10:05
*** dmsimard is now known as dmsimard_away10:06
*** nosnos has quit IRC10:14
*** nosnos has joined #openstack-swift10:15
*** nosnos_ has joined #openstack-swift10:18
*** nosnos has quit IRC10:19
openstackgerritA change was merged to openstack/swift: code shuffle post expired headers refactor  https://review.openstack.org/11280410:20
*** k4n0_ has quit IRC10:33
*** k4n0 has joined #openstack-swift10:35
*** dmsimard_away is now known as dmsimard10:36
*** aix has joined #openstack-swift10:42
*** erlon has joined #openstack-swift10:46
*** dmorita has quit IRC10:58
*** ttrumm has joined #openstack-swift11:09
*** bkopilov has quit IRC11:12
*** ttrumm_ has quit IRC11:13
*** ppai has quit IRC11:22
*** bkopilov has joined #openstack-swift11:29
*** ppai has joined #openstack-swift11:38
*** pandemicsyn2 has joined #openstack-swift11:49
*** acorwin_ has joined #openstack-swift11:50
*** jroll|dupe has joined #openstack-swift11:50
*** rpedde_ has joined #openstack-swift11:50
*** jroll has quit IRC11:50
*** acorwin has quit IRC11:51
*** rpedde has quit IRC11:51
*** pandemicsyn has quit IRC11:51
*** rpedde_ is now known as rpedde11:51
*** jroll|dupe is now known as jroll11:51
*** ttrumm_ has joined #openstack-swift11:52
*** ttrumm has quit IRC11:55
*** tongli has joined #openstack-swift11:55
*** xrandr has joined #openstack-swift12:26
xrandrHi, I was wondering if OpenStack Swift can also connect to Google Drive, Drop Box, and BOX.NET the way that ownCloud can?12:27
*** mrsnivvel has joined #openstack-swift12:31
*** dosaboy_ is now known as dosaboy12:38
*** tab_ has joined #openstack-swift12:44
*** vr2 has joined #openstack-swift12:49
*** kopparam has quit IRC12:55
*** miqui has joined #openstack-swift12:56
*** kenhui has joined #openstack-swift12:58
*** tsg has joined #openstack-swift12:59
*** foexle_ has joined #openstack-swift13:00
*** foexle has quit IRC13:04
*** tsg has quit IRC13:10
*** ppai has quit IRC13:10
*** k4n0 has quit IRC13:12
btorchxrandr: from what I see on https://owncloud.org/ that seems to be some sync and share software that has the ability, it seems, to connect to other storage like S313:18
btorchxrandr: so shouldn't the question be whether ownCloud can connect to openstack-swift? I don't see it in the list of External Storage support13:19
*** bvandenh has quit IRC13:23
*** bvandenh has joined #openstack-swift13:23
*** tdasilva has joined #openstack-swift13:28
tab_btorch: you can use Swift as a backend for owncloud: http://blog.adityapatawari.com/2014/01/using-openstack-swift-as-owncloud.html13:45
*** tsg has joined #openstack-swift13:46
btorchtab_: cool13:47
tab_btorch: also this: http://doc.owncloud.org/server/6.0/admin_manual/configuration/custom_mount_config.html13:48
*** nosnos_ has quit IRC13:50
*** nosnos has joined #openstack-swift13:51
*** portante has quit IRC13:52
*** nosnos has quit IRC13:55
*** ttrumm_ has quit IRC14:05
*** themadcanudist has joined #openstack-swift14:07
*** peluse_ has joined #openstack-swift14:08
*** annegent_ has joined #openstack-swift14:09
themadcanudisthey guys… does the openstack proxy server issue multiple calls to the storage backend and log them independently?14:13
themadcanudistthat's what it appears in my proxy-server.log14:13
*** cschwede_ has joined #openstack-swift14:13
vr2themadcanudist: what do you mean by "log them independently" ?14:16
*** mkollaro has quit IRC14:17
*** ekarlso- has quit IRC14:17
*** peluse has quit IRC14:17
*** cschwede has quit IRC14:17
*** lpabon has quit IRC14:17
*** annegentle has quit IRC14:17
*** pconstantine_ has quit IRC14:17
themadcanudisttwo requests (with slightly different time-to-completion fields)14:18
themadcanudist1 external request generates two log lines14:19
themadcanudistlike the proxy is going to two storage servers?14:19
vr2possible if you have 2 copies14:19
themadcanudistThat's odd14:20
themadcanudistsec14:20
themadcanudistonly one call to object server14:20
themadcanudistor one log line in object log14:20
themadcanudistbut two lines with diff times in proxy-log14:20
vr2can you print the 2 lines ?14:20
themadcanudisthrm, it's all sensitive info14:21
themadcanudistanything in particular you're looking for?14:21
*** pconstantine_ has joined #openstack-swift14:22
*** ekarlso- has joined #openstack-swift14:22
vr2PUT, GET DELETE ?14:23
themadcanudistHEAD14:23
*** portante has joined #openstack-swift14:23
*** ChanServ sets mode: +v portante14:24
*** mkollaro has joined #openstack-swift14:25
themadcanudistit's so strange. all the calls get double logged14:25
themadcanudistsame tx id's14:25
themadcanudistonly one entry in object-server.log though14:25
tdasilvathemadcanudist: I might be missing something, but the request would go to two different object-servers14:26
vr2and with a GET ?14:26
themadcanudistall calls14:26
themadcanudistGET/HEAD14:26
ahaleproxy head would only go to one object server, as long as it replied quickly and wasn't an x-newest?14:26
ahaleeven then it would only log once right? and have an error log entry for failing to hit the object14:27
themadcanudistwould proxy-logging in two spots of my pipeline:main be the issue?14:27
tdasilvaahale: sorry, didn't see themadcanudist it was a HEAD request14:27
tdasilva*mention it was a HEAD request14:28
vr2possible it is a log configuration issue14:28
themadcanudistvr2 ^ what I posted about having proxy-logging in two spots in my main:pipeline?14:29
themadcanudistthat could be it, no?14:29
*** kenhui has quit IRC14:29
*** kenhui has joined #openstack-swift14:30
ahaleprobably not, its meant to be there twice14:30
themadcanudisthrm14:32
*** 17SAA4BEV has joined #openstack-swift14:33
*** peluse has joined #openstack-swift14:33
*** cschwede has joined #openstack-swift14:33
*** annegentle has joined #openstack-swift14:33
*** sendak.freenode.net sets mode: +v peluse14:33
*** 17SAA4BEV has quit IRC14:33
*** annegentle_ has joined #openstack-swift14:33
*** peluse has quit IRC14:34
*** cschwede has quit IRC14:34
*** annegentle has quit IRC14:34
*** annegentle_ is now known as Guest1862114:34
*** bkopilov has quit IRC14:34
*** lpabon has joined #openstack-swift14:34
*** annegent_ has quit IRC14:37
*** annegentle has joined #openstack-swift14:38
vr2I'm running the functional tests and it seems there is some kind of state (the setup tries to erase old files). Is it possible to force the start from a clean state ?14:38
peluse_vr2, run resetswift before you start14:40
vr2where ?14:40
peluse_its a script provided as part of the SAIO setup, I assume you are using Swift All In One?14:41
vr2no I'm using devstack14:41
peluse_vr2, OK, I've not tried that....14:41
vr2it seems dangerous since it wipes the storage14:41
peluse_well, its for a development env14:41
vr2I'm just interested to know if there is a state in the functional test framework14:42
vr2because it is trying to erase files which were created in previous runs14:42
vr2(it seems)14:42
peluse_read towards the bottom of http://docs.openstack.org/developer/swift/development_saio.html14:42
vr2it seems functests delete everything in the configured accounts14:43
peluse_yes14:43
vr2how to avoid that ?14:43
*** peluse_ is now known as peluse14:44
vr2without wiping the store14:44
*** themadcanudist has left #openstack-swift14:44
vr2I assume it is not possible14:44
peluseits not how the func tests are used, they're meant for testing in a dev env14:44
*** ChanServ sets mode: +v peluse14:44
vr2but there is an option --failed in functests (nosetest)14:46
vr2which replays failed tests14:46
vr2so there is a "memory" somewhere14:47
*** kopparam has joined #openstack-swift14:48
pelusevr2, DK off the top of my head, log file perhaps somewhere there off of where you run it14:49
*** kopparam has quit IRC14:51
*** bkopilov has joined #openstack-swift14:51
*** Dafna has quit IRC14:52
*** kopparam has joined #openstack-swift14:54
notmynamegood morning14:56
notmynamevr2: that "memory" is either in .tox or .testrepository14:56
vr2ok thanks14:56
*** tsg has quit IRC14:56
vr2I applied a "more radical" approach14:56
*** mahatic has joined #openstack-swift14:56
*** acorwin_ is now known as acorwin14:56
*** sandywalsh has quit IRC14:57
*** sandywalsh has joined #openstack-swift14:57
*** nshaikh has quit IRC14:59
pelusenotmyname, good morning15:00
*** kenhui has quit IRC15:04
*** kenhui has joined #openstack-swift15:04
*** dencaval has joined #openstack-swift15:08
*** kenhui has quit IRC15:09
*** kenhui has joined #openstack-swift15:10
acolesnotmyname: re gvernik's migration patch https://review.openstack.org/#/c/64430/ ...15:10
notmynameacoles: what about it?15:12
acolesnotmyname: as it stands if one drops the middleware in pipeline with conf in proxy-server.conf-sample, then any user can copy files off proxy server filesystem15:12
acolesnotmyname: which some might say is undesirable !15:12
notmynameouch15:12
acolesnotmyname: so wondering how we document/present default config so folks are aware before they deploy?15:12
acolesnotmyname: is it sufficient to have the filesystem driver commented out in sample config? so you need to explicitly configure it to be active?15:14
* acoles likes to avoid accidents15:14
tdasilvaacoles: in addition to commenting out, there should also be some kind of warning message above that line, what do you think?15:18
acolestdasilva: agree15:20
notmynameacoles: is there no configured root that all fs migrations have to be anchored at? eg you can migrate any file on the local filesystem as long as its path starts with /foo/bar/baz15:22
notmynameand strip out '..' too15:22
acolesnotmyname: not currently - i had same thought15:23
acolesnotmyname: yeah, strip .. ;)15:23
acolesbut i keep all my passwords under /foo/bar/baz ;)15:23
dencavalhey guys, would it be a possible scenario to configure one-to-one user/container with read/write permission? For example, if I have 100 containers, I would have 100 users and each of them would have access only to its container?15:23
*** kenhui has quit IRC15:24
*** kenhui has joined #openstack-swift15:25
tdasilvaacoles, notmyname: I think gvernik's original idea was to have a root dir, but it was later removed15:27
tdasilvastill, it was set as a header and not part of the config file15:27
acolestdasilva: yes, the 'path' - but as you say that was set in header by user so no restriction15:29
tdasilvaexactly, it would cause same issue...15:29
acolesi'm wondering what a suitable default anchor dir would be, i figure there'd need to be a default value in the conf15:30
notmynameacoles: I'm ok with making that anchor directory required and not putting in a default15:31
acolesnotmyname: yup, so an explicit step needed to open up access15:32
notmynameright15:32
acolesnotmyname: tdasilva: so require anchor dir AND comment out fsystem driver, or just require anchor dir to be configured?15:33
tdasilvaacoles: i would comment out both lines and add a little msg above, otherwise it will cause an error if they don't define the anchor dir15:35
* tdasilva is checking other swift config files for similarities15:36
notmynameacoles: you should be required to set the drivers, and if using the fsdriver, you should set the anchor path15:36
acolesdencaval: are you using keystone auth?15:37
acolesnotmyname: tdasilva: cool, consensus - i'll feed that back to gvernik on gerrit15:38
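(A sketch of the anchor-dir idea being agreed on here: resolve the user-supplied path under an explicitly configured root and reject anything that escapes it. The function name and the migration_root option are assumptions, not the middleware's actual API.)

    import os

    def resolve_under_root(root, user_path):
        root = os.path.realpath(root)
        # realpath collapses "..", symlinks, etc. before the prefix check
        full = os.path.realpath(os.path.join(root, user_path.lstrip("/")))
        if full != root and not full.startswith(root + os.sep):
            raise ValueError("path escapes configured root: %r" % user_path)
        return full

    # e.g. with a hypothetical migration_root = /srv/migration-staging:
    resolve_under_root("/srv/migration-staging", "exports/data.tar")    # allowed
    # resolve_under_root("/srv/migration-staging", "../../etc/passwd")  # raises ValueError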
tdasilvaacoles: another topic on migration middleware...15:38
*** joeljwright has quit IRC15:38
acolestdasilva: sure15:39
tdasilvaI was thinking a little bit further on that idea of a future driver for same cluster container migration, and I think it would make sense to make use of COPY request for that15:40
tdasilvathe issue, is that the current driver API requires a get_object to return the data and the migration middleware handles the putting the object in the new container15:41
tdasilvaso I was wondering if it would make sense to move the upload_object to the DataMigrationDriver class15:42
tdasilvaso that all data migration is handled by the driver15:42
tdasilvaand the middleware is more of a controller15:42
*** tsg has joined #openstack-swift15:44
tdasilvaacoles: does that make any sense? it was just an idea....15:44
*** themadcanudist has joined #openstack-swift15:45
acolestdasilva: yes it makes sense, same had occurred to me but not thought it through yet15:46
themadcanudisthrm, i'm having some really odd behaviour here: when tracing down these swift transactions that take really long (10 sec+) and return a 499, the object-server log shows that it found the object REALLY fast… What's the best way I can trace down why the proxy-server may be taking too long, when the object servers are super-fast?15:46
*** geaaru has quit IRC15:46
acolestdasilva: do you think it can be refactored after this patch lands?15:47
notmynamethemadcanudist: 499 is the catch-all for client disconnects, so if the object server is fast, the things to look at are the proxy cpu or network15:47
* themadcanudist nods15:48
themadcanudistnotmyname: is there some technique/tool so that I may be able to track it down more effectively and possibly definitively?15:50
themadcanudistnetwork and cpu seem fine15:51
themadcanudistit's buys, but nowhere near capacity15:51
tdasilvaacoles: I'm just trying to think of how "big problem" it is right now...I keep trying to come up with other examples of driver that would have the same issues. Basically if we know we definetely need to do this, then I'd say let's get it right the first time15:54
tdasilvaif we can come up with other possible solutions or if this is a corner case, then it can and probably should wait for a future patch15:54
notmynamethemadcanudist: can you reproduce the issue on-demand?15:56
themadcanudisti can't. I see it happening to external clients because there is so much traffic, but when I try to retrieve the same assets, they're very fast for me =(15:56
*** geaaru has joined #openstack-swift16:00
acolestdasilva: looking at it, maybe its a pretty simple refactor now we have the base class16:00
acolestdasilva: will think more about other use cases16:00
*** mahatic has quit IRC16:01
themadcanudistnotmyname: I suppose I'm just looking for a way to identify if the proxy is stuck at keystone, object-get/head/put via object-server, some other operation, or network transfer?16:03
notmynamethemadcanudist: I was just looking back through some email. something you said reminded me of something clayg said in a review recently16:03
themadcanudistah16:03
themadcanudistthanks!16:03
*** mahatic has joined #openstack-swift16:03
notmynamethemadcanudist: something along the lines of sending a 404 back and then the client not reading the body and it gets logged as a 499. I don't remember the exact details16:04
themadcanudisthrm… interesting16:04
notmynamethemadcanudist: so maybe, just maybe, the client is intentionally dropping the connection after getting a certain status code, and that leads to a timeout and 499 on the server side16:06
themadcanudistah, gotcha16:06
themadcanudistthat is a possibility16:06
notmynamethemadcanudist: that answer has the convenient ideas of both being reasonable and calling it the client problem ;-)16:08
notmynamethemadcanudist: which is kinda scary, because it may or may not be the answer16:08
themadcanudistlol16:09
themadcanudistok, how about this one16:09
mahatichello, is there any reason the file system suggested in installing swift is XFS?16:09
themadcanudistmy swift proxy-server is logging every transaction twice, the only difference being the transaction time field at the end, they're different. the object-server logs the request once.16:09
themadcanudisti'm concerned, because sometimes the xaction time is significantly different, i.e. sub-second for one, many seconds for the second one… it happens for all HTTP requests16:10
notmynamemahatic: yes. because xfs is the least-bad when it comes to lots and lots of inodes as the drive gets more full16:10
notmynamethemadcanudist: can you paste your proxy pipeline?16:10
themadcanudistpipeline = catch_errors healthcheck proxy-logging cache swift3 tempurl s3token authtoken keystone proxy-logging proxy-server16:10
mahaticnotmyname, okay16:11
*** Snivilis has joined #openstack-swift16:11
notmynamethemadcanudist: what version of swift?16:12
themadcanudistfolsom16:13
themadcanudist1.7.416:13
themadcanudistsorry, 1.7.516:13
*** Snivilis is now known as m|k|e16:13
notmynamewas proxy-logging supposed to be in the pipeline twice that long ago? maybe that was only done more recently16:14
themadcanudisthrm, possibly16:14
themadcanudisti may have mixed new docs with old install16:14
notmynamethemadcanudist: yup, looks like it. proxy-logging twice was added in 1.8.016:15
themadcanudistkeep the earlier or later one?16:15
notmynamethemadcanudist: I don't remember. let me check16:16
themadcanudistkk, that did work. I kept the second last one, seems to have fixed the issue16:16
notmynamethemadcanudist: the last one16:16
themadcanudisthrm, there is utility to this. I can see how long the operations in the pipeline took =)16:17
notmynamethemadcanudist: also, because this is an open-source project and we're chatting on IRC, don't the "rules" say I have to tell you to upgrade? :-)16:18
*** mwstorer has joined #openstack-swift16:20
acolesclayg: re 103777 ... aaah, thanks for spelling it out! just tried against cluster. 201 is crazy too though ;)16:23
*** kenhui has quit IRC16:24
*** gyee has joined #openstack-swift16:25
*** tkay has joined #openstack-swift16:26
*** tab__ has joined #openstack-swift16:27
*** vr2 has quit IRC16:32
*** jergerber has joined #openstack-swift16:32
*** bsdkurt has joined #openstack-swift16:37
*** annegent_ has joined #openstack-swift16:37
*** annegentle has quit IRC16:38
*** annegent_ has quit IRC16:39
notmynamepeluse: yesterday you said you were going to work on the trello board. have you had a chance yet?16:50
pelusenotmyname, will check it out right now...16:51
pelusenotmyname,  OK, its up to date.  There are a few cards getting close to moving to the right but not there yet...16:56
notmynamepeluse: thanks16:56
pelusetsg,  when you get a chance pelase scrub the PyECLib cards and update as needed.  thanks16:56
*** kopparam has quit IRC16:56
*** kopparam has joined #openstack-swift16:57
*** mahatic has quit IRC16:57
tsgpeluse: i have a couple of updates, will get those posted shortly16:59
*** mahatic has joined #openstack-swift17:00
*** chuck_ has joined #openstack-swift17:05
*** shri has joined #openstack-swift17:07
*** tsg has quit IRC17:07
*** tsg has joined #openstack-swift17:09
*** kopparam has quit IRC17:10
*** bvandenh has quit IRC17:10
*** kopparam has joined #openstack-swift17:11
*** kopparam has quit IRC17:15
tdasilvaacoles: sounds good. I've been thinking about writing another driver just for testing purposes, just haven't gotten around to it yet17:17
*** chuck_ has quit IRC17:19
*** aix has quit IRC17:19
*** annegentle has joined #openstack-swift17:23
*** zaitcev has joined #openstack-swift17:26
*** ChanServ sets mode: +v zaitcev17:26
themadcanudistnotmyname: LOL, re: opensource and telling me to upgrade17:29
notmynamethemadcanudist: :-)17:29
themadcanudistyou would believe it man… I get that in ALL OSS channels I go to.17:29
themadcanudist=D17:29
themadcanudistit's hilarious17:29
notmynamethemadcanudist: "hilarious"17:29
notmynamethemadcanudist: my favorite is the "your use case is invalid and you should do it differently"17:30
themadcanudistLOL17:30
themadcanudisti feel you are the only one on the internet who understands.17:30
themadcanudistbased on those two comments17:30
notmynameswift: the friendly open-source project ;-)17:31
themadcanudistswift: we're self reflective too ;)17:31
*** homegrown1 has joined #openstack-swift17:31
notmynamethemadcanudist: actually, speaking of upgrades, is there anything on my side that I can do to help you do upgrades? of course I prefer people to use later versions. sometimes people don't upgrade because of things in the upstream code. other times it's for internal reasons. I can help with the first, I hope17:33
homegrown1Can anyone point me to the ordering of the middleware in swift? Currently have pipeline = catch_errors healthcheck cache ratelimit swift3 s3token authtoken keystone cname_lookup domain_remap staticweb proxy-server17:35
themadcanudistwell, what's the fear factor of upgrading from 1.7.5 to latest… in terms of object/container/account data consistency and explosion?! also, keystone 2012.4-> latest17:35
notmynamethemadcanudist: I understand. of course, it gets scarier the longer you wait :-)17:35
themadcanudistno kidding17:35
notmynamehomegrown1: what version of swift are you using?17:35
themadcanudistbasically has anyone lost all their data ;)17:35
themadcanudisti can always upgrade in steps anyway, right.17:36
notmynamethemadcanudist: you should read https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/17:36
* themadcanudist nods17:36
themadcanudisti have17:36
themadcanudistthanks!17:36
notmyname:-)17:36
notmynamethemadcanudist: no, I don't know of anyone who has lost data stored in swift17:36
notmynamehomegrown1: the sample proxy-server config file has a good example: https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L7917:37
themadcanudistsweet, that's awesome!17:37
homegrown12.0317:37
notmynamehomegrown1: hmm...that doesn't look like a version number that swift or swiftclient has released. Swift is currently at 2.1.017:39
themadcanudistnotmyname: last question, is there anything in the upgrade path from 1.7.5 -> latest that would change the data/metadata in a way that downgrading would be impossible?17:39
homegrown1notmyname: thanks, having a look. specifically interested in cname_lookup & domain_remap which seems to break things for me at the moment17:40
notmynamethemadcanudist: I don't think so. of course, check the changelog file. the biggest change (and the reason we bumped the major version to 2) is because of storage policies. if you start using multiple storage policies and then downgrade, you can't get the data back. but aside from that, no17:40
notmynamethemadcanudist: I'll go look at it too, just to check17:40
themadcanudistgotcha17:40
themadcanudistthanks dude! much appreciated17:40
homegrown1notmyname: it was built using mirantis fuel (icehouse on ubuntu)17:41
*** annegentle has quit IRC17:42
*** annegentle has joined #openstack-swift17:43
notmynamethemadcanudist: no, nothing that affects the data on disk. https://github.com/openstack/swift/blob/master/CHANGELOG17:44
notmynamethemadcanudist: but there have been some config file changes.17:44
themadcanudistperfect17:44
themadcanudistsorry, one last question.  just curious if there are any notable efficiency/speed increases in the replicator/updater daemons? =)17:45
notmynamethemadcanudist: wow. you don't have global clusters, static large objects, bulk requests, per-disk threadpools, /info17:45
*** annegentle has quit IRC17:47
notmynamethemadcanudist: maybe17:47
* themadcanudist nods17:48
themadcanudistsweet!17:48
notmynamethemadcanudist: eg there is now support for using a separate replication network so you can separately QoS it17:49
notmynamethemadcanudist: and there is now support for running replication to move misplaced partitions first17:50
themadcanudistah, cool17:50
claygwow this default port thing is getting all chicken and the egg on us17:50
claygnotmyname: no wonder we ignored it for so long?17:50
notmynamethemadcanudist: there are also some disk io improvements like using threadpools. this helps the whole server keep up performance when a drive is down17:51
notmynameclayg: what do you mean?17:51
themadcanudistyeah, i saw that17:51
themadcanudisti like17:51
notmynameclayg: did I miss something?17:51
claygnotmyname: i was just looking at the triplo comments on the patch about making bind_port required17:51
notmynameoh, I did miss that. /me goes to look17:52
zaitcevTripple-O is possibly the only place that objects. They could've easily done the right thing months ago. There's a patch in their Gerrit for crissakes. We just need notmyname to confront that l-something dude.17:52
themadcanudistnotmyname: do you recommend I go through a few upgrades from 1.7.5->latest or just jump?17:52
notmynamethemadcanudist: I was wondering that myself. if you're on 1.7.x, you might want to try 1.9.0 then 1.13.1 then 2.1.0. while I think going straight to current would probably work just fine, I think the important part is to have a checkpoint at 1.13.117:54
zaitcevActually lifeless' comment was: "Making this a parameter to the overall deployment feels very wrong given that this is all discoverable via keystone." IOW he didn't like the extra knob.17:54
zaitcevAnd then rwsu folded up and didn't improve the patch as his PTL suggested.17:54
themadcanudistnotmyname: good call17:54
zaitcevOr we could've had Tripple-O write bind_port long ago.17:54
zaitcevI looked at their code, but it's like Prolog to me. I cannot propose that patch in Richard's stead.17:55
openstackgerritOpenStack Proposal Bot proposed a change to openstack/swift: Updated from global requirements  https://review.openstack.org/8873617:55
*** geaaru has quit IRC17:55
*** annegentle has joined #openstack-swift17:56
notmynameclayg: zaitcev: ya, I don't really get those objections either17:57
*** homegrown1 has quit IRC17:57
notmynameclayg: about catching errors in the config file, I didn't because we don't do that for other required config variables17:59
notmynameclayg: wow, I just changed the max_clients setting in the proxy config to abc and that resulted in a whole lot of hurt17:59
zaitcevthemadcanudist: I cannot remember, but could you find out if the format of memcached was changed away from pickle after 1.7.5 or in it or before? That change requires a special procedure: update packages, then flip the config, then restart once again (one by one).18:01
claygwell, this is a case where we know we're taking a known working config file and making it blow up - i think it's worth the extra effort regardless of how bad validation is for other config options.18:01
notmynamezaitcev: themadcanudist: oh yeah. good call on remembering that change18:01
claygnotmyname: IMHO of course18:01
notmynameclayg: ya, I'm not disagreeing18:01
zaitcevthemadcanudist: If you can reboot the whole cluster, then it's a non-problem, but otherwise you need to follow that procedure to upgrade live.18:01
notmynamezaitcev: themadcanudist: that was in 1.7.0!18:02
notmynameso he's already there18:02
zaitcevsafe18:02
mahaticto edit the howto_installmultinode doc, will i have to clone this https://github.com/openstack/openstack-manuals and edit?18:04
mahaticOr, i find the doc in here https://github.com/openstack/swift - or should i edit it here?18:05
notmynamemahatic: no. the doc you are referencing is in the swift source tree18:05
notmynamemahatic: yes, the second thing you said18:05
mahaticokay18:05
mahaticnotmyname, any recommended tool i should be using?18:06
notmynamemahatic: https://github.com/openstack/swift/blob/master/doc/source/howto_installmultinode.rst18:06
notmynamemahatic: it's restructured text, so just a text editor18:06
*** annegentle has quit IRC18:06
themadcanudistzaitcev: Thanks!18:06
mahaticnotmyname, okay. thank you.18:06
notmynamemahatic: you'll have to jump through the "getting started to contribute to openstack" steps18:06
themadcanudistso it doesn't affect me18:07
themadcanudist1.7.518:07
notmynamemahatic: clone it, install git-review, set up git-review, then do the edits, then push to gerrit18:07
*** wolsen has joined #openstack-swift18:07
notmynameoh, sign the CLA. and make sure you have a launchpad account. *sigh*18:07
notmynamethemadcanudist: correcty18:07
notmynames/y//18:08
mahaticnotmyname, heh, yup, have created the accounts, ssh etc. I had earlier tried to push a sample file.18:09
mahaticnotmyname, When i was trying to learn git18:11
*** annegentle has joined #openstack-swift18:18
*** mahatic has quit IRC18:19
*** mahatic has joined #openstack-swift18:24
*** mkollaro has quit IRC18:36
notmynamedid you see the email about a new openstack project? "Introducing Poppy - An Open API for CDN Provisioning"18:43
torgomaticwho can keep track any more?18:43
*** elambert has joined #openstack-swift18:44
zaitcevAt least Stacatto died. I think.18:45
notmynamewhat was that?18:45
*** jdag___ is now known as jdaggett18:45
zaitcevA service that copied images Glance-Glance.18:45
zaitcevThe guy who pushed it quit and went to work for Dell, and we did not hear about it since.18:46
zaitcevhttps://github.com/stackforge/staccato/commit/5876e8664d593ae31d8de59d255af15de13b7328 - last commit almost exactly 1 year ago18:47
openstackgerritA change was merged to openstack/swift: Refactor object PUT to make EC easier to add  https://review.openstack.org/11408418:47
zaitcevhttps://github.com/stackforge - wow, what a zoo18:48
zaitcev11 pages18:49
dfghurricanerix_: https://github.com/dpgoetz/tcod/blob/master/txtime18:49
dfgoops18:49
*** mikehn_ is now known as mikehn18:57
zaitcevHi Pete, Are you familiar with OpenStack Swift but would like to learn more?  Now's your chance!19:01
zaitcevAbsolutely19:01
wolsenhey torgomatic, thanks for the review and feedback on https://review.openstack.org/#/c/116618/ - have a minute to talk about it?19:04
*** physcx has joined #openstack-swift19:12
*** themadcanudist has left #openstack-swift19:18
*** jasondotstar is now known as jasondotstar|afk19:24
*** StevenK has quit IRC19:38
*** IRTermite has quit IRC19:40
*** StevenK has joined #openstack-swift19:43
*** IRTermite has joined #openstack-swift19:52
*** tdasilva has quit IRC20:02
*** astellwag has quit IRC20:08
mahaticnotmyname, i checked out the branch source, edited the howto_installmultinode.rst and did a commit20:15
notmynamegreat20:15
notmynamemahatic: it's in one commit right? not several20:15
mahaticnotmyname, http://paste.openstack.org/show/106011/20:17
notmynamemahatic: great. now you need to do `git review` to send it to gerrit :-)20:17
openstackgerritMahati proposed a change to openstack/swift: Added instructions for mounting with a label.  https://review.openstack.org/11919320:20
mahaticnotmyname, done!20:20
*** tsg has quit IRC20:22
openstackgerritA change was merged to openstack/swift: fix my name in AUTHORS  https://review.openstack.org/11708120:26
*** tsg has joined #openstack-swift20:29
*** astellwag has joined #openstack-swift20:30
*** elambert has quit IRC20:31
*** pberis has joined #openstack-swift20:40
*** tgohad has joined #openstack-swift20:41
*** tsg has quit IRC20:43
openstackgerritA change was merged to openstack/swift: warn against sorting requirements  https://review.openstack.org/11869420:44
*** mkollaro has joined #openstack-swift20:54
*** jasondotstar|afk is now known as jasondotstar20:56
torgomaticwolsen: I'm here now if you still are21:00
wolsentorgomatic, I am :)21:00
*** mkollaro has quit IRC21:00
tab__how does the per-disk thread pool work? does that mean that with this parameter someone specifies how many I/Os are allowed to a disk? Does this parameter apply to all disks, or can it be configured per disk?21:00
wolsentorgomatic, so again thanks for your review - and I think you make some valid points about the blocking nature of the call21:00
torgomatictab__: it's pending-IO/disk/worker21:00
torgomatictab__: and it applies to each disk equally; if you want different values per disk, have one obj server per disk with a different config21:01
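(An illustration of the per-disk threadpool idea: each device gets its own small pool of blocking-IO threads, so a slow or dying disk only backs up its own queue. Swift uses its own threadpool helper for this (the disk IO threadpools notmyname mentioned earlier); this generic sketch, with made-up paths and pool size, just shows the shape.)

    from concurrent.futures import ThreadPoolExecutor

    THREADS_PER_DISK = 2
    _pools = {}

    def pool_for(device):
        # one small pool of blocking-IO threads per device
        if device not in _pools:
            _pools[device] = ThreadPoolExecutor(max_workers=THREADS_PER_DISK)
        return _pools[device]

    def read_chunk(path, offset, length):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(length)

    # IO for sdb queues behind sdb's threads without touching sdc's
    future = pool_for("sdb").submit(read_chunk, "/srv/node/sdb/tmpfile", 0, 65536)
    data = future.result()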
torgomaticwolsen: I hope it's not too harsh criticism... the good news is swift has a threadpool in it already that you could grab21:02
wolsentorgomatic, one thing about this change is that it's not limited to the dns resolution in the cname_lookup middleware... unfortunately, when dnspython is in the mix and eventlet does the monkey patching, it patches socket.getaddrinfo with an ipv4-only implementation, which affects things such as socket connections21:03
wolsentorgomatic, oh no not at all - its why we have the reviews and discussions :)21:03
torgomaticwolsen: right, but if rings have IP addresses in them and not hostnames, then I can't think of much else besides cname_lookup that requires connections to things...21:04
torgomaticI mean there's memcache servers, but typically one uses IPs for those as well21:04
torgomaticdo you have some more things that break? I'm sure I can't think of all of it21:04
*** annegentle has quit IRC21:05
wolsenhmm, I'm not fully aware of everything that would break - I'll need to do some more investigation on that - my first thought was going to the socket.create_connection21:05
wolsenthe ideal solution would be to get eventlet to support ipv6 :)21:07
torgomaticwolsen: sure, anything that feeds in a hostname there will have troubles; typically, though, cluster operators deploy configs with IPs everywhere possible so that DNS failures don't impact the cluster21:07
torgomaticwolsen: well, yes... go bang on that pull request some more ;)21:07
torgomaticFWIW, Swift will soon(ish) be requiring the latest eventlet instead of 0.9.whatever because of some new features in 0.15.221:07
torgomaticwell, one new feature21:07
*** IRTermite has quit IRC21:08
wolsenah interesting, well it will be good to be closer - I think the eventlet folks are just as interested in getting ipv6 in, but they want to make sure it's baked properly21:08
*** IRTermite has joined #openstack-swift21:08
torgomaticwolsen: right, but notably that makes the dependency greater than what ships with Trusty *or* RHEL 7, so there'll be less resistance to bumping it even further21:09
torgomaticSwift typically tries to stay with versions that distros ship for ease of deployment21:09
torgomatic(library versions)21:09
wolsentorgomatic, yeah that makes sense - easier for everyone :)21:10
torgomaticit's like static friction vs. dynamic friction... it takes a lot to break free of a distro version, but then it's easier to keep it rolling21:10
wolsentorgomatic, so assuming I find nothing else that breaks, a threadpool wrapper around the dns resolution in the cname middleware would be preferred21:11
wolsenover what is proposed at least21:11
torgomaticwolsen: well, in order of preference... (1) fix eventlet (2) workaround in cname_middleware (inf) config for blocking DNS21:12
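(A sketch of option (2) above, a workaround in cname_lookup-style middleware: push the blocking resolver call onto eventlet's native thread pool so it doesn't stall the hub. The wrapper name is an assumption; tpool.execute is real eventlet API.)

    import socket
    from eventlet import tpool

    def resolve_off_hub(hostname, port=None):
        # tpool.execute runs the blocking call in a real OS thread and lets
        # other greenthreads keep running while it waits.
        return tpool.execute(socket.getaddrinfo, hostname, port)

    # a cname_lookup-style middleware could call this instead of hitting the
    # (possibly monkey-patched, IPv4-only) resolver on the hub directly:
    # addrs = resolve_off_hub("storage.example.com")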
pelusetorgomatic, quick sanity check if you could.  It is the case that if I have one HDD die and it had N partitions on it (a) replicators would work to push copies to handoffs and (b) there could potentially be as many unique handoff nodes as partitions.  Is there anything incorrect about either of those statements?21:12
torgomaticwolsen: also some load testing with the blocking DNS would be required to make sure nothing else breaks... that can be done in parallel with the cname_lookup changes by removing cname_lookup from the test system's pipeline21:12
peluses/It is/In21:13
torgomaticpeluse: I'm not certain about (a), but (b) is correct21:13
torgomaticand that's not a veiled "I think you're wrong", but real uncertainty :)21:13
pelusetorgomatic, OK, thanks.  That's where I was least certain, I'll go dig into the code for (a).  Thanks!21:13
wolsentorgomatic, yeah that's my understanding of the preferred ordering as well21:13
wolsentorgomatic, thanks for the chat and review - I'll investigate this a bit more and come back as needed21:15
torgomaticwolsen: hope it helped21:15
wolsentorgomatic, it did until I have more questions :)21:16
torgomaticwolsen: I'm typically here. :)21:16
wolsentorgomatic, awesome21:16
torgomaticalso, I (like most Swift cores) have an IRC bouncer, so ask anytime and I'll respond when I get back to my desk21:17
wolsenack21:17
pelusetorgomatic, so FYI it looks like (a) is a correct statement.  Assuming the remote diskfile does a mount_check (def yes) and os.lstat errors if the drive is totally dead, the remote server REPLICATE handler will return disk full and the local replicator will skip it and move to the next in the node list so will indeed replicate to a handoff21:30
torgomaticpeluse: good to know, thanks21:30
pelusetorgomatic, relevant for the new section I'm putting in the reconstructor section in the EC spec based on your review of the reconstructor framework.... thx21:31
*** mahatic has quit IRC21:34
wolsentorgomatic, fyi it appears that you cannot bind to an ipv6 address with eventlet in the mix without bypassing the greendns implementation in evently21:37
wolsens/evently/eventlet/21:37
torgomaticwolsen: hm... well, that's unfortunate21:37
wolsentorgomatic, indeed21:37
wolsentorgomatic, so it seems that another work around would need to be provided or wait for upstream21:38
torgomaticwolsen: does the proposed patch fix that, or is it another issue?21:38
*** annegentle has joined #openstack-swift21:38
wolsentorgomatic: the proposed patch does fix it - but at the cost that you mentioned - what the proposed patch does is cause the socket.getaddrinfo and such to not be patched and use the default (blocking) impl21:39
*** annegentle has quit IRC21:40
wolsentorgomatic, I'm getting a sense that this is quite a difficult problem beyond setting the env variable since it seems other openstack projects have gone the env variable route for resolving the ipv6 issue21:40
torgomaticwolsen: yeah, it gets tricky since Swift typically services lots more requests (concurrency and total) than other OpenStack projects do21:41
torgomaticyou don't call the Nova API once the VM is up, you know?21:42
wolsentorgomatic, oh I absolutely acknowledge that21:42
wolsenright21:42
wolsenyeah swift has a much different use case21:42
wolsenwith >>> volume of calls21:42
torgomaticwolsen: if tests with that patch and without cname_lookup don't show any other blocking-DNS slowdowns, then taking that patch (at least until eventlet gets sorted out) seems (to me) like a reasonable thing to do21:43
torgomaticwell, plus the cname_lookup changes, but whatever21:43
wolsentorgomatic, yeah and I think its probably best to config option it as well21:44
torgomaticwolsen: yes21:44
wolsenI suppose the question would be is it only proxy server that needs it21:45
openstackgerritpaul luse proposed a change to openstack/swift-specs: Updates to EC Design Spec  https://review.openstack.org/11648621:45
torgomaticwolsen: the unpatched DNS, or the threadpool stuff?21:46
wolsentorgomatic, was thinking the unpatched dns... threadpool around cname middleware would be proxy server I believe... thinking the unpatched dns goes for all21:47
torgomaticwolsen: yep... putting the patch in utils like that should get everyone, though21:47
wolsentorgomatic, yeah but not sure where/which config file to pull it from :)21:48
wolsenas it gets patched in prior to utils intentionally21:48
wolsenanyways, thanks for your help torgomatic - really appreciate it21:51
*** cutforth has quit IRC21:52
pelusetorgomatic, awesome replies to my questions on your GET splice patch - no followups if you can believe it :)21:55
torgomaticpeluse: :)21:55
pelusetorgomatic, re-running probtests now on the latest - my SAIO VM was a little off so I'm on a new one now, almost done w/no failures so far21:56
*** m|k|e has quit IRC21:58
mattoliverauMorning all21:58
*** tgohad has quit IRC21:58
*** tongli has quit IRC21:59
pelusehowdy21:59
mattoliveraupeluse: I have a heat template that builds an saio in the rackspace cloud if that helps.21:59
pelusemattoliverau, that's kinda cool but I like old school ground and pound or I forget how all this shit is stitched together :)22:00
mattoliverauLol, fair enough. :)22:01
torgomaticThe head bone's connected to the neck bone22:02
torgomaticThe neck bone's connected to the shoulder bone22:02
torgomaticThe shoulder bone's connected to... mysql? Who thought this was a good idea?22:02
peluseheh, sometimes I'm not sure what's connected to my neck bone - not always my brain though!22:03
pelusemattoliverau, so we were chatting about LCA a long time ago - my talk was accepted however I'm burning my travel budget for Q4 and Q1 on Paris so I can't go :(22:04
pelusemattoliverau, we're asking them if they'd be cool with notmyname covering it though which would still be way cool (but I really wanted to check out NZ)22:05
mattoliverauDamn! That's sux, wouldn't been awesome to have you down this way :(22:05
mattoliverauWould've22:05
mattoliverauDamn auto correct22:06
mattoliveraupeluse: but at least you get to come to Paris. I guess the downside is the summit closest to LCA each year is fairly close to the not american ones.22:08
mattoliverauWere you speaking in the OpenStack mini conference or the main one? I work with both chairs of the paper committee and one of the organisors of the OpenStack miniconf. So can push anything you need.22:10
*** erlon has quit IRC22:10
*** tsg has joined #openstack-swift22:12
*** occupant has quit IRC22:15
*** foexle_ has quit IRC22:26
openstackgerritpaul luse proposed a change to openstack/swift-specs: Updates to EC Design Spec  https://review.openstack.org/11648622:27
peluse^ slightly updated picture in the concurrency section for the reconstructor22:28
pelusemattoliverau, was that last question for me?22:29
*** stonemessenger has joined #openstack-swift22:38
mattoliverauYup, sorry22:39
*** stonemessenger has quit IRC22:40
*** tsg has quit IRC22:42
tab__is there any swift middleware that would move older data to slower storage and keep all new data or hot data (that is read a lot) on faster storage, by way of cache-tiering?22:50
*** marcusvrn has quit IRC22:51
notmynamemattoliverau: peluse's talk was for the main conference. I don't think miniconfs have opened their CFP yet22:53
notmynametab__: no. but....22:53
notmynametab__: so we have storage policies now. and you can have a storage policy for each tier of your storage. and I'd expect that people will write daemons that do move data to different tiers based on some condition22:54
tab__notmyname: ok thx. So that is still work to do :)22:55
*** antelopebyzantiu has joined #openstack-swift22:55
notmynametab__: yes. but it's not work that I expect to be in upstream swift22:55
*** antelopebyzantiu has quit IRC22:56
*** dmsimard is now known as dmsimard_away22:56
*** geaaru has joined #openstack-swift22:57
tab__notmyname: yes, understood.22:57
MooingLemurstorage policies are only granular to the container level though, right?22:58
MooingLemurso it's not as if you can transparently shuffle old objects to slower disks23:00
notmynameMooingLemur: correct. they are set at the container level23:00
notmynameMooingLemur: yes, but you could create a static manifest in the original tier to point to the slower one. the manifest is small, and the url wouldn't change that way23:01
MooingLemuraha, right23:01
torgomaticalso note that middleware is the worst place to shuffle data between tiers; if you do it in a daemon, then clients don't have to wait for it23:01
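(A very rough sketch of the daemon-plus-manifest approach notmyname and torgomatic describe: copy a cold object into a container whose storage policy maps to the slow tier, then replace the original with a static large object manifest pointing at the copy, so the client-facing URL never changes. The helper and container names are assumptions, and it presumes the slo middleware is in the proxy pipeline.)

    import json
    from swiftclient import client as swift_client

    def demote(url, token, container, name, cold_container):
        # copy the object into the slow-tier container...
        hdrs, body = swift_client.get_object(url, token, container, name)
        swift_client.put_object(url, token, cold_container, name, contents=body)

        # ...then overwrite the original with a small SLO manifest pointing at
        # the copy, so reads through the original path still work.
        manifest = [{"path": "/%s/%s" % (cold_container, name),
                     "etag": hdrs["etag"],
                     "size_bytes": int(hdrs["content-length"])}]
        swift_client.put_object(url, token, container, name,
                                contents=json.dumps(manifest),
                                query_string="multipart-manifest=put")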
notmynametorgomatic: peluse: are you ok with landing the living EC spec changes?23:27
notmynametorgomatic: peluse: https://review.openstack.org/#/c/116486/23:27
torgomaticnotmyname: sure23:27
notmynametorgomatic: ok. I'll do it23:27
openstackgerritA change was merged to openstack/swift-specs: Updates to EC Design Spec  https://review.openstack.org/11648623:29
notmynameoh that was fast23:32
torgomaticremember when code review was slow because of the humans?23:35
openstackgerritJohn Dickinson proposed a change to openstack/swift: make the bind_port config setting required  https://review.openstack.org/11820023:49
notmynameclayg: I think that's what you were looking for? ^^23:49
claygnotmyname: maybe, i'm pretty hard to make happy23:49
notmynamelol23:50
claygnotmyname: heh, is there supposed to be a sys.exit in there after that logger.error - or does it like fall back to the old defaults?23:52
notmynamedoh!23:53
notmynamesee I knew there was something to be unhappy with :-)23:53
notmynameclayg: actually, looks like it should be "return 1"23:54
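(Not the patch under review, just the shape of the check being discussed: fail startup with a non-zero status when bind_port is missing instead of falling back to a default. The option is read from the config's DEFAULT section as an assumption.)

    import sys
    try:
        from configparser import ConfigParser        # py3
    except ImportError:
        from ConfigParser import ConfigParser        # py2, which Swift targeted then

    def check_bind_port(conf_path):
        cp = ConfigParser()
        if not cp.read(conf_path):
            sys.stderr.write("Unable to read config file %s\n" % conf_path)
            return 1
        if not cp.defaults().get('bind_port'):
            sys.stderr.write("bind_port wasn't found in %s\n" % conf_path)
            return 1   # the "return 1" clayg and notmyname settle on above
        return 0

    if __name__ == '__main__':
        sys.exit(check_bind_port('/etc/swift/proxy-server.conf'))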
openstackgerritJohn Dickinson proposed a change to openstack/swift: make the bind_port config setting required  https://review.openstack.org/11820023:55
notmynamethere23:55
claygHEH23:55

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!