Wednesday, 2014-09-03

taras_i sent email titled 'Observations re swift-container usage of SQLite', not subscribed to list, we'll see if it makes it00:03
notmynametaras_: hi00:05
* notmyname just read all the scrollback00:06
notmynametaras_: so, you are wondering about db replication00:06
notmynameit's more than just "rsync the DBs". db replication only falls back to moving the entire file if the replicas are really different00:08
taras_notmyname: yes00:08
taras_i'm gonna read the code later00:08
notmynameotherwise it only moves rows. and torgomatic pointed out the recent improvement for bulk inserts00:08
taras_oh, so it doesn't use rsync at all?00:08
notmynameno it does00:08
notmynamejust not as the default :-)00:09
taras_:)00:09
notmynamewell, not as the normal case00:09
taras_seems like an interesting problem00:09
taras_so stupid question00:09
taras_why this design00:09
taras_vs something like a clustered mysql?00:09
notmynameand IIRC swift writes everything to a local .pending file. then on db operations it flushes those to disk. like our own WAL00:09
taras_eg 1 master + 2 hot standbys00:09
taras_notmyname: yup, i'd advocate replacing sqlite with that .pending file altogether00:10
taras_in the long term00:10
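The .pending buffering notmyname describes can be sketched roughly like this. This is a simplified illustration only, not Swift's actual implementation (which lives in swift/common/db.py); it assumes ':'-separated, base64-encoded pickled records appended to the file:

```python
import base64
import pickle

def append_pending(pending_path, record):
    # Append one record; ':'-separated base64 keeps records byte-safe
    # so a crash mid-append can't corrupt earlier entries' framing.
    with open(pending_path, "ab") as fp:
        fp.write(b":" + base64.b64encode(pickle.dumps(record)))

def read_pending(pending_path):
    # Decode all buffered records so they can be merged into
    # the SQLite db in one bulk operation later.
    with open(pending_path, "rb") as fp:
        data = fp.read()
    return [pickle.loads(base64.b64decode(chunk))
            for chunk in data.split(b":") if chunk]
```

The point of the buffer is that object PUTs only pay for an append, and the cost of touching SQLite is amortized over many rows.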
notmynameinteresting00:10
*** tkay has quit IRC00:10
notmynameone thing we definitely want to do (and have talked about for years) is manage some container sharding to better support containers with high cardinality00:11
notmynameoriginally swift replaced a system that had a large, sharded postgres DB for storing the data placement (similar design to mogileFS)00:11
taras_infinite amounts of metadata with varying degrees of warmness is a very interesting problem :)00:11
taras_notmyname: was postgres storing data too?00:11
taras_or just metadata00:12
notmynameno, just the metadata00:12
taras_interesting00:12
notmynameswift's design with sqlite was chosen because it's simple and good enough. it's well-proven code that allows for the functionality needed (listings) and also has the advantage of being a db library so the data can be managed like files if needed (see rsync or other replication primitives)00:13
taras_sqlite is very good at being simple and robust00:13
notmynameso when there are failures in the cluster, it's easy to recreate the db replicas00:13
notmynameand we don't have to deal with different ha/durability patterns. basically everything is treated pretty much the same way00:14
notmynamethat is, eventually consistent replicas00:14
taras_i definitely appreciate your approach00:14
notmynamegives a lot of flexibility and robustness to the overall system00:14
taras_in that one can skim the code without going crazy00:14
notmyname:-)00:14
notmyname(i just saw your email to the ML)00:15
notmyname...reading00:15
notmynametaras_: " lack of index for LIST" I don't think that's true00:16
notmynamethere's an index on (deleted, name) so we can select the name and easily filter out the deleted ones00:17
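That (deleted, name) index can be seen at work with a minimal sketch; this uses a simplified column list, not the full schema from swift/container/backend.py:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Simplified version of the container 'objects' table
    CREATE TABLE objects (
        name TEXT, created_at TEXT, size INTEGER,
        content_type TEXT, etag TEXT, deleted INTEGER DEFAULT 0
    );
    CREATE INDEX ix_object_deleted_name ON objects (deleted, name);
""")
# A listing query filters out deleted rows and orders by name,
# which the (deleted, name) composite index satisfies directly.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT name FROM objects WHERE deleted = 0 ORDER BY name
""").fetchall()
print(plan)
```

The query plan shows SQLite searching ix_object_deleted_name rather than scanning the table, and since the index covers both columns in the SELECT, no table lookup is needed at all.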
taras_my sqlite failed me there00:18
notmynametaras_: also, note that almost every production swift cluster ends up using flash for the account and container storage. I'm not sure if that would change any of your views, but it's something important to know00:18
taras_notmyname: i noticed that00:18
taras_notmyname: i don't have any particular views, i certainly appreciate good-enough engineering00:19
notmynamereason being, buying a few SSDs is vastly cheaper than spending the engineering time to figure out how to effectively shard containers00:19
taras_i just have experience dealing with perf issues caused by similar patterns, was wondering if it's of any use to other projects00:19
taras_eg your index + timestamp format is likely to effectively double your db size00:20
*** erlon has quit IRC00:20
notmynametaras_: https://github.com/openstack/swift/blob/master/swift/container/backend.py#L18700:20
taras_which causes more cold io, etc00:20
notmynamewhat do you mean by index + timestamp format00:20
taras_yeah i mentioned that, just didn't think it through :(00:20
taras_notmyname: so as i understand, main piece of data that you store is key + some metadata00:21
taras_eg timestamp, etag, etc00:21
taras_you are likely to have paths in the keys00:21
taras_eg long-ish keys00:21
taras_so with that index not using a hash function, you double your db size on disk00:22
notmynamelet's be specific so we know what's going on :-)00:22
notmynamecontainer DBs00:22
notmyname2 tables: stat and objects (I'm ignoring the policy one for now)00:22
notmynamestat contains stuff about the container itself. generally just one row00:22
notmynamebut there is one row per object in the objects table00:23
notmynameand that has a few columns (5? /me goes to check)00:23
taras_right00:23
notmynamename, content type, etag, and size are the ones set by the user00:24
notmynamename can be long, as you mentioned. up to the name limit in the cluster (default is 1024)00:24
notmynamecontent type could be long, I guess, since that's ultimately just user-set data. so up to whatever the header max is. 8k IIRC00:25
taras_so INSERT into objects -> append rowid,name,created_at,size,content_type,etag,deleted,storage_policy_index; append deleted,ix_object_deleted_name00:25
*** dmorita has joined #openstack-swift00:25
notmynameand ya, you're right that the timestamp is stored as TEXT00:25
notmynameand the timestamp value will always come from https://github.com/openstack/swift/blob/master/swift/common/utils.py#L68400:26
notmynameie a normalized 10.5 string00:26
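That "10.5" form pads the timestamp to a fixed 16 characters (10 digits, a dot, 5 decimal places), so lexicographic comparison of the TEXT values matches numeric ordering. A quick illustration, matching the format of the linked utils.py:

```python
def normalize_timestamp(timestamp):
    # Zero-padded to 16 chars total ("%016.5f"): 10 integer digits,
    # the decimal point, and 5 fractional digits, so string order
    # equals numeric order.
    return "%016.5f" % float(timestamp)

print(normalize_timestamp(1409700000.123))  # '1409700000.12300'
print(normalize_timestamp(7))               # '0000000007.00000'
```

The fixed width is also what taras_ is pointing at: every row carries a 16-byte TEXT timestamp regardless of precision actually needed.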
taras_so you end up with something like <page for objects><page for objects><page for ix_object_deleted_name>00:26
taras_so scanning the index is very likely to scan the whole db00:27
taras_sorry, not very likely00:27
taras_i'm actually not sure how likely it would be00:28
notmynameah, ok. had me confused for a second00:28
notmynameok. so we don't have any idea now ;-)00:28
pelusesomewhat likely? :)00:28
taras_but you are very likely00:28
notmynameit's going to read something into memory.00:28
taras_to have doubled your storage00:28
notmynamedoubled? from what?00:29
taras_from not having that index00:29
notmynameon00:29
notmynamean index on the timestamp?00:29
taras_eg if you have a db handy, can do vacuum..drop index ix_object_deleted_name and see how much overhead it took up00:30
taras_notmyname: sorry, 2 issues: big timestamp and big index00:30
notmynameI don't have any prod DBs handy right now00:31
taras_i'm gonna bike home, back in a few hours00:33
notmynametaras_: ok. I'll be in and out for the rest of the evening myself00:33
notmynametaras_: I definitely want to talk more about all this00:34
notmynameand maybe if we're lucky redbo can join in too00:34
taras_i appreciate it :)00:34
redbosays everyone all the time00:35
notmynametaras_: also, if you visit the mothership in SF (I'm not sure how mozilla is structured) I'd be happy to chat in person too00:35
notmynameredbo: all the things?00:35
taras_i'm leaving moz00:35
taras_moving to MV00:35
notmynamewell ok then00:35
taras_would be great to chat00:36
taras_err to clarify:not looking for a job. gotta run now00:36
redboI didn't read through all that, but looked at the doc.  I've always meant to see how much caching db connections in an LRU would help, would love to see someone benchmark it.00:40
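redbo's LRU-of-db-connections idea could look something like this; a hypothetical sketch keyed on database path, not anything that exists in Swift:

```python
import sqlite3
from collections import OrderedDict

class ConnCache:
    # Keep at most `maxsize` SQLite connections open, evicting the
    # least recently used, so hot container DBs skip the open cost.
    def __init__(self, maxsize=8):
        self.maxsize = maxsize
        self.conns = OrderedDict()

    def get(self, path):
        conn = self.conns.pop(path, None)
        if conn is None:
            conn = sqlite3.connect(path)
        self.conns[path] = conn          # mark most recently used
        while len(self.conns) > self.maxsize:
            _, old = self.conns.popitem(last=False)
            old.close()                  # evict least recently used
        return conn
```

Benchmarking would need to account for the page cache already hiding much of the open cost, which is presumably why redbo wants numbers before believing in it.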
*** shri1 has quit IRC00:41
openstackgerritSamuel Merritt proposed a change to openstack/swift: Zero-copy object-server GET responses with splice()  https://review.openstack.org/10260900:56
*** marcusvrn has quit IRC01:01
*** mwstorer has quit IRC01:03
*** erlon has joined #openstack-swift01:04
*** gyee has quit IRC01:04
*** lnxnut has joined #openstack-swift01:06
*** tongli has quit IRC01:06
openstackgerritYuan Zhou proposed a change to openstack/swift: Fix delete versioning objects when previous is expired  https://review.openstack.org/8820401:10
*** lnxnut has quit IRC01:16
*** tab_ has quit IRC01:30
*** addnull has joined #openstack-swift01:40
*** nosnos has joined #openstack-swift01:52
*** haomaiwang has quit IRC02:02
*** 18VAATCEN has joined #openstack-swift02:03
*** tgohad has quit IRC02:15
*** nosnos has quit IRC02:16
*** haomaiwa_ has joined #openstack-swift02:19
*** bill_az_ has quit IRC02:19
*** 18VAATCEN has quit IRC02:22
*** config has joined #openstack-swift02:23
configHi02:23
configIs there a way to customize the auditor service for each service: account, container, and object?02:23
configBasically, to configure it in different ways?02:23
*** addnull has quit IRC02:29
*** addnull has joined #openstack-swift02:29
redbodata point: I created a 3M object container db (496MB) and did a 10,000 object GET from the middle (cache flushed).  It read 89.6MB to do that GET and took 1:30 (on my crappy VM).  After vacuuming, the same GET read 93.6MB and took 1:14.02:32
*** addnull has quit IRC02:34
*** bkopilov has quit IRC02:35
*** haomaiwa_ has quit IRC02:37
*** haomaiwang has joined #openstack-swift02:37
portanteredbo: read more data but went faster?02:37
portanteam I missing something?02:37
redboyeah, after a vacuum the data is probably more contiguous.  At least the index.02:38
*** addnull has joined #openstack-swift02:38
redboit wasn't using much CPU, so obviously it did a lot of seeks in there if it took a minute and a half to read 90MB.  It can dd 44 MB/s.02:39
torgomaticredbo: how are you measuring io?02:40
redbogrep read_bytes /proc/<PID>/io02:40
torgomaticthanks02:42
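redbo's measurement reads the kernel's per-process I/O accounting. A small helper for pulling read_bytes out of /proc/&lt;pid&gt;/io text (Linux-only; the sample content below is illustrative):

```python
def parse_read_bytes(io_text):
    # /proc/<pid>/io is "key: value" lines; read_bytes counts bytes
    # actually fetched from the storage layer, so page-cache hits
    # don't inflate it the way rchar/read() counts would.
    for line in io_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "read_bytes":
            return int(value)
    raise ValueError("no read_bytes field")

sample = "rchar: 3741\nwchar: 0\nread_bytes: 94003200\nwrite_bytes: 0\n"
print(parse_read_bytes(sample))  # 94003200
```

That read_bytes/rchar distinction is the same one taras_ makes below: bytes read from disk, not read() calls.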
redboha.. doing the fadvise(WILLNEED) drops the whole thing to 10 seconds.  Wouldn't work on really big containers, though.02:48
configDoes anyone know whether the Auditor service is configurable for the account, container, objects?02:50
taras_yeah fadvise is awesome for smaller stuff02:53
taras_the thing to do is to trace number of bytes read02:53
taras_when using index or not using it02:53
taras_bytes read from disk..not read() calls02:53
taras_i'm gonna try a bunch of stuff on wed02:54
taras_redbo: if you use fadvise, you can also tell sqlite to use the mmap backend02:56
redbointeresting.  without the index, that query took 27 seconds and read 353.7 MB.02:56
taras_then it's really fast02:56
taras_mmap is a perf hit otherwise02:56
taras_redbo: 27 vs 10s orvs 1:30?02:58
redbo10s with precache (including caching), 27 with no index and no cache, 1:30 with index and no cache.02:59
redbo18s with no index and mmap and no precache.02:59
taras_sounds like that index hurts03:00
taras_redbo: mind sharing that db?03:00
redbooh but this is after vacuuming.  I should have saved it and done that on a copy.03:02
redboyeah, let me recreate it03:02
taras_heh, if you are outrunning an index post-vacuum, it's not gonna be better pre-vacuum :)03:05
redboyeah.  I just want a better simulation of real life.03:05
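The fadvise(WILLNEED) preload redbo benchmarked is exposed in Python as os.posix_fadvise (Linux); a minimal sketch of warming a database file before querying it:

```python
import os

def preload(path):
    # Ask the kernel to start reading the whole file into the page
    # cache ahead of time; offset 0 with length 0 means "entire file".
    # Linux-only: os.posix_fadvise is not available everywhere.
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)
```

As redbo notes, this only helps while the file fits in memory; on a really big container db the advice just churns the cache.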
configHey, I noticed that nobody answered my question about the Auditor, which is totally all right, but was just wondering if that's because it's just not an interesting topic to talk about, or if it's because the answer is no, you can't configure it?03:06
config:)03:06
portantepeluse: I'll look to see what I have around for using the in-memory object server for storage policy based functional tests03:07
redboconfig: I didn't know what you meant, but I'm pretty sure the answer is no.03:07
confighttps://swiftstack.com/openstack-swift/architecture/03:08
configsubsection: Auditors03:08
configunder Consistency Services03:08
redbohow would you want to configure it differently?03:09
configjust as an example, say, maybe you want it to run less frequently, or more frequently?03:09
*** erlon has quit IRC03:10
configthat's just an example03:10
redbooh.  that you can probably do.  But it's a little bit scattershot.  The object auditor supports rate limiting, the others I think you can only tune with concurrency and how long they wait between passes.03:11
configDo you know how this is done?03:11
redboif you look for example at the [object-replicator] section of https://github.com/openstack/swift/blob/master/etc/object-server.conf-sample03:13
zaitcevdaaaarn I was just going to paste that03:14
redboconfig: you can see there's a concurrency setting, and "run_pause" which is how long it waits between passes03:14
zaitcevI think it may have a delay between passes, I know I set it03:14
redbowell you want it set to something, or it'll just sit and spin when your cluster is empty :)03:14
zaitcevFor containers it's called "interval".03:15
redboalso if you look at [object-auditor] in that same file, you'll see it has bytes_per_second and files_per_second03:16
redbooh yeah, interval.  we should call it one thing everywhere.03:16
mattoliverauActually if I remember correctly, I think you can use run_pause or interval, they set the same variable in the code.03:21
configWow, thanks!!03:21
mattoliveraufor the container03:21
configI didn't even know that these samples were available :)03:22
configI'll probably be looking over them, which will probably lead to more questions later on.03:22
configI will* be looking over them03:22
configI think this Auditor issue might just be the beginning.03:23
config:)03:23
redboha.. yes, db and container replicator will support run_pause or interval.  but object replicator will only support run_pause and container updater will only support interval.03:24
redboer account and container replicator03:25
configSo it seems that there is a lot that you can do with not just the Auditor, but with the other services as well.03:25
redbosomeone should make it support interval everywhere, and both where it used to be run_pause.  sounds like it could be one of those low-hanging fruit thingies for new people.03:29
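The knobs being discussed look roughly like this in the daemon configs. Values here are illustrative only, and (as noted above) which of run_pause/interval a given daemon accepts varies:

```ini
[object-auditor]
# rate limits on the audit scan
files_per_second = 20
bytes_per_second = 10000000

[object-replicator]
# seconds between passes; the object replicator reads run_pause
run_pause = 30

[container-auditor]
# the container-side daemons call the same idea "interval"
interval = 300
```

Without some pause between passes, a daemon will just spin continuously on an empty cluster, which is redbo's point above.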
mattoliverauactually I lie, it is the account and container replicators where interval and run_pause is loaded into the same variable.03:29
*** kopparam has joined #openstack-swift03:35
*** kopparam has quit IRC03:35
*** kopparam has joined #openstack-swift03:36
*** kopparam has quit IRC03:40
portanteredbo, notmyname, torgomatic, clayg, _others_: did you see the article I posted earlier?03:48
torgomaticNope03:56
portanteit talks about the need to fsync the directory to persist the temp file name and the rename to the target file03:59
portanteobject server PUT operation03:59
portantetorgomatic: shall I repost the article?03:59
portanteor is it not interesting?04:00
torgomaticIf you don't mind, I'd appreciate it.04:00
*** bkopilov has joined #openstack-swift04:00
portantehttp://lwn.net/Articles/457667/04:01
portantethe code example which matches the PUT operation in swift is: http://lwn.net/Articles/457672/04:01
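The pattern from the linked article (write a temp file, fsync it, rename it into place, then fsync the containing directory so the rename itself is durable) looks roughly like this in Python. A generic sketch, not Swift's actual diskfile code:

```python
import os

def durable_replace(dirpath, name, data):
    # Write to a temp name first so readers never see a partial file.
    tmp = os.path.join(dirpath, name + ".tmp")
    final = os.path.join(dirpath, name)
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)          # persist the file contents
    finally:
        os.close(fd)
    os.rename(tmp, final)     # atomic within one filesystem
    dfd = os.open(dirpath, os.O_RDONLY)
    try:
        os.fsync(dfd)         # persist the directory entry / rename
    finally:
        os.close(dfd)
```

Skipping the final directory fsync is exactly the gap under discussion: after a crash the rename may not have been recorded, leaving the old directory contents in place.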
torgomaticHuh. I wonder if XFS needs the directory to be fsynced or not.04:43
redboit should probably be in there.  but from what I've seen, xfs is pretty protective of its metadata.  file data not so much.04:44
*** kopparam has joined #openstack-swift04:44
torgomaticIn other words, is Swift behavior 100% safe on the most common deployment and could be enhanced, or is it busted in a rare case?04:45
portantefrom what I heard today from the file system guys, a system failure might cause the newly created object to disappear, where the old directory contents are still on disk04:49
portanteso the old object is in place04:49
portanteso i think it would make the reconciler work harder, but still might be busted in a rare case04:50
*** nosnos has joined #openstack-swift04:50
portantetorgomatic: ^04:50
portanterare case being all replicas fail in the same way such that the object was reported as persisted, but old directory contents are in play when replicas come back online04:51
portanteI might not be thinking about this correctly this late at night, so just going to bed ...04:52
torgomaticportante: interesting... sounds like it'll take quite the coincidence for this to do any damage.04:52
torgomaticI wouldn't lose sleep over it. :)04:52
portantemy threads background causes me to lose sleep over these kinds of "rare" conditions which all too often showed up with folks pointing fingers at the library that did not handle the condition properly04:53
redboWhat I really want is filesystem write ordering.  So if I update the hashes.pkl after the object is moved into place, I'd lose either the hashes.pkl update or both.  Either way the replicator can fix it from a good copy.04:54
redboand I wouldn't worry about fsyncing in my fantasy world04:54
portantemaybe just nightmares before I actually go to sleep ...04:54
config+portante: what "file system guys" are you referring to?04:55
config+portante: in your comment at xx:4904:55
config+portante: [xx:49]04:55
configIs anyone able to answer that?04:57
configThanks04:57
torgomaticI know he works for red hat and they employ lots of kernel developers, but that's as much as I know04:58
portanteric wheeler04:59
portantejeff moyer is a co-worker across the hall from me04:59
*** ppai has joined #openstack-swift04:59
config+portante: Which file system are they working on?04:59
portantethey work on many05:00
config+portante: and are they primarily working with the C language?05:00
portanteI would think so, given their kernel background05:01
redboit looks like with this database, doing fadvise to preload the database and keeping the index wins.  Though everything is faster than what we do now.  http://paste.openstack.org/show/105041/05:01
redboI'll try to grab a bigger database tomorrow and try it out.05:01
config+portante: Do they work with distributed file systems as well, say NFS, and maybe Swift and Ceph?05:02
redboAnd maybe try it with concurrency to multiple databases..  It is reading more data when prefetching or not using an index.  If it's forced to do so from multiple files, it might degrade to worse than the current method.05:07
*** chandankumar has joined #openstack-swift05:13
redboand none of that may hold on SSDs where the speed tradeoff for seeks/reading contiguous data is different.05:30
*** tsg has joined #openstack-swift05:32
*** zaitcev has quit IRC05:39
*** echevemaster has quit IRC05:49
*** kopparam has quit IRC05:53
*** kopparam has joined #openstack-swift05:54
*** kopparam has quit IRC05:58
*** kopparam has joined #openstack-swift06:03
*** nshaikh has joined #openstack-swift06:05
kopparamHello! How can I use glance to use S3 to create/store/delete images?06:08
*** config has quit IRC06:10
*** haomaiwang has quit IRC06:12
*** haomaiwang has joined #openstack-swift06:12
*** k4n0 has joined #openstack-swift06:18
*** Anju has joined #openstack-swift06:25
*** haomai___ has joined #openstack-swift06:28
*** occupant has quit IRC06:29
*** haomaiwang has quit IRC06:32
*** foexle has joined #openstack-swift06:56
*** geaaru has joined #openstack-swift07:00
*** kopparam has quit IRC07:05
*** kopparam has joined #openstack-swift07:06
*** chandankumar has quit IRC07:07
*** kopparam has quit IRC07:11
*** chandan_kumar has joined #openstack-swift07:13
*** geaaru has quit IRC07:22
*** geaaru has joined #openstack-swift07:26
openstackgerritA change was merged to openstack/swift: Merge master to feature/ec  https://review.openstack.org/11833107:26
*** chandan_kumar has quit IRC07:33
*** occupant has joined #openstack-swift07:35
*** occupant has quit IRC07:40
*** tsg has quit IRC07:42
*** chandan_kumar has joined #openstack-swift07:47
*** bvandenh has joined #openstack-swift08:00
mattoliverautime to call it a night, night all! (or good day to some)08:00
*** ttrumm has joined #openstack-swift08:05
*** joeljwright has joined #openstack-swift08:15
*** chandan_kumar has quit IRC08:29
*** kopparam has joined #openstack-swift08:31
*** homegrown has joined #openstack-swift08:56
homegrownI have a long running upload script to populate swift, but the auth-token expires. Can i give some service accounts longer auth-tokens?09:03
joeljwrighthomegrown: rather than extending the auth-tokens can you not re-authenticate in a similar way to the python-swiftclient?09:12
*** aix has joined #openstack-swift09:13
joeljwrighthomegrown: using keystone auth the swiftclient attempts to reauthenticate when a token expires09:18
homegrown joeljwright: will look into it, thanks09:19
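joeljwright's suggestion, re-authenticating on expiry rather than extending token lifetime, can be sketched generically. The callables here (do_request, get_token, AuthExpired) are placeholders for illustration, not swiftclient's API; swiftclient's Connection does this retry internally:

```python
class AuthExpired(Exception):
    """Raised by do_request when the server rejects a stale token."""

def request_with_reauth(do_request, get_token, token=None):
    # Try once with the current token; on expiry, fetch a fresh token
    # and retry a single time (mirroring swiftclient's behavior of
    # reauthenticating instead of failing the upload).
    if token is None:
        token = get_token()
    try:
        return do_request(token), token
    except AuthExpired:
        token = get_token()
        return do_request(token), token
```

For a long-running populate script, the practical takeaway is to route every request through a wrapper like this instead of caching one token for the whole run.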
*** mkollaro has joined #openstack-swift09:21
*** occupant has joined #openstack-swift09:36
*** dmorita has quit IRC09:37
*** occupant has quit IRC09:41
openstackgerritLorcan Browne proposed a change to openstack/swift: Add "--no-overlap" option to swift-dispersion populate  https://review.openstack.org/11841109:46
*** kopparam has quit IRC10:11
*** kopparam has joined #openstack-swift10:11
*** kopparam has quit IRC10:12
*** kopparam has joined #openstack-swift10:12
*** haomai___ has quit IRC10:26
*** bvandenh has quit IRC10:44
*** bvandenh has joined #openstack-swift10:45
*** DisneyRicky_ has quit IRC11:03
*** miqui has quit IRC11:05
*** ppai has quit IRC11:06
*** DisneyRicky has joined #openstack-swift11:09
*** ppai has joined #openstack-swift11:17
*** kopparam has quit IRC11:27
*** kopparam has joined #openstack-swift11:28
*** occupant has joined #openstack-swift11:37
*** occupant has quit IRC11:42
*** kopparam has quit IRC11:42
*** kopparam has joined #openstack-swift11:43
*** kopparam has quit IRC11:47
*** kopparam has joined #openstack-swift11:50
*** ppai has quit IRC11:51
*** erlon has joined #openstack-swift12:03
*** HenryG is now known as HenryG_afk12:04
*** ppai has joined #openstack-swift12:04
*** kopparam has quit IRC12:08
*** kopparam has joined #openstack-swift12:09
*** kopparam has quit IRC12:14
peluseportante, cool thanks.  I started one too but have no idea what I did with the code, so I can start from scratch again if needed, but yeah, if you already did some work I'd be happy to carry it forward :)  Let me know...12:14
*** nosnos has quit IRC12:16
*** nosnos has joined #openstack-swift12:17
*** nosnos has quit IRC12:21
*** miqui has joined #openstack-swift12:25
*** judd7 has joined #openstack-swift12:49
*** igor has joined #openstack-swift12:52
*** igor has quit IRC12:52
*** igor has joined #openstack-swift12:53
*** marcusvrn has joined #openstack-swift12:55
*** igor has quit IRC12:56
*** aix has quit IRC12:58
*** bill_az_ has joined #openstack-swift13:01
*** sandywalsh has joined #openstack-swift13:01
*** aix has joined #openstack-swift13:01
*** bkopilov has quit IRC13:04
*** tongli has joined #openstack-swift13:20
*** echevemaster has joined #openstack-swift13:26
*** tdasilva has joined #openstack-swift13:38
*** occupant has joined #openstack-swift13:38
*** occupant has quit IRC13:43
*** ppai has quit IRC13:46
*** ttrumm_ has joined #openstack-swift13:54
*** ttrumm has quit IRC13:58
*** ttrumm_ has quit IRC14:05
openstackgerritA change was merged to openstack/swift: Only bind SAIO daemons to localhost  https://review.openstack.org/11819714:08
*** addnull has quit IRC14:10
*** annegent_ has joined #openstack-swift14:17
*** nshaikh has quit IRC14:18
*** dmsimard_away is now known as dmsimard14:25
*** HenryG_afk is now known as HenryG14:36
*** Anju has quit IRC14:41
*** lpabon has quit IRC14:46
*** lpabon has joined #openstack-swift14:48
*** tsg has joined #openstack-swift14:51
*** mahatic has joined #openstack-swift15:15
*** bgmccollum_ is now known as bgmccollum15:19
*** mwstorer has joined #openstack-swift15:21
*** mahatic has quit IRC15:26
*** mahatic has joined #openstack-swift15:26
*** aix has quit IRC15:29
*** judd7 has quit IRC15:34
*** judd7 has joined #openstack-swift15:34
*** annegent_ has quit IRC15:39
*** judd7 has quit IRC15:39
*** occupant has joined #openstack-swift15:39
*** occupant has quit IRC15:44
*** zaitcev has joined #openstack-swift15:47
*** ChanServ sets mode: +v zaitcev15:47
*** aix has joined #openstack-swift15:48
*** bvandenh has quit IRC15:54
*** foexle has quit IRC15:56
*** mahatic has quit IRC16:01
*** evanjfraser has joined #openstack-swift16:04
*** cschwede has joined #openstack-swift16:04
*** homegrown has left #openstack-swift16:04
notmynamegood morning16:05
*** goodes_ has joined #openstack-swift16:07
*** dosaboy_ has joined #openstack-swift16:07
*** ondergetekende_ has joined #openstack-swift16:08
*** theanalyst has quit IRC16:08
*** cschwede_ has quit IRC16:08
*** ondergetekende has quit IRC16:08
*** k4n0 has quit IRC16:08
*** otoolee- has quit IRC16:08
*** sileht has quit IRC16:08
*** goodes has quit IRC16:08
*** peluse has quit IRC16:08
*** evanjfraser_ has quit IRC16:08
*** StevenK has quit IRC16:08
*** DisneyRicky has quit IRC16:08
*** ujjain2 has quit IRC16:08
*** acoles has quit IRC16:08
*** dosaboy has quit IRC16:08
*** k4n0_ has joined #openstack-swift16:08
*** theanalyst has joined #openstack-swift16:09
*** ujjain has joined #openstack-swift16:09
*** goodes_ is now known as goodes16:09
notmynameseems that I came down with a cold yesterday :-(16:10
*** sileht has joined #openstack-swift16:11
*** Anju_ has joined #openstack-swift16:12
*** StevenK has joined #openstack-swift16:13
*** DisneyRicky has joined #openstack-swift16:14
*** acoles has joined #openstack-swift16:16
*** ChanServ sets mode: +v acoles16:16
tdasilvanotmyname: hope you get better soon...sorta related..how's that collarbone btw?16:16
tdasilvaback to biking yet?16:17
notmynamenot yet16:18
notmynameshoulder is doing well, but still pretty weak16:19
*** gyee has joined #openstack-swift16:20
*** otoolee- has joined #openstack-swift16:20
tdasilvanasty little injury16:21
*** annegent_ has joined #openstack-swift16:25
*** vr1 has joined #openstack-swift16:26
vr1hello16:26
notmynamevr1: hi16:26
vr1is it possible to add an entry point in a non-dev package of swift?16:26
notmynamevr1: what do you mean?16:26
vr1because in devstack or SAIO there is a setup.cfg, if you install from dpkg there is not setup.cfg16:26
vr1and no setup.py16:26
vr1but we need to register a new backend16:27
vr1I don't know if I am clear16:28
notmynamevr1: ya, I think I understand16:29
vr1in short, is it possible to quickly patch the swift redhat package by adding a new backend (without writing our own package)16:29
notmynamevr1: I think the answer is no. you'll need to make your own package to be deployed alongside the red hat packages16:30
*** annegent_ has quit IRC16:30
vr1that's OK thanks16:30
notmynamevr1: or alternatively build your own package with everything in it16:30
vr1ok we'll do16:31
vr1thanks a lot16:31
zaitcevWhy not write your own package though? It's trivial.16:33
zaitcevgit clone from fedora16:34
zaitcevadd your own patches16:34
zaitcevPROFIT16:34
zaitcevIt's all automated just for your convenience already for crissakes. Mind boggles that people still find an excuse not to roll their clone RPMs. Even workers at Oracle learned that trick, surely you can do it too.16:35
zaitcevOr better yet post a patch to Gerrit to support your backend in upstream Swift16:36
*** annegent_ has joined #openstack-swift16:38
vr1need to go, see you later16:38
*** vr1 has quit IRC16:38
*** mahatic has joined #openstack-swift16:40
*** annegent_ has quit IRC16:43
mahatichi notmyname , mattoliverau16:45
notmynamezaitcev: I'd like to see the backend stuff better selectable by a config (eg select a diskfile implementation with a use line in the config file)16:46
notmynamemahatic: hello16:46
*** btorch has joined #openstack-swift16:47
mahaticnotmyname, I realized that my processor needed an upgrade and I had to buy a new personal laptop. So i did that, installed fedora 20, setup Swift SAIO using virt-manager. And poking around it.16:47
notmynamecool16:47
mahaticThat's what i've been doing for the past two weeks.16:47
zaitcevHow does virt-manager relate to SAIO? Is that SAIO within a VM running under said F20?16:48
mahaticzaitcev, yup16:48
mahaticnotmyname, like you describe here, https://swiftstack.com/blog/2013/02/12/swift-for-new-contributors/ : I don't have a scratch to itch :) or rather an immediate pain point16:49
mahaticnotmyname, Can you please suggest a bug that i can get started with? I have been looking into the launchpad, but was wondering if you or anyone else would have any suggestions for a newbie16:50
notmynamemahatic: check the topic :-)16:50
notmynamemahatic: yesterday I added a link with some simple ideas there16:50
zaitcevnotmyname: I edited the /ideas a bit16:52
notmynamezaitcev: great!16:52
notmynamezaitcev: ah, the ring validator. yup16:55
zaitcevnotmyname: maybe too wordy, sorry, feel free to make laconic16:55
notmynamezaitcev: oh, and you don't want auto-reload config files16:55
zaitcevnotmyname: yeah... I'll be willing to be outvoted by people with actual production clusters on this, but as it is I do not.16:56
zaitcevthat's why I asked if you ran these ideas past ops in San Antonio16:56
zaitcevactually, the explicit bind_port too16:57
notmynamezaitcev: no, that wasn't the kind of thing that we were able to discuss in SA16:57
mahaticnotmyname, in multinode install docs, show mounting with a label? I don't quite get that. Sorry to sound naive!16:58
notmynamezaitcev: the feedback I have on the explicit bind_port is that rax and swiftstack both already set it.16:58
zaitcevnotmyname: most of our deployment tools do too, I think.16:58
*** bkopilov has joined #openstack-swift16:59
notmynamemahatic: devices can get re-ordered on reboot if you aren't using an explicit static reference to the device when mounting. either uuid or label. so updating the docs to use a label is something pretty simple to do and can keep people from shooting themselves in the foot17:00
*** tsg has quit IRC17:00
zaitcevmahatic: labels are there so you could pull drives from a node or add drives onto existing controllers. In your VM you can edit the .xml and make /dev/vda /dev/vdb stable17:00
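Mounting by label, as described above, pins a filesystem to its mount point even if the /dev names reorder across reboots. An illustrative fragment (device name, label, and mount point are examples only):

```
# give the filesystem a label when creating it:
#   mkfs.xfs -L d1 /dev/sdb1

# /etc/fstab entry: mount by label rather than by /dev/sdb1
LABEL=d1  /srv/node/d1  xfs  noatime,nodiratime,logbufs=8  0 0
```

If /dev/sdb and /dev/sdc swap after a reboot, a LABEL= (or UUID=) entry still mounts the right disk on the right ring device, which is the foot-shooting the doc change is meant to prevent.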
zaitcevOh17:00
*** tsg has joined #openstack-swift17:02
mahaticnotmyname, Can you suggest any videos/demos that could help me understand more?17:02
mahaticnotmyname, ah okay.17:02
notmynamemahatic: for that one, start with a google search of "mount by label"17:02
openstackgerritDolph Mathews proposed a change to openstack/swift: warn against sorting requirements  https://review.openstack.org/11869417:03
*** tkay has joined #openstack-swift17:04
*** tkay has left #openstack-swift17:04
zaitcevwow17:04
*** tgohad has joined #openstack-swift17:04
*** tsg has quit IRC17:07
notmynamezaitcev: ?17:08
zaitcevnotmyname: sorting of requirements.txt17:08
zaitcevnotmyname: I think it's super weird that order would matter.17:08
notmynameya, I saw some email this morning about it. seem that it's a Big Deal (tm) to people17:08
mahaticnotmyname, sure. I am. And i thought the SAIO set up doc does that? In here: http://docs.openstack.org/developer/swift/development_saio.html#using-a-partition-for-storage17:10
notmynamezaitcev: but I have no objections to adding comments to a requirements file (in fact I had some evil idea about it recently...)17:10
notmynamemahatic: no, doesn't look like it17:14
notmynamebut I think torgomatic doesn't like comments because those are supposed to be machine-readable docs (and machines don't read comments)17:15
mahaticnotmyname, ah yes. It should be in this doc http://docs.openstack.org/developer/swift/howto_installmultinode.html17:18
mahatici believe17:18
notmynamemahatic: yes. start there. but in both is good :-)17:19
*** bill_az_ has quit IRC17:22
*** bill_az_ has joined #openstack-swift17:23
mahaticnotmyname, sure :)17:25
*** geaaru has quit IRC17:29
*** annegent_ has joined #openstack-swift17:38
torgomaticnotmyname: depends on what's in them... a comment like "the real versions that work are X..Y" is something that operators will want to know, but if they don't read the requirements.txt, they won't find out17:39
notmynametorgomatic: right17:40
torgomaticwhereas a comment like "don't mess with the file ordering, dopes" is entirely reasonable, as it will be seen by those who want to go poking at stuff for no reason17:40
*** occupant has joined #openstack-swift17:40
torgomaticor for no *good* reason, at least :)17:40
ahalecomputers don't read comments but computers also don't reorder lists alphabetically for ease of reading17:41
openstackgerritSarvesh Ranjan proposed a change to openstack/swift: Spelling mistakes corrected in comments.  https://review.openstack.org/11870117:41
*** annegent_ has quit IRC17:43
*** occupant has quit IRC17:45
notmynamewow. the gate is 18 hours deep17:46
torgomaticthat seems not good17:48
openstackgerritJohn Dickinson proposed a change to openstack/swift: make the bind_port config setting required  https://review.openstack.org/11820017:49
notmynamewhoops. left some debug print statements ^^17:49
notmynameall gone now17:49
notmynameacoles: if you can ask donagh or others about an explicit port setting, that would be nice. ie do they already set it explicitly (and thus the above patch doesn't affect anything), or are they not setting it and will need to update configs?17:52
*** morganfainberg is now known as morganfainberg_Z17:53
*** mkollaro has quit IRC17:54
*** mahatic has quit IRC17:57
*** aix has quit IRC18:00
*** cutforth has joined #openstack-swift18:03
*** tsg has joined #openstack-swift18:03
*** tgohad has quit IRC18:04
*** occupant has joined #openstack-swift18:05
*** angelastreeter has joined #openstack-swift18:07
notmynamehttps://wiki.openstack.org/wiki/Swift/PriorityReviews updated with an eye toward the openstack integrated release18:09
torgomaticis it a baleful eye?18:10
zaitcev1. Full of deadly or pernicious influence; destructive. 2. Full of grief or sorrow; woeful; sad. [Archaic]18:14
notmynametorgomatic: http://www.theshiznit.co.uk/feature/cant-unsee-christian-bales-eye-wart.php ??18:15
notmynamezaitcev: interesting. I only knew the 2nd definition18:15
*** annegent_ has joined #openstack-swift18:18
*** gvernik has joined #openstack-swift18:31
*** IRTermite has quit IRC18:32
*** mahatic has joined #openstack-swift18:34
*** zul has quit IRC18:43
*** annegent_ has quit IRC18:44
notmynameswift team meeting in 15 minutes18:45
*** zul has joined #openstack-swift18:48
*** angelastreeter has quit IRC18:51
taras_how does one create a container in swift after setting up SAIO18:57
taras_i was using devstack, but that was slow and eventually broke18:57
*** elambert has joined #openstack-swift18:57
notmynametaras_: PUT /v1/<your account>/container18:57
notmynametaras_: also, you may be interested in https://github.com/swiftstack/vagrant-swift-all-in-one18:57
notmynametaras_: the swift CLI can do it too: `swift post new_container_name`18:58
* notmyname doesn't like the "post" there, but *sigh*18:58
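(A minimal sketch of the raw API described above: tempauth hands out a token via GET /auth/v1.0, and creating a container is just a PUT to /v1/&lt;account&gt;/&lt;container&gt;. The helper names are made up for illustration; the endpoint and test:tester/testing credentials are the SAIO defaults from the chat, not something these helpers verify.)

```python
from urllib.parse import urlsplit

def auth_request(user, key):
    # Build (method, path, headers) for the tempauth v1.0 token request.
    # The response carries X-Auth-Token and X-Storage-Url headers.
    return ("GET", "/auth/v1.0",
            {"X-Storage-User": user, "X-Storage-Pass": key})

def create_container_request(storage_url, token, container):
    # Build (method, path, headers) for the container-creating PUT:
    # PUT <storage_url>/<container>, authenticated by the token.
    path = urlsplit(storage_url).path.rstrip("/") + "/" + container
    return ("PUT", path, {"X-Auth-Token": token})
```

These just shape the requests; sending them against a live SAIO (e.g. with http.client or curl) is left out.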
taras_nice18:58
taras_swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing post nc gives me "not found"18:59
*** mkollaro has joined #openstack-swift18:59
notmynametaras_: against devstack?19:00
taras_Container 'nc' not found19:00
taras_against SAIO19:00
*** peluse has joined #openstack-swift19:00
torgomatictry just "swift -A ... post" to make sure the account exists19:00
peluse /msg NickServ identify intel12319:00
taras_ah not found19:01
taras_thanks19:01
notmynamepeluse: oops19:01
taras_fixed it19:01
taras_thanks19:01
notmynametaras_: oh, I think you need to set account_autocreate to true19:01
notmynamewell, "need"19:01
peluseheh19:01
pelusebig secret now out in the open!19:01
notmynamepeluse: (1) I hope that's not your password (2) change it19:01
notmynameah! meeting time19:02
*** ChanServ sets mode: +v peluse19:06
*** angelastreeter has joined #openstack-swift19:06
acolesnotmyname: will do (re port setting)19:08
taras_hmm19:10
taras_i do have account_autocreate, but still struggling with auth :(19:10
notmynametaras_: ok. I'm in the swift team meeting now in #openstack-meeting. I can help after19:10
taras_ok19:11
taras_curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.019:11
taras_seems to give me the token, but the same user/url stuff won't work with the swift cmd19:11
mahaticnotmyname, where does the meeting happen? not here?19:12
mahaticgot it19:13
Anju_taras : list  is working ?19:13
taras_swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat seems to work19:13
taras_and list seems to work19:13
taras_(no containers)19:13
Anju_hmmm ..then try  swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload container_name object (you want to store)19:14
tdasilvamahatic: in the #openstack-meeting channel19:15
mahaticyup got it19:15
mahatictdasilva, i'm there. Thanks!19:15
*** IRTermite has joined #openstack-swift19:15
taras_Anju_: swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload nc .bashrc19:16
taras_Error trying to create container 'nc': 404 Not Found: <html><h1>Not Found</h1><p>The resource could not be found.<19:16
Anju_hmm19:16
taras_Object PUT failed: http://127.0.0.1:8080/v1/AUTH_test/nc/.bashrc 404 Not Found  [first 60 chars of response] <html><h1>Not Found</h1><p>The resource could not be found.<19:16
taras_wonder if i messed up in saio setup19:16
Anju_check19:17
Anju_logs19:17
tdasilvataras_: can you create a container with curl?19:17
taras_can't19:17
taras_that's something like curl -X PUT -i     -H "X-Auth-Token: AUTH_tka516e3754bab43129a71f17aa57cf5b2"    http://127.0.0.1:8080/v1/AUTH_test/nc  right19:17
taras_?19:17
Anju_this is the path19:18
tdasilvataras_: are you getting the same 404 error?19:19
taras_tdasilva: yes19:20
*** Anju_ has quit IRC19:21
btorchtaras_: are you able to just do a HEAD request against the account and get 2XX back ?19:21
taras_yes19:21
taras_i think i messed up the backend19:21
taras_in saio19:21
*** Anju has joined #openstack-swift19:21
taras_lemme try to fix that19:21
Anjutaras: check the log /var/log/swift/all.log19:23
Anjuyou can find the reason here19:24
Anjuand check that all your services are running19:24
taras_ok, yeah, fixed up stuff in /srv19:26
taras_now things work19:26
taras_sorry for trouble guys19:26
taras_will do the vagrant thing next time19:28
*** jergerber has joined #openstack-swift19:35
*** elambert has quit IRC19:49
*** mkollaro has quit IRC19:54
notmynametaras_: you got it all straightened out?19:56
notmynameAnju: btorch: thanks for helping taras_ out19:57
claygyay another great meeting19:57
portanteclayg, torgomatic: does that article make sense to you?19:57
portanteeither of you, might be better english19:57
* mattoliverau has the flu, so is surprised I survived being awake for the entire meeting!19:57
claygportante: i stopped when I got to the pretty picture with all the nice colors19:57
mattoliverauback to bed, see ya all in a few hours19:57
portantethe code from the article is better: http://lwn.net/Articles/457672/19:58
portanteclayg19:58
*** tgohad has joined #openstack-swift19:59
*** tsg has quit IRC19:59
claygportante: cool, like notmyname said we need to do some homework - i'll probably pull it up on the train home tonight20:00
*** gvernik has quit IRC20:00
portantegreat20:00
portantehopefully, it won't cause you to fall asleep and miss your stop!20:01
claygheheheh20:03
*** tsg has joined #openstack-swift20:03
*** annegent_ has joined #openstack-swift20:05
*** tgohad has quit IRC20:07
notmynamebarrier v. nobarrier were always confusing to me20:07
portanteyes20:08
notmynamebarrier == on == writeback cache is flushed to disk?20:08
portanteI believe so20:08
notmynameie making it a pass-through operation20:08
notmynameok. and that's the default20:08
notmynamenobarrier means it could be written to writeback cache and not persisted20:08
portantebarrier meaning do not initiate another i/o to that volume until the cache is written to disk20:09
portanteyes, I believe that is correct20:09
notmynameand nobarrier is ok if you've got something like a battery-backed cache that can persist the data20:09
portanteyes20:09
portantebut20:09
notmynamethere's always a "but"20:09
portanteyou have to have your disks properly configured to not have a cache in play behind that controller20:09
notmynameright20:10
notmynameit's caches all the way down20:10
portanteI may not be remembering this right, but I believe if a controller with a write-back cache detects a disk with a cache enabled, then the controller's cache becomes write-through and all the I/Os go to the disks20:12
portantewith nobarrier enable or not, IIRC20:12
portanteI could be wrong about that20:12
portanteI'll have to check20:12
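(For reference, the mount option under discussion; this fstab line is an assumed example of a typical XFS mount for a Swift object disk, not something quoted from the chat. With the default barrier behavior, the filesystem flushes the drive's write cache at critical points; 'nobarrier' skips those flushes and, as discussed above, is only safe when a battery-backed or otherwise non-volatile controller cache sits in front of the disks.)

```shell
# Example /etc/fstab entry (device and mount point are hypothetical):
/dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,logbufs=8,nobarrier  0 0
```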
*** Anju has quit IRC20:13
claygnotmyname & portante are digging in to the *real* reason we store three copies20:16
*** Anju has joined #openstack-swift20:16
*** fifieldt_ has joined #openstack-swift20:17
*** fifieldt has quit IRC20:21
btorchportante: I think that depends a lot on the controller as well, some will disable the drive cache when a bbm is used20:30
Anjunotmyname: did you think about my changes (limit check)20:35
*** tdasilva has quit IRC20:36
notmynameclayg: because it's all magic and nobody knows anything!!20:38
notmynameAnju: got a link?20:39
Anjunotmyname:  https://review.openstack.org/#/c/118186/20:39
*** angelastreeter has quit IRC20:40
*** mwstorer has quit IRC20:43
notmynameAnju: looks like clayg had the same concerns I had. also, you'll need to get the unit tests passing20:45
Anjunotmyname: means you will see after test success ?? :P20:48
portantebtorch: yes, and some controllers don't20:48
portante:)20:48
notmynameAnju: well....I still think there's the concern of the subtle api change you're introducing20:51
Anjuawww20:51
notmynameall I'm saying is that I want more than just one or two people to weigh in, and we normally are pretty conservative20:52
Anjunotmyname:  last time you asked about the client handling https://github.com/openstack/python-swiftclient/blob/master/swiftclient/client.py#L46720:54
notmynameAnju: ah ok. so limit could be passed in as 'abc'20:55
Anjunoo..It is giving an error20:56
notmynameportante: fsync'ing the containing directory isn't FS-specific is it?20:56
Anjunotmyname:- it is giving a ValueError: invalid literal for int() with base 10: 'abc' error20:57
notmynameAnju: oh! that's good then20:57
notmynameAnju: where is that error raised? what line?20:58
portanteI don't think it is fs-specific20:58
Anjuclient.py 44120:58
claygnotmyname: it's '%d' % limit20:58
claygso you know -1 is fine, but abc is ValueError20:59
clayggood on original author there20:59
notmynameclayg: of course. I'll blame the headcold20:59
portanteit may be that one fs does not require the fsync, e.g. an os.rename() always ends up persisted to disk for some reason, but I would think the vfs layer would hide that20:59
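(A minimal sketch of the pattern being discussed: fsync'ing the containing directory after a rename so that the directory entry itself, not just the file data, is persisted. durable_rename is a made-up name for illustration, not Swift's actual code; POSIX semantics are assumed.)

```python
import os

def durable_rename(src, dst):
    """Rename src to dst, then fsync the directory that holds dst.
    fsync on the file only persists its data; the rename lives in the
    directory, so the directory fd must be fsync'd too."""
    os.rename(src, dst)
    dir_fd = os.open(os.path.dirname(dst) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)  # persist the directory entry for dst
    finally:
        os.close(dir_fd)
```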
*** angelastreeter has joined #openstack-swift21:01
claygacoles: hey - thanks a ton for all of the comments on https://review.openstack.org/#/c/103777/ - i'd sorta not been looking at it too much giving folks time to digest21:02
claygacoles: it all came out of work on the multi-reconciler patch, I was going out of my way to understand the behaviors when multiple actors were both uploading the same objects with the same timestamp and i ran into those very same 503's that you were seeing21:02
portantewell, that paper we are talking about says that the need to persist the rename of the directory is file system and/or mount option specific21:03
claygwhich led me to my current understanding of how the proxy handles 409's on PUT and the opinion that the result of that handling is sorta crappy/wasteful.21:03
*** annegent_ has quit IRC21:04
Anjuclayg: notmyname : The only changes I made in swiftclient are in shell.py, so I can use the limit option; they are here: http://pastebin.com/pg92A3xi21:05
*** jasondotstar has joined #openstack-swift21:05
*** angelastreeter has quit IRC21:06
Anjuwhen I use limit=abc error is not handled gracefully but the error is : TypeError: %d format: a number is required, not str21:06
*** tsg has quit IRC21:06
Anjunotmyname:  and when I gave limit = 1 it was still the same error... so I changed limit to int(limit)21:11
Anju:)21:11
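(To illustrate the distinction being sorted out above: int() on a non-numeric string raises ValueError, while letting a raw string reach a '%d' format raises TypeError. parse_limit is a hypothetical helper mirroring Anju's int(limit) fix, not actual swiftclient code.)

```python
def parse_limit(value):
    # Coerce a user-supplied limit to int before it reaches '%d'.
    # int('abc') -> ValueError; '%d' % 'abc' -> TypeError
    # ("%d format: a number is required, not str").
    return int(value)
```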
*** miqui has quit IRC21:11
*** dencaval has quit IRC21:16
*** tsg has joined #openstack-swift21:19
*** annegent_ has joined #openstack-swift21:35
*** annegent_ has quit IRC21:37
*** annegent_ has joined #openstack-swift21:38
portantenotmyname: from the XFS FAQ: http://xfs.org/index.php/XFS_FAQ#Q._Which_settings_does_my_RAID_controller_need_.3F21:42
portantesee the last sentence of second paragraph of that section21:43
notmynameportante: yup21:44
*** mahatic has quit IRC21:44
*** angelastreeter has joined #openstack-swift21:53
*** angelastreeter has quit IRC21:57
*** mwstorer has joined #openstack-swift22:03
*** openstack has joined #openstack-swift22:11
*** tab_ has joined #openstack-swift22:14
*** joeljwright has quit IRC22:15
portantenotmyname: does anybody deploy swift on local FSes besides XFS regularly in production?22:17
notmynameportante: not that I know. I've talked to some people who deployed it on an ext variant but they always go back to xfs after seeing what happens with ext when you get a lot of inodes22:19
notmynameportante: I suppose we aren't counting glusterfs here ;-)22:20
*** erlon has quit IRC22:20
portantenotmyname: yes22:21
portante:)22:21
taras_so i tested getting rid of index + using ints for created_at in sqlite22:26
taras_file size goes from 1.7MB to 1MB for my 10K-entry db22:26
taras_redbo: how did you get so many entries into the db?22:27
taras_it takes me about 17ms per insert22:27
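(The ~17ms per insert is roughly the cost of one fsync'd transaction per row; batching many rows into a single transaction amortizes that, which is the idea behind the bulk-insert improvement mentioned earlier in the day. A rough sketch, with a made-up table schema rather than Swift's real container schema:)

```python
import sqlite3

def bulk_insert(conn, rows):
    # One transaction for the whole batch: the 'with conn' block commits
    # (and syncs) once at the end instead of after every row.
    with conn:
        conn.executemany(
            "INSERT INTO object (name, created_at) VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE object (name TEXT, created_at INTEGER)")
bulk_insert(conn, [("obj-%d" % i, i) for i in range(10000)])
```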
*** annegent_ has quit IRC22:28
tab_Let's say that I have 10 machines. Does the proxy service have to live on all the machines?22:30
portantenotmyname, redbo, clayg, torgomatic: can anybody create the conditions for zero-byte files at will with XFS?22:33
*** angelastreeter has joined #openstack-swift22:34
torgomaticportante: not I22:46
openstackgerritTushar Gohad proposed a change to openstack/swift: EC: Make quorum_size() specific to storage policy  https://review.openstack.org/11106722:47
redbothe main source of zero byte files I think is improper shutdown after rsyncs, since rsync doesn't fsync at all.22:48
*** gyee has quit IRC22:59
*** angelastreeter has quit IRC23:01
* clayg has this visual image of torgomatic waving his hand across an object server to exert his will23:02
torgomaticmaybe if there's a strong enough magnet in that hand... ;)23:03
jokke_LOL23:03
claygtab_: no, you can put proxies just on the machines that you want to load balance across - as long as they can talk to the storage nodes listed in the rings23:04
*** jergerber has quit IRC23:06
mattoliverauMorning23:15
*** openstackstatus has quit IRC23:19
*** openstackstatus has joined #openstack-swift23:21
*** ChanServ sets mode: +v openstackstatus23:21
*** sungju has joined #openstack-swift23:21
*** sungju has quit IRC23:22
*** dmsimard is now known as dmsimard_away23:26
*** astellwag has quit IRC23:32
*** astellwag has joined #openstack-swift23:33
*** Anju has quit IRC23:38
*** mwstorer has quit IRC23:39
tab_clayg: thx. So in case I have 10 machines, I can run the proxy service on just one machine and still read and write data to all 10 nodes.23:57
claygtab_: ayup23:57
claygunless that one proxy goes kaboom ;)23:57
tab_that's better than Ceph's radosgw consensus algorithm with Paxos, which in production wants a quorum of machines (which is, I guess, at least 2) with the monitor daemon up for someone to be able to read/write ....23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!