Tuesday, 2016-10-18

*** hogepodge has quit IRC  00:02
<mattoliverau> jamielennox: I'll put it on my list to review before summit and try and get around to it. However I'm a little swamped with work I need to complete before I leave, but will do what I can.  00:06
<jamielennox> mattoliverau: whatever you can - thanks  00:07
<jamielennox> swiftclient is something we should try and discuss at summit anyway  00:07
<mattoliverau> jamielennox: +1  00:07
<zaitcev> I have one small concern. Is this interpolated already? How is that possible?  storage_url = session.get_endpoint(service_type=service_type, interface=interface)  00:07
<zaitcev> Wait, lemme guess. The actual access to keystone must be happening where the Session() is invoked to instantiate...  00:08
*** hogepodge has joined #openstack-swift  00:08
<mattoliverau> jamielennox: you coming to summit? worst case I can review on the _long_ flight from Oz > Barca :)  00:08
<mattoliverau> ahh cool, looks like zaitcev is already looking at it  00:09
<zaitcev> mattoliverau: just with 1 eye  00:09
*** david-lyle has quit IRC  00:09
<mattoliverau> :)  00:09
*** _JZ_ has quit IRC  00:10
<jamielennox> mattoliverau: yea, i'm at summit, and yea, i'm not looking forward to that flight  00:14
<mattoliverau> :)  00:14
<mattoliverau> cool  00:14
*** _JZ_ has joined #openstack-swift  00:14
<jamielennox> zaitcev: the access to keystone is done on demand, this lets it do things like reauthenticate when a token expires  00:17
<jamielennox> this means if you share a session between swiftclient and other clients you don't need to reauthenticate or pass preauth_tokens  00:18
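[The on-demand behaviour jamielennox describes is what keystoneauth1's Session and auth plugins do internally. The stdlib-only toy below sketches just the pattern; `LazyAuthSession` and `CountingAuth` are invented names for illustration, not the real keystoneauth1 API.]

```python
import time

class CountingAuth:
    """Fake auth plugin that counts round-trips to "keystone"."""
    def __init__(self):
        self.calls = 0

    def fetch_token(self):
        self.calls += 1
        return 'token-%d' % self.calls

class LazyAuthSession:
    """Toy stand-in for the on-demand behaviour described above: nothing
    contacts keystone until a token is actually needed, and an expired
    token triggers a transparent re-authentication."""

    def __init__(self, auth_plugin, token_lifetime=3600):
        self.auth = auth_plugin
        self.lifetime = token_lifetime
        self._token = None
        self._expires_at = 0.0

    def get_token(self, now=None):
        # Authenticate lazily: only on first use or after expiry.
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at:
            self._token = self.auth.fetch_token()
            self._expires_at = now + self.lifetime
        return self._token

auth = CountingAuth()
sess = LazyAuthSession(auth, token_lifetime=10)
sess.get_token(now=0)     # first use: one auth round-trip
sess.get_token(now=5)     # still valid: served from cache
sess.get_token(now=15)    # expired: re-authenticates automatically
print(auth.calls)         # -> 2
```

[Sharing one such session across clients means they all reuse the same cached token, which is why no preauth_token needs to be passed around.]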
*** hogepodge has quit IRC  00:20
*** hogepodge has joined #openstack-swift  00:21
*** david-lyle has joined #openstack-swift  00:21
*** dmorita has joined #openstack-swift  00:28
*** diogogmt has quit IRC  00:34
*** david-lyle has joined #openstack-swift  00:36
*** dmorita has quit IRC  00:39
<kota_> good morning  00:40
*** dmorita has joined #openstack-swift  00:46
*** david-lyle has quit IRC  00:47
<zaitcev> ImportError: No module named keystoneauth1  00:47
*** david-lyle has joined #openstack-swift  00:48
<mattoliverau> kota_: morning  00:51
*** vint_bra has quit IRC  00:51
*** david-lyle has quit IRC  00:51
*** delattec has quit IRC  00:52
*** david-lyle has joined #openstack-swift  00:52
*** vint_bra has joined #openstack-swift  00:56
<clayg> well i only just got to looking at patch 387655 and I need to break for a bit - there's plenty to do there tho - hopefully i can move on that after dinner  00:59
<patchbot> https://review.openstack.org/#/c/387655/ - swift - WIP: Make ECDiskFileReader check fragment metadata  00:59
*** david-lyle has quit IRC  00:59
*** david-lyle has joined #openstack-swift  01:01
*** jamielennox is now known as jamielennox|away  01:01
*** lifeless has quit IRC  01:02
*** lifeless has joined #openstack-swift  01:02
*** vint_bra has quit IRC  01:04
*** david-lyle has quit IRC  01:05
*** tqtran has quit IRC  01:05
*** david-lyle has joined #openstack-swift  01:06
*** david-lyle has quit IRC  01:07
*** jamielennox|away is now known as jamielennox  01:08
*** bill_az has quit IRC  01:08
<kota_> mattoliverau: morning  01:08
<kota_> clayg: i will try it too today  01:09
*** klrmn has quit IRC  01:12
*** hogepodge has quit IRC  01:16
*** vint_bra has joined #openstack-swift  01:16
*** trananhkma has joined #openstack-swift  01:30
*** david-lyle has quit IRC  01:32
*** david-lyle has joined #openstack-swift  01:34
*** hogepodge has joined #openstack-swift  01:35
*** trananhkma_ has joined #openstack-swift  01:41
*** trananhkma has quit IRC  01:41
*** stack_ has quit IRC  01:41
*** trananhkma_ is now known as trananhkma  01:42
*** trananhkma has quit IRC  01:50
*** trananhkma has joined #openstack-swift  01:50
*** trananhkma has quit IRC  01:50
*** trananhkma has joined #openstack-swift  01:51
*** vint_bra has quit IRC  01:52
<jamielennox> zaitcev: there's no automatic dependency on keystoneauth1, ideally i would like to just add this to requirements  01:54
<zaitcev> jamielennox: but what package is it?  01:54
<jamielennox> zaitcev: in practice anyone who is passing a session to the client has already got keystoneauth1 installed to have made the session  01:54
<jamielennox> openstack/keystoneauth or pip install keystoneauth  01:55
<jamielennox> yea - the 1 is annoying  01:55
<jamielennox> it was launched in a time where we had lots of compatibility problems  01:55
<zaitcev> Interesting. It looks like RDO does not have anything called "keystoneauth".  01:55
<jamielennox> yum would be python-keystoneauth  01:56
<jamielennox> or maybe python-keystoneauth1  01:56
<jamielennox> it's been out for a while now, RDO must have it  01:56
*** david-lyle has joined #openstack-swift  01:57
<jamielennox> it's a required dependency of pretty much every other client  01:57
*** diogogmt has joined #openstack-swift  02:03
*** david-lyle has joined #openstack-swift  02:05
*** dmorita has quit IRC  02:05
*** AndyWojo has quit IRC  02:19
*** david-lyle has joined #openstack-swift  02:20
*** david-lyle has quit IRC  02:21
*** klrmn has joined #openstack-swift  02:22
*** AndyWojo has joined #openstack-swift  02:23
*** david-lyle has joined #openstack-swift  02:26
*** hogepodge has quit IRC  02:40
<openstackgerrit> Matthew Oliver proposed openstack/swift: Mirror X-Trans-Id to X-OpenStack-Request-Id  https://review.openstack.org/387354  02:52
<clayg> kota_: i'm back on for awhile  02:54
<mattoliverau> ^^ just correcting a commit message  02:55
*** sheel has joined #openstack-swift  02:55
<clayg> X-OpenStack-Request-Id nice  02:55
<clayg> wait i thought the idea with that is that when glance makes a request to swift it could have a transaction associated with that request which we'd then use to track any subrequests etc  02:57
<clayg> do we not expect the osreqid to be passed in from other services?  should it be?  02:57
*** jamielennox is now known as jamielennox|away  03:01
*** hogepodge has joined #openstack-swift  03:04
<kota_> clayg: woow, I was at lunch for just a bit  03:16
<kota_> and now back to my desk  03:16
<mahatic> good morning  03:18
<kota_> mahatic: o/  03:19
*** jamielennox|away is now known as jamielennox  03:33
*** david-lyle has joined #openstack-swift  03:34
*** links has joined #openstack-swift  03:35
*** david-lyle has quit IRC  03:36
<mattoliverau> clayg: according to the bug, it looks like it's just mirroring our trans-id.. that is so we have a common return header in openstack that people can look for.  03:42
<mattoliverau> mahatic: morning  03:43
<mahatic> kota_: mattoliverau: o/  03:43
*** ChubYann has quit IRC  03:50
*** david-lyle has quit IRC  03:51
*** diogogmt has quit IRC  03:53
*** tqtran has joined #openstack-swift  03:57
*** david-lyle has joined #openstack-swift  03:57
*** david-lyle has quit IRC  03:57
*** psachin` has joined #openstack-swift  03:59
*** ChubYann has joined #openstack-swift  04:05
<kota_> hmmm, i noticed that i don't have the permission to add +2 for the stable branch.  04:13
<mattoliverau> kota_: yeah, stable branch is owned by the stable team. notmyname I think is the only swift core with +2 there  04:19
*** david-lyle has joined #openstack-swift  04:19
<mattoliverau> tho I could be wrong... but I don't have it either  04:19
*** david-lyle has quit IRC  04:20
<kota_> mattoliverau: it looks like https://review.openstack.org/#/admin/groups/542,members ?  04:21
<mattoliverau> You might have to ping the stable team for review, or call out to tonyb and entice him with meat to smoke :P But they'll be looking for Swift cores' +1s to know that we think patches are good  04:21
<mattoliverau> kota_: ^^  04:21
*** cshastri has joined #openstack-swift  04:21
*** david-lyle has joined #openstack-swift  04:21
<mattoliverau> kota_: yeah that'll be it  04:21
<kota_> mattoliverau: thx  04:22
<tonyb> kota_: what needs looking at?  04:22
<kota_> tonyb: we have 2 backport patches for stable/mitaka and stable/newton  04:23
<kota_> https://review.openstack.org/#/c/387123/ and https://review.openstack.org/#/c/387172/  04:23
<patchbot> patch 387123 - swift (stable/newton) - Prevent ssync writing bad fragment data to diskfile  04:23
<patchbot> patch 387172 - swift (stable/mitaka) - Prevent ssync writing bad fragment data to diskfile  04:23
<tonyb> kota_, mattoliverau: I'll take a look at them from a stable team POV  04:24
<kota_> the original patch for master has landed and i'm just wondering who could make it land on the stable branches  04:24
<mattoliverau> tonyb: thanks man  04:24
<kota_> tonyb: notmyname may be able to work on it tomorrow-ish though.  04:24
<mattoliverau> tonyb: if you need kota and me to +1 'em or anything then let us know  04:25
<kota_> tonyb: but thanks ;-)  04:25
<tonyb> kota_, mattoliverau: +1 would be good  04:25
<mattoliverau> k, I'll fire up my stable SAIOs so I can run the tests etc.. cause I wanna be sure  04:26
*** m_kazuhiro has joined #openstack-swift  04:26
<kota_> mattoliverau: thanks too :D  04:27
* mattoliverau hasn't built a saio to track stable newton yet... so am building one... glad I have a script to do that thing.  04:37
<tonyb> mattoliverau: scripting for the win!  04:40
<mattoliverau> \o/  04:41
<openstackgerrit> Pete Zaitcev proposed openstack/swift: Add InfoGet handler with test  https://review.openstack.org/387790  04:45
<openstackgerrit> Hanxi Liu proposed openstack/swift: Add links for more detailed overview in overview_architecture  https://review.openstack.org/381446  05:00
*** ppai has joined #openstack-swift  05:06
*** wer has quit IRC  05:20
<clayg> is there no way to shut up liberasurecode 1.1 printing to stdout with pyeclib 1.3.1?  05:23
<clayg> ... other than upgrade liberasurecode?  05:23
<clayg> ... where upgrade ~= build and install from source since no distros package the liberasurecode that goes with pyeclib 1.3.1?  05:24
<mattoliverau> clayg: good question and if you find the answer let me know ;P  05:24
<clayg> as much as I'm sure that if i would just go update my vsaio stuff to clone/build/install liberasurecode from source I'd be happier ...  05:26
<clayg> ... i keep feeling like it's a distraction from what i'm trying to do *right now*  05:26
<clayg> meanwhile - gd, shut up liberasurecode!  05:26
<clayg> i *would* just install an olderish pyeclib but we went and bumped requirements?  so it ends up being a real mess :\  05:27
*** ChubYann has quit IRC  05:28
*** sure has joined #openstack-swift  05:28
*** sure is now known as Guest29440  05:29
*** bill_az has joined #openstack-swift  05:31
<mattoliverau> clayg: I have the xenial vsaio branch on my OSX dev laptop, and swift is logging directly into /var/log/syslog.. is this the normal setup (ie not logging to /var/log/swift/*), a problem with my chef build, or a vsaio xenial bug?  05:31
<Guest29440> hi all, I deployed openstack swift. Right now i am getting all swift related logs in "/var/log/syslog"; i want these logs in the "/var/log/swift/swift.log" file  05:31
<Guest29440> is there any way to do this? please help  05:31
<mattoliverau> Guest29440: yeah, you need to make sure you set up the rules correctly in rsyslog  05:31
*** tqtran has quit IRC  05:32
<zaitcev> RDO installs those automatically  05:32
<Guest29440> mattoliverau: can you elaborate the process  05:32
<mattoliverau> Guest29440: you can either do it via log facility (set in the swift config files) and then catch them and redirect.. including sending them to a remote syslog server if that's what you do.  05:33
<zaitcev> either ... or what?  05:34
<mattoliverau> you can see some examples of how it's done in the documentation, or even in the swift all in one doco.. I'll try and find some (on phone atm).  05:34
<mattoliverau> zaitcev: good point :P  05:34
<clayg> zaitcev: syslog-ng lets you pick out log lines based on pattern matching and shiz  05:34
<mattoliverau> yeah, so it depends on which syslog you're using  05:35
<Guest29440> mattoliverau: yeah, i will try to find it  05:35
<clayg> Guest29440: this section is a quick read -> http://docs.openstack.org/developer/swift/development_saio.html#optional-setting-up-rsyslog-for-individual-logging  05:35
<clayg> might give you a general sense of the idea  05:36
<Guest29440> clayg: thanks, and if i get stuck anywhere i will ask for your help  05:36
<zaitcev> Guest29440: basically https://bugzilla.redhat.com/show_bug.cgi?id=997983  05:36
<openstack> bugzilla.redhat.com bug 997983 in openstack-swift "swift in RDO logs container, object and account to LOCAL2 log facility which floods /var/log/messages" [Low,Closed: currentrelease] - Assigned to zaitcev  05:36
<zaitcev> or, better yet, stand by for flood  05:38
<zaitcev> [root@rhev-a24c-01 ~]# cat /etc/rsyslog.d/openstack-swift.conf  05:38
<zaitcev> # LOCAL0 is the upstream default and LOCAL2 is what Swift gets in  05:38
<zaitcev> # RHOS and RDO if installed with Packstack (also, in docs).  05:38
<zaitcev> # The breakout action prevents logging into /var/log/messages, bz#997983.  05:38
<zaitcev> local0.*;local2.*        /var/log/swift/swift.log  05:38
<zaitcev> &                        ~  05:38
<zaitcev> [root@rhev-a24c-01 ~]#  05:38
<Guest29440> zaitcev: that is for the Red Hat case; here i am using ubuntu 14.04  05:39
*** bill_az has quit IRC  05:41
*** wer has joined #openstack-swift  05:43
*** chlong has quit IRC  05:47
*** wer has quit IRC  05:48
<mattoliverau> Guest29440: in each swift service you can set specific syslog settings e.g: https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L42-L46  05:50
<mattoliverau> once you know the log facility (or set it to what you want) (and you can use a few different facilities if you want to separate your swift logs even more), you can specify rules  05:51
<mattoliverau> Guest29440: for example on the rsyslog side, this is what we do for the Swift All In One (SAIO) dev environment: https://github.com/openstack/swift/blob/master/doc/saio/rsyslog.d/10-swift.conf  05:52
<mattoliverau> Guest29440: you can see again in the SAIO's proxy server configuration we are sending all logs to syslog facility 1: https://github.com/openstack/swift/blob/master/doc/saio/swift/proxy-server.conf#L6  05:54
*** m_kazuhiro has quit IRC  05:55
<Guest29440> i have given parameters like this in proxy-server.conf: http://paste.openstack.org/show/586103/  05:55
* zaitcev facepalms  05:55
<Guest29440> mattoliverau: is it correct or not  05:56
*** dmorita has joined #openstack-swift  05:56
<mattoliverau> Guest29440: the log address needs to stay /dev/log as that's the syslog device we write logs to  05:57
<mattoliverau> and now you have no log facility set  05:57
<mattoliverau> you don't tell swift where to log (ie /var/log/swift/), you just tell swift where syslog is and what facility to tag the logs as.  05:59
<mattoliverau> then in syslog you say things coming from this facility write them to /var/log/swift/<something>.log  05:59
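[On the swift side, the split mattoliverau describes looks roughly like the fragment below. This is a hedged sketch with illustrative values; `log_name`, `log_facility`, `log_level` and `log_address` are real swift conf options, but the choices here are just an example.]

```ini
# Hypothetical minimal proxy-server.conf fragment: swift only picks a
# syslog device and a facility tag; the file paths live entirely in
# the syslog daemon's configuration.
[DEFAULT]
log_name = proxy-server
log_facility = LOG_LOCAL0
log_level = INFO
# log_address defaults to /dev/log and should normally stay that way
```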
*** dmorita has quit IRC  06:00
*** dmorita has joined #openstack-swift  06:00
*** rcernin has joined #openstack-swift  06:01
*** chlong has joined #openstack-swift  06:01
<mattoliverau> you could use a few different facilities (LOG_LOCAL0, LOG_LOCAL1) and then separate say the proxy from storage, or use more and separate the eventual consistency engine from the storage node requests and proxy, etc.. really the sky's the limit  06:01
*** links has quit IRC  06:01
<Guest29440> mattoliverau: is this the correct configuration, and tell me how i can write into the /var/log/swift/<something>.log file  06:02
<Guest29440> http://paste.openstack.org/show/586105/  06:02
<mattoliverau> Sure, so now you're saying send to /dev/log and tag with the LOG_LOCAL0 facility. now you need to tell rsyslog (if that's what you're using) to filter on LOG_LOCAL0.  06:05
<mattoliverau> Syslog has different levels, for things like warnings, errors, debug, info level messages etc. You can just send them all to one log, or separate them some more (like the saio is doing).  06:06
<mattoliverau> Guest29440: so a very basic setup (just dump everything on log facility 0 to, say, /var/log/swift/swift.log) would be to do something like: create the file /etc/rsyslog.d/10-swift.conf and then in that file place something like http://paste.openstack.org/show/586107/  06:12
<mattoliverau> then make sure the /var/log/swift dir exists and the permissions are correct for syslog (look at /var/log/).. then restart syslog  06:13
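[The paste links above have since expired. A minimal /etc/rsyslog.d/10-swift.conf of the kind described (everything tagged LOCAL0 into one file, discard-style matching zaitcev's earlier paste) would be roughly this sketch; paths and facility are illustrative:]

```
# catch everything swift tags with the LOCAL0 facility ...
local0.*    /var/log/swift/swift.log
# ... and stop it from also landing in /var/log/syslog
&           ~
```

[Then, per the steps above, something like `mkdir /var/log/swift`, give it syslog-friendly ownership (on Ubuntu typically syslog:adm), and restart rsyslog (`service rsyslog restart` on 14.04).]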
<clayg> whoa!  go mattoliverau!  06:14
<Guest29440> mattoliverau: thanks, i got logs in /var/log/swift/swift.log  06:15
<mattoliverau> the saio rsyslog file i linked before shows examples of how you can split it up some more.. and if you want to push logs to a remote syslog server you can do that too. This is where reading the rsyslog documentation can tell you more.. really the sky's the limit  06:15
<mattoliverau> Guest29440: \o/  06:15
<Guest29440> ohhh!!! now i will separate the logs  06:15
<mattoliverau> Guest29440: now you can play with separating them if you find that log is way too verbose and noisy (which it would be) :)  06:16
*** pcaruana has joined #openstack-swift  06:18
*** dmorita_ has joined #openstack-swift  06:22
*** dmorita has quit IRC  06:22
*** x1fhh9zh has joined #openstack-swift  06:24
<Guest29440> mattoliverau: do you have any idea about container synchronization? please let me know  06:28
<mattoliverau> Guest29440: in what regard? syslog separating, container sync, container replication, something else?  06:30
*** hogepodge has quit IRC  06:30
<Guest29440> mattoliverau: container sync  06:30
<mattoliverau> Guest29440: ahh ok, what are you trying to do? sync between 2 clusters?  06:33
<Guest29440> as of now i want to sync between two clusters  06:33
<Guest29440> mattoliverau: and is it possible to sync in the same cluster?  06:34
<mattoliverau> Guest29440: yeah, again you can look at how the SAIO has set it up, because it has container sync set up to sync with itself, as we need that to test container sync code.  06:36
*** admin6 has joined #openstack-swift  06:37
<Guest29440> mattoliverau: do you have any reference links regarding this  06:37
<mattoliverau> Guest29440: I don't know how much longer I'll be around as it's getting to the end of my day and i could get pulled away any minute :)  06:37
<mattoliverau> Guest29440: sure :)  06:37
*** x1fhh9zh has quit IRC  06:37
<mattoliverau> let me find some starting points for you :)  06:37
<Guest29440> mattoliverau: thank you for giving your time  06:38
<mattoliverau> Guest29440: so start by reading the container sync overview here: http://docs.openstack.org/developer/swift/overview_container_sync.html  06:38
*** _JZ_ has quit IRC  06:39
<mattoliverau> Guest29440: the high level idea is, you need to have a realm config for all clusters that trust each other.. in your case it can just contain one cluster  06:39
<mattoliverau> the realm key is a unique secret that the clusters will use to authenticate each other  06:40
<mattoliverau> once this trust is set up, you then mark containers to sync with another cluster (as it appears in the realm file) and to what account/container it syncs with. Again doing that involves having container level secret keys for extra security.  06:42
*** abhitechie has joined #openstack-swift  06:42
<mattoliverau> in your case, you'd point to the same cluster  06:42
<Guest29440> the realm key we need to set like a header while creating the container?  06:42
<mattoliverau> The saio realm file is here: https://github.com/openstack/swift/blob/master/doc/saio/swift/container-sync-realms.conf  06:43
*** hseipp has joined #openstack-swift  06:43
<mattoliverau> the realm key for the clusters is in the conf. the container keys are set as a header, yes  06:43
<mattoliverau> sorry, need to step away for a bit  06:43
<mattoliverau> Guest29440: when you add the key to a container, it's just setting metadata, so you can do that or change it anytime. So no, it doesn't have to be at creation  06:51
*** x1fhh9zh has joined #openstack-swift  06:52
<Guest29440> mattoliverau: yeah, i am doing it right now; if i get any doubts i'll let you know  06:53
<mattoliverau> Guest29440: great, good luck. I'm always in channel, even when I'm not here, so ping if you have any other questions  06:55
*** zhengyin has quit IRC  06:55
*** zhengyin has joined #openstack-swift  06:57
<Guest29440> mattoliverau: here is my container-sync-realms.conf file: http://paste.openstack.org/show/586114/  06:59
<Guest29440> while setting up the sync i am running "swift post -t '//realm1/192.168.2.187/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container2' -k 'suresh' container1"  07:00
<Guest29440> the error is "Container POST failed: http://192.168.2.187:8080/v1/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container1 400 Bad Request   No cluster endpoint for 'realm1' '192.168.2.187'"  07:00
<mattoliverau> Guest29440: because you are only using the 1 cluster, you only need 1 cluster defined  07:00
<Guest29440> mattoliverau: then what do i need to mention in this file  07:02
*** tesseract has joined #openstack-swift  07:03
*** tesseract is now known as Guest85855  07:03
<mattoliverau> Guest29440: in your case it should be //realm1/clustername1/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container2  07:05
<Guest29440> yeah, i got it  07:05
<mattoliverau> Guest29440: //<realm name from config>/<cluster name (after cluster_) from realm config>/<account>/<container>  07:05
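[For the record (the paste above has expired): a realm file and matching sync-target command of the shape mattoliverau describes would look roughly like the following. The realm key, cluster name and endpoint are illustrative, reusing the values that appear in this conversation.]

```ini
# /etc/swift/container-sync-realms.conf (hypothetical single-cluster realm)
[realm1]
key = realm1key
cluster_clustername1 = http://192.168.2.187:8080/v1/
```

```shell
# mark container1 to sync into container2 in the same cluster
swift post -t '//realm1/clustername1/AUTH_51a527847ebf4004a1e0f4b133fbdcca/container2' \
    -k 'suresh' container1
```

[Note the second path component is the cluster *name* from the realm file (the part after `cluster_`), not the cluster's IP, which is exactly the mistake behind the 400 above.]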
<mattoliverau> Guest29440: nice :)  07:05
<Guest29440> just now i created the container  07:05
<Guest29440> mattoliverau: i created two containers named "container1" & "container2" and uploaded an object to "container1", but when doing "swift list container2"  07:08
<Guest29440> it is not showing that object  07:09
<mattoliverau> is your container-sync daemon running?  07:09
<mattoliverau> and container-sync also runs on an interval (runs every now and then, which you can set).  07:10
<Guest29440> mattoliverau: yeah, it is running  07:10
<mattoliverau> then maybe it hasn't run since you've added the objects  07:10
<Guest29440> mattoliverau: just now i restarted the service  07:11
<mattoliverau> oh, and you need to make sure you have container sync in your pipeline on the proxy, and that if you've changed the realm file you may need to make sure the changes have taken effect. I've mainly only run container sync for development and testing patches, so I'm not too experienced with what exactly needs to be restarted or not  07:13
*** klrmn has quit IRC  07:13
<mattoliverau> well I have to go to dinner.. so I'm out for now. Night swift land  07:14
*** amoralej|off is now known as amoralej  07:16
*** jlwhite has quit IRC  07:16
*** mahatic has quit IRC  07:16
<Guest29440> mattoliverau: ok, thank you for your patience and help...!!!  07:17
*** mahatic has joined #openstack-swift  07:17
*** clayg has quit IRC  07:18
*** jlwhite has joined #openstack-swift  07:18
*** rledisez has joined #openstack-swift  07:18
*** clayg has joined #openstack-swift  07:19
*** ChanServ sets mode: +v clayg  07:19
*** wer has joined #openstack-swift  07:19
*** SkyRocknRoll has joined #openstack-swift  07:25
<clayg> i just noticed that an invalid or missing X-Object-Sysmeta-Ec-Frag-Index on PUT to the object server raises an unhandled DiskFileError - the object server returns it as a 500 with a traceback in the body :\  07:25
*** mathiasb has quit IRC  07:26
<clayg> ... seems like 400 would be better, i.e. same as we handle a missing x-timestamp  07:27
<clayg> but i'm not sure if it's worth a wishlist bug - might not hurt?  07:27
*** mathiasb has joined #openstack-swift  07:28
*** jordanP has joined #openstack-swift  07:28
*** jordanP has quit IRC  07:28
*** wer has quit IRC  07:28
*** wer has joined #openstack-swift  07:30
*** geaaru has joined #openstack-swift  07:30
*** deep_ has joined #openstack-swift  07:34
<deep_> Hi, I am trying to set up a swift proxy under httpd on RHEL. I am facing an issue with S3: bucket create is returning an error about a signature mismatch. The bucket is getting created but s3curl is getting the error. Any clue what the issue can be?  07:36
*** hogepodge has joined #openstack-swift  07:40
*** wer has quit IRC  07:47
*** mathiasb has quit IRC  07:48
*** mathiasb has joined #openstack-swift  07:49
*** _JZ_ has joined #openstack-swift  07:50
*** wer has joined #openstack-swift  07:52
*** takashi has joined #openstack-swift  07:53
*** _JZ_ has quit IRC  07:59
*** openstackgerrit has quit IRC  08:04
*** openstackgerrit has joined #openstack-swift  08:05
*** dmorita_ has quit IRC  08:05
*** natarej_ has joined #openstack-swift  08:05
*** _JZ_ has joined #openstack-swift  08:08
*** natarej has quit IRC  08:08
<Guest29440> Hi all, i am doing container synchronization but it is not syncing the objects to the other container  08:09
*** x1fhh9zh has quit IRC  08:09
<Guest29440> please can someone help..!!  08:09
*** x1fhh9zh has joined #openstack-swift  08:10
<clayg> deep_: what's the error that s3curl reports?  does swift log a 201 w/o error?  08:11
<clayg> Guest29440: is the container-sync daemon running?  Does it leave any errors in the logs?  08:12
<Guest29440> clayg: yes, it is running  08:13
<Guest29440> In the logs it is showing: container-sync: Configuration option internal_client_conf_path not defined. Using default configuration, See internal-client.conf-sample for options  08:13
*** _JZ_ has quit IRC  08:21
<openstackgerrit> Ondřej Nový proposed openstack/swift: Set owner of drive-audit recon cache to swift user  https://review.openstack.org/387591  08:31
*** joeljwright has joined #openstack-swift  08:31
*** ChanServ sets mode: +v joeljwright  08:31
<kota_> oh my...  08:34
<kota_> looking at patch 387655, i'm realizing pyeclib now handles the assertions on the fragment metadata wrongly.  08:35
<patchbot> https://review.openstack.org/#/c/387655/ - swift - WIP: Make ECDiskFileReader check fragment metadata  08:35
<kota_> if we had a corrupted header in the fragment, we should return -EBADHEADER, but currently it causes -EINVALIDPARAMS, which means something is wrong on the caller side.  08:36
<kota_> that's one of the problems with the current one.  08:36
<onovy> hi guys. we are in progress of upgrading swift 2.5.0 -> 2.7.0 in production. We upgraded the first store and just after that upgrade, the obj/replication/rsync metrics from recon jumped up (https://s9.postimg.org/l45qoh0q7/graph.png). In the object-replicator log i see many rsyncs of .ts files.  08:37
<onovy> any idea?  08:37
<kota_> one more problem is that current liberasurecode doesn't test anything for the invalid_args test cases, i found while i was writing the test.  08:37
<clayg> i would guess it's the new old-suffix tombstone invalidation  08:38
<clayg> onovy: ^  08:38
<onovy> clayg: so we should continue with the upgrade and it will be fixed after the last store?  08:38
<kota_> liberasurecode had the test cases but they're currently off because of some stuff from while we were refactoring the tests.  08:38
<kota_> agh.  08:39
<kota_> clayg: !?!?  08:39
<clayg> kota_: i'm not sure if you're saying libec has yet another bug you'll probably end up fixing while we try to figure out how as a community we're going to adapt to ownership of that library  08:39
<clayg> or like everything with patch 387655 is bollocks because new libec isn't going to pop on invalid frags?  08:39
<patchbot> https://review.openstack.org/#/c/387655/ - swift - WIP: Make ECDiskFileReader check fragment metadata  08:39
<kota_> clayg: are you in barcelona already?  08:39
<clayg> kota_: no, that's like next week  08:39
*** dmorita has joined #openstack-swift  08:40
<clayg> i do still need to work on cschwede's and my slides some more before then tho  08:40
<kota_> clayg: that means you're a night man.  08:40
<kota_> (mid-night man)  08:40
<clayg> onovy: i'm... hesitant to make that recommendation - i don't really know the situation - but I think I can find you the patch and we can think about it?  08:41
<kota_> clayg: that's able to pop the invalid frag but i don't like catching the error as ECDriverError as acoles is doing now: https://review.openstack.org/#/c/387655/1/swift/obj/diskfile.py@53  08:41
<patchbot> patch 387655 - swift - WIP: Make ECDiskFileReader check fragment metadata  08:41
<clayg> onovy: so I *did* say that it will be fine -> https://review.openstack.org/#/c/346865/  08:42
<patchbot> patch 346865 - swift - Delete old tombstones (MERGED)  08:42
<kota_> because ECDriverError is an abstraction of all EC errors, including something like no available backend.  08:42
*** admin6 has quit IRC  08:42
<kota_> I don't like the auditor doing quarantine if the backend is not available.  08:42
<kota_> so we need to catch a more strict error like ECInvalidFragmentMetadata, imo.  08:43
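[kota_'s objection can be sketched with a stdlib-only toy. The exception names mirror the pyeclib classes being discussed, but the classes below are local stand-ins, and `audit_fragment` is an invented illustration, not Swift's actual auditor code: catching the broad base class would quarantine fragments even when the failure is environmental.]

```python
class ECDriverError(Exception):
    """Stand-in for pyeclib's base EC exception."""

class ECInvalidFragmentMetadata(ECDriverError):
    """The fragment header itself is corrupt."""

class ECBackendInstanceNotAvailable(ECDriverError):
    """The EC backend library is missing: an environment problem."""

def audit_fragment(check):
    """Quarantine only on real corruption, not on environmental errors."""
    try:
        check()
    except ECInvalidFragmentMetadata:
        return 'quarantine'   # the data really is bad
    except ECDriverError:
        return 'log-error'    # e.g. backend unavailable: don't quarantine
    return 'ok'

def corrupt_frag():
    raise ECInvalidFragmentMetadata('bad header')

def missing_backend():
    raise ECBackendInstanceNotAvailable('libec not installed')

print(audit_fragment(corrupt_frag))     # -> quarantine
print(audit_fragment(missing_backend))  # -> log-error
print(audit_fragment(lambda: None))     # -> ok
```

[With a bare `except ECDriverError` both failure cases would take the quarantine path, which is exactly the behaviour being objected to.]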
onovyclayg: this patch is in 2.10.0 only08:44
claygonovy: oh, well then my first guess was not correct!  See - good thing you didn't listen to me.08:44
claygwhat's happening again now?08:44
onovyclayg: upgraded one store to 2.7.0 from 2.5.0. recon metric jumped up to high number08:45
claygkota_: ok, i like that - is a more appropriate error available?  i think backend not available would fire earlier when trying to create the policies08:45
claygone "store" ~= one "node" or zone or cluster?08:46
onovyhttps://s12.postimg.org/nchht94n1/graph2.png this show min/avg/med/max from whole cluster08:46
onovyhttps://s9.postimg.org/l45qoh0q7/graph.png this is sum08:46
kota_clayg: sure that. anyway, i have to make sure which error can be raised there though.08:46
kota_clayg: i think ECInalidFragmentMetadata is suite to catch.08:47
kota_sweet08:47
claygonovy: and what metric is this again?08:47
kota_but currently pyeclib doesn't return that when the metadata currupted :/08:47
kota_with a bug in liberasurecode08:47
kota_and I tried to fix it, it's really trivial and easy.08:48
kota_and make a test for that but nothing failed w/o patch08:48
claygonovy: maybe suffix.syncs?08:48
onovyclayg: replication/object/replication_stats/rsync from recon middleware08:49
kota_making sure what happen in the liberasurecode, actually liberasurecode doesn't test any failure case i noticed.08:49
onovyclayg: what is suffix.syncs?08:49
claygstatsd metrics - you're turning recon dumps into timeseries data?08:49
onovyno, not stats. i'm loading recon data over http to one server and aggregating them into rrd08:50
kota_so hopefully, 1. fix test cases in liberasurecode, 2. fix liberasurecode bug, 3. catch good error in Swift but it requires any other works like dependency managements.08:50
kota_:/08:50
onovyOct 17 18:56:02 sdn-swift-store1 swift-object-replicator: <f+++++++++ 3da/db6db09b842616d169dec89348c753da/1476722950.35238.ts08:50
onovyOct 17 18:56:02 sdn-swift-store1 swift-object-replicator: Successful rsync of /data/hd1-1T/objects/449389/3da at sdn-swift-store13-repl.###::object/hd11-1.2T/objects/449389 (0.186)08:50
onovythis is in log08:50
claygonovy: that's a pretty recent tombstone?08:51
claygare they *all* from yesterday?08:51
onovychecking logs08:52
onovyhttp://paste.openstack.org/show/586135/ , upgraded at ~11am08:54
onovybut i'm trying to check your question with cut/sed/... gimme sec :)08:54
onovyyep, (almost) all of them after 11am is from yesterday08:57
openstackgerritClay Gerrard proposed openstack/swift: WIP: Make ECDiskFileReader check fragment metadata  https://review.openstack.org/38765508:57
onovyclayg: this is pretty strange: http://paste.openstack.org/show/586136/08:58
onovyfirst row: count, second: hour of day08:58
onovysame from NOT-upgraded node: http://paste.openstack.org/show/586137/08:59
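The cut/sed per-hour counting above can be sketched in a few lines; the regex approximates the replicator log lines pasted earlier, and the helper is hypothetical, not part of Swift:

```python
import re
from collections import Counter

# Count rsync'd tombstones (.ts) per hour-of-day from
# swift-object-replicator log lines (format approximated from the
# sample lines pasted above).
TS_LINE = re.compile(r'^\w+ +\d+ (\d{2}):\d{2}:\d{2} .*'
                     r'swift-object-replicator: <f\++ \S+\.ts$')

def ts_rsyncs_per_hour(lines):
    counts = Counter()
    for line in lines:
        m = TS_LINE.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```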
Guest29440kota: hi, Do you have any idea about container synchronization08:59
*** wer has quit IRC09:00
kota_Guest29440: can i make sure what you mean by 'container synchronization'?09:00
onovyso if i understand it correctly, this rsync of .ts was always there (which could be fine), but the count of rsyncs jumped up after the one-node upgrade09:00
kota_across over the different swift clusters?09:00
onovyafk lunch09:00
Guest29440kota: yes and i am trying in same cluster09:01
*** wer has joined #openstack-swift09:01
*** x1fhh9zh has quit IRC09:02
kota_2 same cluster?09:02
kota_Guest29440: container sync is an option to sync over the 2 clusters, http://docs.openstack.org/developer/swift/overview_container_sync.html09:02
kota_a user can set a sync target container on a container.09:03
*** openstackgerrit has quit IRC09:04
*** openstackgerrit has joined #openstack-swift09:04
Guest29440kota: i followed the same link but i am not seeing objects which are uploaded to one in another container09:05
*** winggundamth has quit IRC09:05
kota_Guest29440: ok,  do you have the permission to figure out what happens in the cluster?09:06
Guest29440kota: can you elaborate what i need to do in cluster09:14
claygGuest29440: sorry, was looking at other stuff - the internal-client.conf log message is not a problem09:15
claygGuest29440: can you share your realms.conf and the metadata you set on the container - maybe it's something obvious?09:16
claygGuest29440: otherwise maybe container sync is failing to identify and process the container - or it's trying to process it and failing to sync data somehow09:17
claygif it's the latter I would think there'd be some noise in the logs about it09:17
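For reference, the two pieces clayg asks for look roughly like this; realm/cluster names, URLs, and keys are placeholders, following the container sync docs linked earlier:

```ini
# /etc/swift/container-sync-realms.conf (placeholder values)
[realm1]
key = realm1key
cluster_clustera = http://clustera-proxy:8080/v1/

# and on the source container, via the swift client:
#   swift post -t '//realm1/clustera/AUTH_account/dest_container' \
#       -k 'containerkey' source_container
```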
openstackgerritKota Tsuyuzaki proposed openstack/liberasurecode: Fix liberasurecode skipping a bunch of invalid_args tests  https://review.openstack.org/38787909:18
kota_ah, looks like clayg knows something more than me.09:18
kota_it looks like tsg is absent in this channel.09:19
* kota_ is going to ping him tomorrow morning09:19
claygkota_: for backport I don't think we can expect liberasurecode to be repackaged09:20
*** admin6 has joined #openstack-swift09:21
clayg... can we?09:21
kota_clayg: good point, so we need to fix the auditing with as...09:21
claygkota_: for Guest29440 on container-sync I don't know nothing - and i'm going to sign off shortly09:22
claygkota_: for the ec auditor invalid frag data - I pushed up what I have so far - there's still some tests that need to be fixed - and I don't think I have the quarantine behavior on the object-server quite right yet (need some tests for invalid frag data in obj.test_server)09:23
claygI think the TODO's in the commit are correct (cc @ acoles)09:24
onovyclayg: back09:24
*** acoles_ is now known as acoles09:24
kota_clayg: i'm wondering, who sets disk chunk read size for the reader in the auditor.09:24
kota_that could come from policy.fragment_size?09:25
acolesgood morning09:25
kota_acoles: good morning09:25
joeljwrightacoles: good morning09:25
claygkota_: it's tunable - and I think documented that the auditor will pass its value into the dfm - so you can tune your auditor io differently than the object server09:26
*** winggundamth has joined #openstack-swift09:26
claygon the server too-big iops can gum up the reactor - on the auditor it's nice to have big fat iops09:26
kota_i think we need *at least* pyeclib 1.3.0 if we are using the auditor backport in stable/mitaka anyway, because policy.fragment_size probably causes a memory leak.09:26
clayggreat - so we don't have to do backports!?09:27
kota_and if we don't make the chunk size the same as policy.fragment_size, the get_metadata call doesn't fit the alignment of fragments.09:27
claygonovy: did the rsync spike level off shortly after the upgrade?  or is it still going?09:27
Guest29440kota: here isa my container-sync-realms.conf09:27
acoleskota_: clayg where are we at? I saw clayg pushed a new patchset, do I need to pick up anything?09:27
Guest29440http://paste.openstack.org/show/586145/09:27
kota_not sure, when I added a mitigation for that, i think it happened in mitaka-newton.09:28
claygacoles: i didn't get started until after dinner - i just cleaned up some tests09:28
claygacoles: I took a stab at the early quarantine - but I think it only really works in the auditor currently09:29
claygso maybe starting on a obj_server test to read frag_archive with invalid data in it is next thing todo09:29
kota_hmm, i have to study container-sync-realms.conf, i've been slacking on that :/09:29
claygit's either that or work on fixing the app_iter Range tests in diskfile (blargh)09:29
acolesclayg: ack09:30
kota_acoles: i didn't complete the review on the auditor yet, but I did confirm some of my concerns in pyeclib/liberasurecode09:31
acolesclayg: so i am on board with you and Sam re doing early quarantine (on first bad frag), I wasn't sure if we got some of that for free if the reader close method was called even when the reader wasn't fully read.09:31
kota_and found another problems a lot :.(09:31
acoleskota_: what's the liberasure code issue? is that related to the auditor patch?09:31
kota_s/another/other/09:31
acoleskota_: :(09:32
onovyclayg: the graph is current, so it's still going09:32
onovyspike is about ~24 hours09:32
kota_acoles: at first, i'd like to change the error handling not to catch ECDriverError. That is because it catches every error in the driver, including no backend available.09:32
kota_i think ECInvalidFragmentMetadata is good for that, which means corrupted fragment metadata, exactly the case where we want to check the fragment bytes.09:33
acoleskota_: ah ok, makes sense09:33
onovyclayg: https://github.com/openstack/swift/blob/stable/mitaka/swift/obj/replicator.py#L294 this value is in graph09:34
kota_but 1. liberasurecode has a bug where it doesn't return the error when the metadata is corrupted09:34
claygonovy: it's updated in update() too I think?09:34
acolesnot good09:34
kota_2. liberasurecode is skipping all invalid_args tests including the corrupted metadata.09:34
kota_the second one I couldn't believe :(09:35
onovyyep, https://github.com/openstack/swift/blob/stable/mitaka/swift/obj/replicator.py#L46709:35
kota_acoles:^^09:35
acoleskota_: uh? so how come the tests in my patch worked? what error *was* I provoking?09:35
kota_acoles: ah, the second one is not really related to yours.09:36
acoleskota_: I see "liberasurecode[97679]: Invalid fragment, illegal magic value"09:36
kota_yeah09:36
deep_clayg: This is the error <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><RequestId>txcd3ec5c31efd45289468e-005805ed4e</RequestId></Error>09:37
acolesso what is skipped?09:37
acoleskota_: ^^09:37
kota_acoles: ok, explain step by step09:37
kota_let me explain09:37
patchbot(let <variable> = <value> in <command>) -- Defines <variable> to be equal to <value> in the <command> and runs the <command>. '=' and 'in' can be omitted.09:37
acoleslol09:37
kota_sorry patch bot09:37
claygdeep_: that's for the swift3 middleware - kota_ knows everything swift309:37
kota_so busy!?!?09:38
claygdeep_: unfortunately - he also knows everything about libec - and we're sort of in crisis :D09:38
acolescan we spawn another kota?09:38
kota_acoles: great idea09:38
kota_JK09:38
joeljwrightdammit kota_ stop being so useful!09:38
claygkota.fork()09:38
kota_so, back to liberasurecode09:38
*** ppai has quit IRC09:38
acolesfor b in bugs: wait(kota)09:39
kota_acoles: "liberasurecode[97679]: Invalid fragment, illegal magic value", this is coming from the sanity check in older liberasurecode09:39
kota_acoles: IIRC, during liberasurecode development history, we needed to validate whether the fragment can be decoded or not, from a compatibility perspective.09:40
deep_clayg, kota_ : :) What i have debugged till now: for the createbucket call i see one PUT followed by a GET; for the PUT request the keystone check is successful but for the GET it returns the error "Authorization failed. Credential signature mismatch "09:40
kota_because sometimes we need to update the api or structure of the fragments09:40
acoleskota_: I have liberasurecode-dev 1.1.009:40
kota_and the magic value works for the check, 'this is compatible with your engine'09:41
claygonovy: so either nothing has really changed, and reporting has changed and it's reporting is either more or less accurate now - or we're doing more rsync's - which means we're not just invalidating suffixes - but we have out of sync suffixes09:41
onovyyep. i can't found anything related to reporting change09:41
kota_deep_: will ack, sorry, I'm not so quick to think/type because I'm not a native English speaker09:41
kota_back to liberasurecode again09:42
claygonovy: is there any difference in the *requests* coming into the upgraded node?  last I looked our object servers don't emit statsd metrics per status code like the proxies do (so annoying!) but you could try to parse logs or something?09:42
acoleskota_: so each engine has a magic value and liberasurecode checks it is good?09:42
kota_acoles: so, liberasurecode[97679]: Invalid fragment, illegal magic value means the fragment is incompatible with the version you're currently using.09:42
kota_acoles: yes09:42
acolesand is the magic the first N bytes?09:43
claygonovy: if you go poke at /var/cache/swift/object.recon do the numbers make sense?  Are they way higher on the one node?  Is the cycle time longer?  partition timing higher?09:43
onovyclayg: requests count, type and status codes are same for new and old node09:43
kota_acoles: but actually it works with your patch because the corrupted fragment metadata is absolutely incompatible.09:43
kota_acoles: yes09:43
claygeverything looks like "yes, more rsyncs on this node" - many factors back up the reported metric?09:43
kota_acoles: but unfortunately that returns ECInvalidParameter, which means the caller is doing something wrong.09:43
acoleskota_: ah, but if the corrupted data just happens to be equal to a valid magic for an engine, then we would get no exception??09:44
kota_acoles: sure09:44
acoleskota_: right IIRC sometimes I saw "Invalid args" or similar from liberasurecode09:44
onovyclayg: http://paste.openstack.org/show/586137/ this shows more rsync on upgraded node. but not "much more", just ~ 10%?09:45
onovyehm sry, that was not-upgraded node. upgraded is here: http://paste.openstack.org/show/586136/09:45
acoleskota_: but, for the bug we know about (ssync) the corruption will always be that the start of the first frag is either "PUT", "POST" or "DELETE". I hope none of those are EC magic values ?!?09:46
deep_clayg, kota_ : np, take your time. just putting down the complete problem and debugging so far.  PUT and GET are using the same ec2 credentials. I am not able to find why and who invokes the GET call. so far i got to swift3/request.py function authenticate() --         sw_resp = sw_req.get_response(app)         if not sw_req.remote_user:             raise SignatureDoesNotMatch() -- from here the s3curl error is returned.09:46
acoleskota_: wait, maybe I am wrong there, need to think some more09:46
kota_acoles: i think so, so probably catching ECInvalidParameter is an option instead of ECDriverError09:46
onovyclayg: https://s22.postimg.org/dteqvt7sx/graph3.png sum of obj/replication/time metric from whole cluster09:47
onovyso time is +- same09:47
kota_acoles: current magic value, https://github.com/openstack/liberasurecode/blob/master/include/erasurecode/erasurecode.h#L31909:48
onovyclayg: only this (rsync) metric jumped up. all other metrics are fine09:48
claygonovy: that's good - one bad signal is normally less scary than a bunch of bad signals09:49
acoleskota_: I think I may be wrong - the *examples* we have seen always had zero bytes of the reconstructed frag sent so that the start of the actual sent data was the start of next subrequest, but in general I'm not sure that is guaranteed e.g. if reconstructor rebuild timed out part way through a rebuild09:49
onovyclayg: :)09:49
onovyonly other problem is drive-audit metrics, which i send review/patch for. but i think it's unreleated09:50
claygonovy: I would at this point start to lean towards maybe older nodes are mis-reporting somehow - or that the source of that signal has some unknown scalar factor away from the norm that differs between old and new09:50
kota_acoles: yes, exactly09:50
claygonovy: i might even upgrade another node and see if it does the same thing (probably will) but look for other signals that may indicate if movement in that metric is "bad"09:51
clayg... not sure if you would agree09:51
onovyclayg: i will try to stop object-expirer on that node09:51
acoleskota_: oh, so that struct has 59 bytes of metadata first then the magic. That is interesting, because when i first wrote my test I corrupted the first 12 bytes and saw no error! then i increased to corrupt 64 and saw the bad magic error. So is it the first 59 bytes of metadata checks that are skipped?09:51
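acoles' observation can be sketched like this; the 59-byte offset follows the struct discussion above, but the magic constant and byte order here are illustrative placeholders, not liberasurecode's real values:

```python
import struct

# Illustrative check of the layout discussed above: ~59 bytes of
# fragment metadata, then a 4-byte magic. MAGIC is a made-up value;
# the real constant lives in liberasurecode's erasurecode.h.
FRAG_META_SIZE = 59
MAGIC = 0x0B0C5ECC  # placeholder

def magic_ok(header):
    (value,) = struct.unpack_from('<I', header, FRAG_META_SIZE)
    return value == MAGIC
```

Corrupting only the first 12 bytes never touches the magic, which would match acoles seeing no error until 64 bytes were corrupted.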
onovyand than i will try to upgrade another one09:52
onovyand let's see what happens09:52
*** dmorita has quit IRC09:52
claygonovy: good luck; may the force be with you09:52
*** ppai has joined #openstack-swift09:52
claygacoles: how many examples do you have?09:52
kota_acoles: but unfortunately there's currently no way to detect the corruption if the magic value happens to be the same, when using liberasurecode < 1.2.009:52
onovyclayg: :) btw i found something09:52
acolesclayg: 209:52
onovythis: obj/expirer/expired_last_pass jumped at yesterday morning too09:53
kota_if using liberasurecode >=1.2.0, that may be caught with the header checksum.09:53
onovyso if we have more expired objects, we have more .ts and more rsyncs...?09:53
claygacoles: I spent some time staring at the reconstructor code trying to convince myself why a _reconstruct_frag_iter would break early on the first frag more frequently than in the middle and couldn't see anything?09:53
claygi assumed we just had the one sample and it happened to pop on the first frag in the archive?09:53
kota_acoles: yeah09:54
claygonovy: correlation is not causation ?09:54
onovyclayg: i will stop expirer and try to upgrade another one node than :)09:55
acolesclayg: yeah. *hand waving*...maybe if a GET is going to timeout then it often will time out on first byte read???? but I think we have to assume not09:55
kota_acoles: currently, liberasurecode is doing 1. version check, 2. crc check for the metadata, and then if it's healthy tries to check the magic value.09:55
kota_2 is available >=1.2.009:56
acoleskota_: so to clarify, liberasurecode <1.2.0 skips the metadata check but will detect a corrupt magic value, liberasurecode >=1.2.0 will detect both corrupt metadata and bad magic?09:56
claygyeah xattr stats read stuff maybe is more likely to be in some filesystem location that's in the page cache than the first chunk read which drops at the bottom of a heavy io queue?  could be09:56
*** thebloggu has joined #openstack-swift09:56
kota_acoles: yes09:56
claygacoles: i'm so glad you're translating09:56
kota_acoles but one more thing, the bug is at 1. version check09:57
kota_acoles: https://review.openstack.org/#/c/387879/1/src/erasurecode.c09:58
patchbotpatch 387879 - liberasurecode - Fix liberasurecode skipping a bunch of invalid_arg...09:58
kota_if we hit corruption such that the version is negative or 0, the corruption check was skipped09:58
kota_right now09:59
kota_my patch 387879 handles the case where version <= 0 but it may be just a mitigation10:00
patchbothttps://review.openstack.org/#/c/387879/ - liberasurecode - Fix liberasurecode skipping a bunch of invalid_arg...10:00
kota_even in that case, we could at least check sanity with the magic value, anyway?10:00
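The ordering change kota_'s patch 387879 aims for can be sketched in illustrative Python (names here are made up for illustration, not liberasurecode's actual C API):

```python
# Sketch of the validation ordering discussed above: a nonsensical
# version field is itself evidence of corruption, rather than a reason
# to skip the remaining checks. Illustrative only.
def fragment_header_valid(version, magic_is_valid, header_crc_matches):
    if version <= 0:
        return False              # corrupt version field
    if not magic_is_valid:
        return False              # incompatible or corrupt magic
    return header_crc_matches     # header crc check (liberasurecode >= 1.2.0)
```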
*** mvk has quit IRC10:01
* kota_ is grubbing another cup of coffee10:04
acoleskota_: I am computing :)10:04
* acoles needs too10:04
acolescoffee*10:04
kota_back from coffee server10:13
openstackgerritKaren Chan proposed openstack/swift: Mirror X-Trans-Id to X-OpenStack-Request-Id  https://review.openstack.org/38735410:13
kota_deep_: looking at your explanation around sw_resp = sw_req.get_response(app)         if not sw_req.remote_user:             raise SignatureDoesNotMatch() -- from here the the s3curl error is returned.10:17
kota_maybe swift3 couldn't collect the user information from your keystone.10:18
kota_deep_: ah, wait I may be wrong.10:20
*** admin6 has quit IRC10:27
kota_hmmm.... not sure the intent for the remote_user because i forgot10:29
kota_deep_: what i can tell for now is maybe you need to set HTTP_REMOTE_USER in your request.10:30
*** sarcasticidiot has joined #openstack-swift10:30
sarcasticidiotHi guys, i'm currently at the final stage of setting up a swift cluster(1 keystone, 1 proxy, 2 storage nodes) for some testing but ran into an odd issue. Everything seems to work fine and I can create containers using the 'swift' client but on the keystone server 'openstack container list' returns nothing10:32
*** trananhkma has quit IRC10:32
kota_but i don't think swift3 is using the remote_user value everywhere.10:32
deep_kota_ : same code works with eventlet, as soon as i move to httpd it start failing10:35
deep_kota_ : HTTP_REMOTE_USER from where to set it, i am using steps from here http://docs.openstack.org/developer/swift/apache_deployment_guide.html10:36
kota_deep_: hmm... I don't have experience with apache but i think REMOTE_USER is defined by the client.10:37
kota_not sure eventlet has a default value if it is not defined.10:37
deep_kota_ : one more thing i am seeing is with eventlet, there is only PUT call but with httpd there PUT and GET call for bucket create10:41
kota_deep_: sounds weird10:42
kota_deep_: which resource for the GET call?10:43
deep_kota_ : if i comment out the following exception from S3Controller check_signature in file keystone/contrib/s3/core.py, bucket creation succeeds without any error.10:44
deep_        if not utils.auth_str_equal(credentials['signature'], signed):                 #print "-----------DEEBUG------string doesn't matched but not raising the exception--------------------"             raise exception.Unauthorized('Credential signature mismatch')10:44
deep_kota_ : same resource as of PUT10:45
kota_deep_: could you let me check your swift3 version?10:46
*** chlong has quit IRC10:46
onovyclayg: value of 'rsync' per-store node. store1.ko is upgraded one. http://paste.openstack.org/show/586156/10:47
deep_kota_ : i am using liberty stable10:47
deep_kota_ : i forgot to mention that all other operations like list bucket, upload object, download object work fine with this setup10:48
kota_swift3 is out of release management for openstack so we don't have liberty stable...10:49
kota_deep_: i'm now trying to do the request with remote_user in our functional10:50
kota_s/with/without/10:50
*** dmorita has joined #openstack-swift10:51
deep_kota_ : i think it is swift3 1.8, I see your commit as last check in 4469c131d43b9f46e75e1e0394705698872c1bcf Author: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp> Date:   Wed Nov 25 14:16:06 2015 -0800      Fix date validation10:55
acoleskota_: hi, here's how I see the liberasurecode<1.2.0 issue working out when we are auditing a bad frag...1. the bad frag may by chance appear to be version >=1.2.0, in which case the metadata check will happen and the header checksum is unlikely to match. Otherwise, the magic value is likely to be invalid. In the event that the magic value is by chance valid, then the bad frag will not be quarantined until liberasurecode10:56
acolesis upgraded. *But we will clean up most bad frags*10:56
acoleskota_: I do not think it is possible that we would quarantine a good frag, correct?10:57
kota_deep_: ah, probably 1.8 is much older. And i noticed i misread: the request failed when REMOTE_USER is defined10:57
acoleskota_: but we do need to use a more specific exception than ECDriverError.10:57
deep_kota_ :  For example if i sent following s3 request  ./s3curl.pl --id testuser2 --createBucket --acl public-read -- -s http://c3ces:8080/deepak24 --PUT request-- PUT   Tue, 18 Oct 2016 09:37:11 +0000 x-amz-acl:public-read /deepak24 --- For this PUT, keystone also get request for signature validation which succeeded After PUT there i see GET request  GET   Tue, 18 Oct 2016 09:37:11 +0000 x-amz-acl:public-read /deepak24 For this10:58
deep_Authorization failed. Credential signature mismatch10:58
kota_a lot of messages coming in :\10:58
acoleskota_: I will paste my comments to gerrit so you can read async :)10:59
kota_acoles: thanks!11:00
kota_(on swift3) hmmm.... interesting. Once i tried making REMOTE_USER have a value, that request doesn't fail. However, the proxy log shows the value you set11:01
kota_what happens????11:01
*** aswadr_ has joined #openstack-swift11:03
*** mvk has joined #openstack-swift11:04
kota_deep_: I didn't reach the reason yet, can you try the newest master or 1.11?11:04
kota_v1.8 is tagged at Sep 12 00:46:11 2014, so i don't have a clear memory of that 2-year-old release, and we have tons of various patches from those 2 years.11:06
acoleskota_: actually I commented on the bug https://bugs.launchpad.net/swift/+bug/16336411:09
openstackLaunchpad bug 163364 in fpm (Ubuntu) "fpm does not start after upgrade to gutsy" [Undecided,Invalid]11:09
acolesnot that one! this one: https://bugs.launchpad.net/swift/+bug/163364711:10
openstackLaunchpad bug 1633647 in OpenStack Object Storage (swift) "bad fragment data not detected in audit" [High,Confirmed]11:10
kota_acoles: correct. we may fail to quarantine a bad frag but won't quarantine a good frag, unless we catch the error as broadly as ECDriverError.11:10
kota_IIRC11:10
acoleskota_: yes11:10
kota_and back to which specific error is good to catch ;-)11:11
kota_or specific errors are11:12
kota_https://github.com/openstack/pyeclib/blob/master/pyeclib/ec_iface.py#L450-L49611:13
kota_available errors on pyeclib11:13
kota_error classes11:13
kota_ah, one more option exists, ECBadFragmentChecksum11:14
kota_maybe "except (ECInvalidFragmentMetadata, ECBadFragmentChecksum, ECInvalidParameter)" is good?11:15
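In the auditor, the narrower catch kota_ suggests could be sketched like this; stand-in exception classes are defined locally so the sketch runs on its own, the real ones come from pyeclib.ec_iface, and `get_metadata` stands in for the policy's pyeclib driver call:

```python
# Stand-ins for pyeclib.ec_iface exceptions, defined locally so this
# sketch is self-contained; in Swift they would be imported from pyeclib.
class ECInvalidFragmentMetadata(Exception): pass
class ECBadFragmentChecksum(Exception): pass
class ECInvalidParameter(Exception): pass

def frag_is_corrupt(get_metadata, frag):
    """Return True if the fragment's metadata checks flag corruption.

    Catching only these three errors avoids quarantining frags on
    unrelated driver failures (e.g. backend not available), which a
    blanket `except ECDriverError` would swallow.
    """
    try:
        get_metadata(frag)
        return False
    except (ECInvalidFragmentMetadata, ECBadFragmentChecksum,
            ECInvalidParameter):
        return True
```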
acoleskota_: yup, thanks for the pointers11:15
acoleskota_: I'll try to work some more on the patch today but I have other stuff on so may not make huge progress11:16
kota_k11:16
kota_and I also have to work to make sure11:17
kota_when ECInvalidParameter can be raised11:17
acoleskota_: and you also have to sleep!11:17
kota_that error sounds like a caller error, so if we call get_metadata with *Invalid Args* we might quarantine good frags?11:18
kota_acoles: thanks but it's just around 8:20 p.m.11:18
kota_it's good time to be back home and have dinner though :\11:18
acolesright!11:19
kota_hmm.... get_metadata(None) can trigger ECInvalidParameter11:21
kota_just a possibility though, but we could turn it into a god of destruction with a miscoding?11:22
kota_that would be pain...11:22
kota_Am i worried too much?11:23
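One way to address that worry: validate the auditor's own arguments before the driver call, so an ECInvalidParameter can be trusted to mean corrupt data rather than a miscoded caller. A hypothetical helper, not Swift code:

```python
def checked_get_metadata(get_metadata, frag):
    # Guard against the miscoding worried about above: if the auditor
    # itself passes garbage (None, empty, wrong type), fail loudly as a
    # bug instead of letting ECInvalidParameter quarantine the frag.
    if not isinstance(frag, bytes) or not frag:
        raise TypeError('auditor bug: frag must be non-empty bytes, got %r'
                        % (frag,))
    return get_metadata(frag)
```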
*** kei_yama has quit IRC11:30
kota_k, i need a fresh head to consider. let's head back home11:30
kota_acoles: thanks for making the note on launchpad bug report.11:30
*** ppai has quit IRC11:35
deep_kota_ : http://paste.openstack.org/show/586164/, i will give a try with 1.11.11:46
*** ppai has joined #openstack-swift11:47
*** zul has quit IRC11:51
*** deep_ has quit IRC11:56
*** admin6 has joined #openstack-swift11:59
*** zhengyin has quit IRC12:01
*** zhengyin has joined #openstack-swift12:02
*** qwertyco has joined #openstack-swift12:04
onovyclayg: https://review.openstack.org/#/c/293177/ // can this be related?12:06
patchbotpatch 293177 - swift - Auditor will clean up stale rsync tempfiles (MERGED)12:06
onovyclayg: guess: i have stale rsync tempfiles on the not-upgraded nodes, the upgrade removes them, non-upgraded nodes rsync them back12:06
*** zul has joined #openstack-swift12:06
onovyclayg: https://review.openstack.org/#/c/292661/ // and this is NOT in 2.7.0, so maybe it's related too? :)12:07
patchbotpatch 292661 - swift - Make rsync ignore it's own temporary files (MERGED)12:07
*** dmorita has quit IRC12:09
admin6acoles, kota_: Hi, I saw you talked about the examples you have. Are you interested in some more examples of corrupted fragments? So far I can provide you around 20 of them.12:24
*** cdelatte has joined #openstack-swift12:29
*** ppai has quit IRC12:30
*** SkyRocknRoll has quit IRC12:32
openstackgerritMerged openstack/liberasurecode: Fix a typo in the erasurecode file  https://review.openstack.org/36263812:36
*** amoralej is now known as amoralej|lunch12:43
*** deep has joined #openstack-swift12:47
acolesadmin6: yes, thanks, tar via email works for me12:54
acolesadmin6: btw we are working on an auditor patch to find and quarantine corrupt frags https://bugs.launchpad.net/swift/+bug/163364712:55
openstackLaunchpad bug 1633647 in OpenStack Object Storage (swift) "bad fragment data not detected in audit" [High,Confirmed]12:55
*** x1fhh9zh has joined #openstack-swift12:56
admin6acoles: yes I saw, Thanks. :-) Do you also plan to backport it for mitaka?13:00
*** abhinavtechie has joined #openstack-swift13:04
*** abhitechie has quit IRC13:05
*** StraubTW has joined #openstack-swift13:07
*** takashi has quit IRC13:08
acolesadmin6: hopefully yes. I would like that!13:08
tdasilvagood morning13:15
acolestdasilva: good morning13:20
*** psachin` has quit IRC13:20
*** sgundur has joined #openstack-swift13:43
*** amoralej|lunch is now known as amoralej13:44
*** sgundur has quit IRC13:49
admin6acoles: I just sent you the examples by email.13:53
*** sgundur has joined #openstack-swift13:54
*** ChanServ sets mode: +v tdasilva13:54
acolesadmin6: thanks13:57
*** mvk has quit IRC13:58
openstackgerritStefan Majewsky proposed openstack/swift: swift-recon-cron: do not get confused by files in /srv/node  https://review.openstack.org/38802914:00
openstackgerritStefan Majewsky proposed openstack/swift: swift-recon-cron: do not get confused by files in /srv/node  https://review.openstack.org/38802914:03
*** sarcasticidiot has quit IRC14:05
-openstackstatus- NOTICE: We are aware of pycparser failures in the gate and working to address the issue.14:06
*** abhinavtechie has quit IRC14:10
*** x1fhh9zh has quit IRC14:10
*** ntata_ has joined #openstack-swift14:10
*** ntata_ has quit IRC14:15
*** ntata_ has joined #openstack-swift14:16
*** qwertyco has quit IRC14:17
*** ntata_ has quit IRC14:18
*** ntata_ has joined #openstack-swift14:18
*** hoonetorg has quit IRC14:28
*** ntata_ has quit IRC14:33
*** ntata_ has joined #openstack-swift14:35
*** ntata_ has quit IRC14:39
*** CaioBrentano has joined #openstack-swift14:41
*** CaioBrentano has quit IRC14:41
*** CaioBrentano has joined #openstack-swift14:41
*** psachin` has joined #openstack-swift14:49
*** psachin` has quit IRC14:54
*** cppforlife_ has quit IRC14:55
*** cppforlife_ has joined #openstack-swift14:59
*** sgundur has quit IRC15:04
*** mvk has joined #openstack-swift15:06
*** cppforlife_ has quit IRC15:08
*** cppforlife_ has joined #openstack-swift15:11
*** klrmn has joined #openstack-swift15:15
*** sheel has quit IRC15:20
*** cshastri has quit IRC15:24
*** sgundur has joined #openstack-swift15:47
*** geaaru has quit IRC15:48
*** ChubYann has joined #openstack-swift15:49
*** admin6 has quit IRC16:01
*** admin6_ has joined #openstack-swift16:02
*** admin6_ has quit IRC16:02
*** admin6 has joined #openstack-swift16:03
* briancline shakes fist at pycparser16:04
*** acoles is now known as acoles_16:07
-openstackstatus- NOTICE: pycparser 2.16 released to fix assertion error from today.16:12
*** Guest85855 has quit IRC16:25
*** hseipp has quit IRC16:25
*** rledisez has quit IRC16:28
notmynamehello world16:35
openstackgerritAlistair Coles proposed openstack/swift: WIP: Make ECDiskFileReader check fragment metadata  https://review.openstack.org/38765516:39
*** tqtran has joined #openstack-swift16:42
*** acoles_ is now known as acoles16:42
*** acoles is now known as acoles_16:46
*** acoles_ is now known as acoles16:46
acolesclayg: kota_ ^^ I didn't make much progress today, sorry. Fixed the failing app iter tests in test_diskfile.py.16:49
*** rcernin has quit IRC16:51
*** diogogmt has joined #openstack-swift16:53
*** sheel has joined #openstack-swift16:53
*** mohitmotiani has joined #openstack-swift16:54
*** abhitechie has joined #openstack-swift16:56
*** joeljwright has quit IRC16:57
*** acoles is now known as acoles_16:57
*** mohitmotiani has quit IRC17:01
*** manous has joined #openstack-swift17:02
notmynameI need to push the 2 fishbowl sessions to the summit calendar asap. after that I can work on grouping the other topics for the rest of the schedule for the working sessions17:04
*** mohitmotiani has joined #openstack-swift17:04
*** mohitmotiani has quit IRC17:05
*** mohitmotiani has joined #openstack-swift17:06
*** mohitmotiani has quit IRC17:08
*** mmotiani_ has joined #openstack-swift17:08
*** mmotiani_ has quit IRC17:10
claygmorning17:21
claygacoles_: oh wow did you really!?  did the pattern I was using work for you +-?17:21
*** chsc has joined #openstack-swift17:22
*** abhitechie has quit IRC17:22
openstackgerritShashirekha Gundur proposed openstack/swift: NIT: test_crossdomain_get_only  https://review.openstack.org/38814217:27
*** ntata has quit IRC17:33
*** alpha_ori has quit IRC17:33
*** MooingLemur has quit IRC17:33
*** jeblair has quit IRC17:33
*** ahale_ has quit IRC17:33
*** jistr has quit IRC17:33
*** cargonza has quit IRC17:33
*** urth has quit IRC17:33
*** Anticimex has quit IRC17:33
*** DuncanT has quit IRC17:33
*** mlanner has quit IRC17:33
*** vern has quit IRC17:33
*** ujjain has quit IRC17:33
*** Guest66666 has quit IRC17:33
*** blair has quit IRC17:33
*** EmilienM has quit IRC17:33
*** timburke has quit IRC17:33
*** kencjohnston has quit IRC17:33
*** notmyname has quit IRC17:33
*** acoles_ has quit IRC17:33
*** fbo has quit IRC17:33
*** madorn has quit IRC17:33
*** jroll has quit IRC17:33
*** briancline has quit IRC17:33
*** tonyb has quit IRC17:33
*** briancli1e has joined #openstack-swift17:33
*** ujjain- has joined #openstack-swift17:33
*** Guest66666 has joined #openstack-swift17:33
*** Anticimex has joined #openstack-swift17:33
*** alpha_ori has joined #openstack-swift17:33
*** tonyb has joined #openstack-swift17:33
*** ahale has joined #openstack-swift17:33
*** kencjohnston has joined #openstack-swift17:33
*** jeblair has joined #openstack-swift17:33
*** MooingLemur has joined #openstack-swift17:33
*** MooingLemur has quit IRC17:33
*** MooingLemur has joined #openstack-swift17:33
*** urth has joined #openstack-swift17:34
*** timburke has joined #openstack-swift17:34
*** vern has joined #openstack-swift17:34
*** ChanServ sets mode: +v timburke17:34
*** mlanner has joined #openstack-swift17:34
*** jistr has joined #openstack-swift17:34
*** notmyname has joined #openstack-swift17:34
*** ChanServ sets mode: +v notmyname17:34
*** EmilienM has joined #openstack-swift17:34
*** jroll has joined #openstack-swift17:34
*** madorn has joined #openstack-swift17:35
*** EmilienM has quit IRC17:35
*** EmilienM has joined #openstack-swift17:35
*** ntata has joined #openstack-swift17:35
*** acoles_ has joined #openstack-swift17:36
*** acoles_ is now known as acoles17:36
*** ChanServ sets mode: +v acoles17:36
tdasilvanotmyname:17:37
tdasilvanotmyname: hello17:37
*** fbo has joined #openstack-swift17:37
*** sgundur has quit IRC17:37
notmyname17:37
notmynamehello17:37
tdasilvathe devops sessions won't be in one of the fishbowl sessions this time around, correct?17:38
notmynamedevops sessions?17:38
notmynameour regular ops feedback session?17:38
tdasilvasorry, yeah, ops feedback17:38
notmynameyou and your paradigm-shifting synergies with your devops and agile methods17:39
claygoh, no it looks like you didn't use the assertBodyEqual at all, I think your way is better maybe17:39
notmynametdasilva: 2:!5 on tuesday is an ops session for swift. I'll be moderating that17:39
notmynamehttps://etherpad.openstack.org/p/BCN-ops-swift17:39
notmynamewhich means that we may not need to do another during our own fishbowl sessions (thursday morning)17:40
notmynamehttps://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17353/ops-swift-feedback17:40
*** sgundur has joined #openstack-swift17:40
*** DuncanT has joined #openstack-swift17:41
claygtdasilva: I sorta like calling it swift devops now that you point it out :D17:41
notmynameand I'd love any other things added to that etherpad17:41
notmynameso here's the question for us: do we have another ops session or not?17:42
claygalways moar ops17:42
tdasilvaheh17:43
clayglet's just have a session where onovy preaches at us for 30 mins and then we ask questions17:43
*** cargonza has joined #openstack-swift17:44
notmynameI think we'd be just crying at the end. or did you mean ask questions like "why is everything still so terrible?" ;-)17:44
*** mvk has quit IRC17:44
tdasilvaclayg: talking about ops, i just realized that in our installation docs, we don't talk about where to run the object-expirer. i've heard arguments for running on the proxies and others argue to run on the data storage nodes. what's the official recommendation?17:45
*** deep has quit IRC17:45
claygtdasilva: i officially recommend you run it on the object servers - because - absolutely no good reason17:47
claygexcept that's what I do and it doesn't seem to cause me any grief17:47
claygi'm in to keeping with doing things that aren't causing problems17:47
claygi have plenty of stuff I'm actively trying to *stop* doing because of all the problems17:48
clayg*plus* - it has object in the name18:48
notmynameok, I'll put the ops session (part dos) and community/dev feedback as fishbowl sessions for thursday morning17:50
tdasilvaclayg: plus, if people were to use the processes/process options, i think it would be easier to use on storage nodes, rather than proxy nodes17:51
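(Editor's note: the `processes`/`process` options tdasilva mentions are the object-expirer's mechanism for splitting its work across several coordinated daemons. A minimal object-expirer.conf fragment, assuming a two-daemon split; the specific values are illustrative:)

```ini
[object-expirer]
# divide the expirer's work across two coordinated daemons;
# each daemon gets a distinct 0-based `process` index, which
# must be less than `processes`
processes = 2
process = 0
```

The second daemon runs the same config with `process = 1`; each handles a disjoint slice of the expiration queue.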
notmynamethen we have a break and then at 11am thursday through 6pm friday are the rest of the swift working sessions17:51
tdasilvanotmyname: i think that's how austin went too, right? with the fishbowl sessions?17:52
notmynametdasilva: yep, pretty much17:53
jrichli+117:55
notmynameok, those are updated17:59
*** manous has quit IRC18:04
*** hseipp has joined #openstack-swift18:12
claygnice18:13
*** lcurtis has joined #openstack-swift18:17
*** manous has joined #openstack-swift18:17
claygsurely we have a test helper that takes plaintext data as a string and turns it into a list of encoded frag_archives - where is it?18:31
claygok, new question - where *should* it be ;)  https://github.com/openstack/swift/blob/a79d8508df493d5744b262e1d1830782e32dbd04/test/unit/proxy/controllers/test_obj.py#L220618:32
*** amoralej is now known as amoralej|off18:36
claygtest.unit.encode_frag_archive_bodies(policy, body) it will be forever more18:39
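(Editor's note: what such a helper has to do is mechanical - segment the body, EC-encode each segment, and transpose fragment i of every segment into frag archive i. A rough sketch of that shape, with a toy splitter standing in for the real pyeclib encoder; the simplified signature and `toy_encode` are illustrative, not Swift's actual test API:)

```python
def encode_frag_archive_bodies(segment_size, body, encode):
    """Split *body* into segments, encode each segment into fragments,
    and concatenate fragment i of every segment into frag archive i."""
    segments = [body[i:i + segment_size]
                for i in range(0, len(body), segment_size)]
    # encode(segment) -> list of fragment byte strings, one per archive
    fragment_payloads = [encode(seg) for seg in segments]
    # transpose: archive i = fragment i of seg 0 + fragment i of seg 1 + ...
    return [b''.join(frags) for frags in zip(*fragment_payloads)]


def toy_encode(seg, n=3):
    """Toy "codec": chop a segment into n pieces (some possibly empty).
    The real Swift helper calls policy.pyeclib_driver.encode() here."""
    step = -(-len(seg) // n)  # ceiling division
    return [seg[i * step:(i + 1) * step] for i in range(n)]
```

The transpose via `zip(*fragment_payloads)` is the whole trick: it turns per-segment fragment lists into per-archive byte streams.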
*** aswadr_ has quit IRC18:39
*** eranrom has quit IRC18:43
*** thebloggu has quit IRC18:46
clayg*why* is it policy.*ec_*segment_size and not just policy.segment_size - which other segment size is it going to be?18:49
*** thebloggu has joined #openstack-swift18:50
claygif we ever have a thing that is just a plain "segment_size" but is not policy.ec_segment_size I would probably want to stab someone in the face18:50
*** thebloggu has quit IRC18:50
*** CaioBrentano has quit IRC18:50
claygi guess eventually we'll run out of synonyms for chunk18:51
*** CaioBrentano has joined #openstack-swift18:51
clayg... oh maybe not - lump, hunk, block (too disk-y), slab (sorta memcach-y), nugget, dollop18:52
claygi vote we use dollop_size for the next thing we have to split into bits18:52
*** CaioBrentano has quit IRC18:52
claygoh... i guess we have slo_segment_size :'(18:53
claygGD!18:53
*** CaioBrentano has joined #openstack-swift18:55
*** CaioBrentano has quit IRC18:59
*** sgundur has quit IRC19:00
*** CaioBrentano has joined #openstack-swift19:00
*** sgundur has joined #openstack-swift19:01
jrichliclayg: and then at some point you might have to ask things like : does the slo_segment_size for a sub-slo segment equal the sum of the slo_segment_sizes of its segments?  or the size of the sub-slo manifest?19:01
*** CaioBrentano has quit IRC19:01
*** thebloggu has joined #openstack-swift19:01
*** CaioBrentano has joined #openstack-swift19:03
*** CaioBrentano has quit IRC19:04
*** CaioBrentano has joined #openstack-swift19:05
clayga'ight object server I've got you in my sights^Wfailing unittest now19:10
claygjrichli: no one would ask that - their head would explode19:11
*** CaioBrentano has quit IRC19:11
jrichlilol19:12
*** CaioBrentano has joined #openstack-swift19:16
*** thebloggu has quit IRC19:20
openstackgerritShashirekha Gundur proposed openstack/swift: Invalidate cached tokens api  https://review.openstack.org/37031919:21
claygwow, so i don't think there is really any graceful way to tell eventlet.wsgi you're not going to offer up all the bytes you promised19:28
openstackgerritThiago da Silva proposed openstack/swift: added expirer service to list  https://review.openstack.org/38818519:30
zaitcevraise ValueError19:31
zaitcev^_^19:31
notmynameopenstack user survey stuff has been published https://www.openstack.org/assets/survey/October2016SurveyReport.pdf19:31
notmynamealong with a new site to interactively explore it https://www.openstack.org/analytics19:31
clayg"yeah you know that contract we agreed to - sorry not happenin, go ahead and shut your stuff down, sorry bro"19:32
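(Editor's note: the problem clayg describes isn't really eventlet-specific. Once the server has written the status line and Content-Length header, an app_iter that can't deliver the remaining bytes has no in-band escape hatch - raising mid-iteration, as zaitcev suggests, just forces the server to sever the connection short. A minimal stand-in generator showing the shape of the failure:)

```python
def body_iter(chunks, fail_after):
    """A WSGI-style app_iter that dies partway through the response body.

    By the time the failure hits, the status and Content-Length are
    already on the wire, so the server's only option is to drop the
    connection - there is no way to say "ignore what I promised".
    """
    sent = 0
    for i, chunk in enumerate(chunks):
        if i == fail_after:
            raise ValueError('backend went away after %d bytes' % sent)
        sent += len(chunk)
        yield chunk
```

A consumer sees some prefix of the promised body and then a hard failure, which is exactly the "contract we agreed to - sorry" situation.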
zaitcevOh god, that NPS19:33
notmynamezaitcev: on a scale of 1-10 how likely would you be to recommend openstack to a friend19:34
notmynameseems like a really weird question for (1) infrastructure software (2) open source software (3) something that is *not* a product19:35
zaitcevIt's like, you know, in the last few years unemployment decreased and labor participation collapsed in the U.S. So we now have excellent employment numbers and tons of jobless who gave up, went hobo, or mooch off family.19:35
zaitcevSo yeah, those who are in (job market or OpenStack), they are happy.19:35
*** manous has quit IRC19:35
zaitcevNPS does not tell you what's going outside though.19:36
zaitcevYou know who has the highest NPS? BeOS users.19:36
*** silor has joined #openstack-swift19:37
pdardeauwow, there really is question "how likely are you to recommend OpenStack?"19:38
pdardeaui thought notmyname was just making a funny...19:38
zaitcevWell, it does have a certain merit. Here's a counter-example. A year ago I thought it possible that Swift was over, and went to work on Ceph. Specifically, Ceph's object storage, called "Rados Gateway" (RGW).19:40
zaitcevAfter spending a few months on RGW and learning some ropes, as well as posting a few patches of various degrees of complexity, I ended up liking it less than before I knew what was inside.19:41
zaitcevSage is awesome though19:41
*** hseipp has quit IRC19:43
zaitcevSo I guess what the Foundation is trying to find out is if users install OpenStack testbed, then say "god I can't run my business on this crock of shit, let's buy some Eucalyptus"19:43
*** silor has quit IRC19:45
*** silor has joined #openstack-swift19:46
claygno one buys Eucalyptus20:00
claygzaitcev: that's nice of you to say that you think Swift is over tho!  One less thing for me to worry about.20:00
openstackgerritThiago da Silva proposed openstack/swift: update urls to newton  https://review.openstack.org/38819620:02
clayg>50% unmodified packages from the OS!20:04
claygi had no idea20:04
onovyclayg: session only for me? thanks!20:04
claygoh gee, chef is a loser :'(20:05
claygoh i see - the sample size is not the complete set of respondents20:05
claygi was freaking out when ~40% was using k8s - but it's more like 40% of the 10% that had any answer for containers20:06
onovyclayg: tried to shutdown that upgraded node. rsync metric: https://s21.postimg.org/7980bdn9z/graph4.png20:06
onovythat spike is just after node shutdown20:07
clayg61% reporting fewer than 1000 objects stored tells me we need a more baked in solution for account level rollup20:08
*** silor1 has joined #openstack-swift20:09
mattoliverauMorning20:10
*** silor has quit IRC20:11
*** silor1 is now known as silor20:11
*** _JZ_ has joined #openstack-swift20:14
*** cdelatte has quit IRC20:17
notmynameclayg: how do you mean? are you thinking that people are reporting <1000 because they don't have an aggregate number provided by swift somewhere?20:20
*** _JZ_ has quit IRC20:29
*** sgundur has quit IRC20:32
*** sgundur has joined #openstack-swift20:33
onovyclayg: any idea what can i check? :(20:36
tdasilvamattoliverau: o/20:36
*** sheel has quit IRC20:40
*** portante has quit IRC20:40
*** ndk_ has quit IRC20:40
*** silor has quit IRC20:42
*** ndk_ has joined #openstack-swift20:42
*** portante has joined #openstack-swift20:42
*** cdelatte has joined #openstack-swift20:43
*** sgundur has quit IRC20:59
*** AndyWojo has quit IRC21:15
*** CrackerJackMack has quit IRC21:16
*** AndyWojo has joined #openstack-swift21:16
*** oxinabox has quit IRC21:17
*** wasmum has quit IRC21:17
*** philipw has quit IRC21:17
*** philipw has joined #openstack-swift21:17
*** kencjohnston has quit IRC21:17
*** kencjohnston has joined #openstack-swift21:19
*** CrackerJackMack has joined #openstack-swift21:20
*** mvk has joined #openstack-swift21:23
*** wasmum has joined #openstack-swift21:23
*** joeljwright has joined #openstack-swift21:24
*** ChanServ sets mode: +v joeljwright21:24
*** itlinux has joined #openstack-swift21:30
*** StraubTW has quit IRC21:33
*** Jeffrey4l has quit IRC21:34
*** Jeffrey4l has joined #openstack-swift21:35
*** hoonetorg has joined #openstack-swift21:40
*** clu_ has joined #openstack-swift21:45
openstackgerritNandini Tata proposed openstack/swift: Fix policy and ring usage from --swift-dir option  https://review.openstack.org/38823121:48
openstackgerritNandini Tata proposed openstack/swift: Fix policy and ring usage from --swift-dir option  https://review.openstack.org/38823121:52
*** cdelatte has quit IRC21:53
*** ntata_ has joined #openstack-swift22:07
*** ntata_ has quit IRC22:10
claygonovy: so right *after* a reboot it spiked but then it went down?22:22
*** rjaiswal has joined #openstack-swift22:28
claygnotmyname: yeah something like that22:35
notmynameclayg: we've talked about that or something similar before. I wonder if we could add something to admin /info22:36
claygnotmyname: i don't see any reason it would have to go in that namespace22:37
notmynameno, me neither, except it's already a place where we can put stuff of interest to those querying the cluster22:38
claygnotmyname: well I just assume the api would match an account listing but one level higher22:39
claygyou'd have total-objects - objects-in-policy-X - a list of all the (account, policy) rows with their stats22:39
claygor something, idk22:39
notmynamethere's not a natural one-level-higher place to put it in the current api scheme, is there? we use /healthcheck and /info today, with /v1/* being for user data22:40
claygall I remember is that no one has >10M accounts and we could totally have an account-updater that sends to an account db in a dot-account - it would totally be a thing22:41
notmynameyeah22:41
claygnotmyname: agree, it'd be one-offed - /cluster or /utilization or /storage or something cute like that22:41
claygbasically rewriting it to a /v1/.internal account level request with whatever reseller_admin_super_god_user roles are needed22:42
notmynameah yeah. I see what you're getting at. I wasn't thinking, initially, of that much info. but yeah, if it's got a list of all the accounts and a lot of stats, then it doesn't make sense to put it under /info22:42
notmynameright22:42
notmynametotally would be a thing someone could do22:42
claygtotally22:42
notmynameon that note, /me remembers to respond to karenc22:43
claygbut I think maybe our usage survey results are suffering from a lack of such a thing22:43
claygbasically *everyone* who stands up swift notices that "roll your own usage" solution is annoying22:43
notmynamedefinitely. also they're suffering from a cpu-cores-focused understanding of cloud and a very tedious survey in which to enter the info22:44
claygi don't know how to solve log/request/bandwidth processing in the general sense (it's really a non-trivial problem) - but trolling the account db's for bytes and object counts was always the easy part22:44
claygthe container updater does basically exactly the thing we want22:44
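(Editor's note: the "easy part" is easy because Swift account DBs are plain SQLite files whose `account_stat` table already holds rolled-up `object_count` and `bytes_used`. A sketch against a throwaway in-memory DB with just those columns - the real schema has many more columns, and each real account DB holds a single `account_stat` row for its own account, so a rollup would iterate over the DB files:)

```python
import sqlite3

# throwaway in-memory stand-in for a set of Swift account DBs,
# reduced to the columns the rollup actually needs
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE account_stat '
             '(account TEXT, object_count INTEGER, bytes_used INTEGER)')
conn.executemany('INSERT INTO account_stat VALUES (?, ?, ?)',
                 [('AUTH_alice', 120, 5 * 2 ** 20),
                  ('AUTH_bob', 3, 42)])

# the cluster-wide rollup is a single aggregate query
total_objects, total_bytes = conn.execute(
    'SELECT SUM(object_count), SUM(bytes_used) FROM account_stat'
).fetchone()
```

The hard part clayg flags - per-request bandwidth and log processing - has no such pre-aggregated table to lean on.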
*** joeljwright has quit IRC22:46
*** _JZ_ has joined #openstack-swift22:50
*** _JZ_ has quit IRC22:50
*** _JZ_ has joined #openstack-swift22:52
openstackgerritNandini Tata proposed openstack/swift: Fix policy and ring usage from --swift-dir option  https://review.openstack.org/38823122:57
*** klrmn has quit IRC23:29
claygwhen using the context manager form of assertRaises the context object provided attaches the raised exception as an attribute on the context23:31
clayg... but for some reason I can *never* remember what is the *name* of the attribute!?23:31
claygI always want it to be... like "e" or "err" or "exc" or something cute - but it's just "exception"23:31
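(Editor's note: for the record, the attribute really is just `exception`:)

```python
import unittest


class TestExample(unittest.TestCase):
    def test_attribute_is_called_exception(self):
        # the raised exception lands on the context object as
        # ``.exception`` - not .e, .err, or .exc
        with self.assertRaises(ValueError) as ctx:
            int('not a number')
        self.assertIn('not a number', str(ctx.exception))
```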
*** diogogmt has quit IRC23:32
*** kei_yama has joined #openstack-swift23:39
*** chsc has quit IRC23:39
*** klrmn has joined #openstack-swift23:52

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!