Wednesday, 2017-07-19

*** ^andrea^ has quit IRC00:06
*** caiobrentano has joined #openstack-swift00:06
*** frank_young has joined #openstack-swift00:15
*** frank_young has quit IRC00:20
*** lucasxu has quit IRC00:22
*** lucasxu has joined #openstack-swift00:35
*** xrb has quit IRC00:45
*** kiennt has joined #openstack-swift00:49
*** xrb has joined #openstack-swift00:52
*** tovin07_ has joined #openstack-swift00:52
*** lucasxu has quit IRC00:55
*** ukaynar_ has quit IRC01:09
*** JimCheung has quit IRC01:15
*** caiobrentano has quit IRC01:15
openstackgerrit: junbo.li proposed openstack/swift master: [Api-ref] fix response status  https://review.openstack.org/484254  01:18
*** frank_young has joined #openstack-swift01:23
*** JimCheung has joined #openstack-swift01:30
*** vikram has joined #openstack-swift01:30
*** JimCheung has quit IRC01:35
*** caiobrentano has joined #openstack-swift01:43
*** caiobrentano has quit IRC01:43
*** RayLei has joined #openstack-swift01:44
*** JimCheung has joined #openstack-swift01:47
*** JimCheung has quit IRC01:51
kota_: good morning  02:10
kota_: timburke: nice, go code!!  02:14
*** gyee has quit IRC02:17
kota_: notmyname: so far the script has reached k=6, m=20. To make it faster I need more compute resources: more servers or many-core CPUs. It doesn't use much memory, so only CPU cores are needed to run it in parallel.  02:22
kota_: I think even timburke's version needs a many-core machine to run quickly  02:23
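For a sense of why more cores help, the number of k-column combinations the checker has to test grows combinatorially with k and m. A quick, self-contained illustration (plain Python, not part of the actual checker; the (k, m) pairs are just examples):

```python
# Illustration only: count how many k-out-of-(k+m) column combinations a
# checker like kota_'s has to test for a few (k, m) pairs.
from math import factorial

def n_combinations(k, m):
    n = k + m
    return factorial(n) // (factorial(k) * factorial(n - k))

for k, m in [(6, 20), (10, 14), (13, 13)]:
    print('k=%d, m=%d -> %d submatrices to check' % (k, m, n_combinations(k, m)))
```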
*** lucasxu has joined #openstack-swift02:30
*** caiobrentano has joined #openstack-swift02:49
*** lucasxu has quit IRC02:53
*** caiobrentano has quit IRC02:55
*** JimCheung has joined #openstack-swift02:57
*** JimCheung has quit IRC03:02
*** ukaynar has joined #openstack-swift03:03
*** SkyRocknRoll has joined #openstack-swift03:14
*** lucasxu has joined #openstack-swift03:17
*** gkadam has joined #openstack-swift03:53
*** gkadam has quit IRC03:53
*** gkadam has joined #openstack-swift03:53
*** hieulq has quit IRC03:57
*** hieulq has joined #openstack-swift03:57
*** links has joined #openstack-swift03:58
*** lucasxu has quit IRC04:02
*** Dinesh_Bhor has joined #openstack-swift04:11
*** Dinesh_Bhor has quit IRC04:14
*** Dinesh_Bhor has joined #openstack-swift04:15
*** psachin has joined #openstack-swift04:21
*** RayLei has quit IRC04:25
*** frank_young has quit IRC04:26
*** frank_young has joined #openstack-swift04:36
*** frank_young has quit IRC04:41
*** ukaynar_ has joined #openstack-swift05:14
*** Dinesh_Bhor has quit IRC05:17
*** ukaynar has quit IRC05:17
*** ukaynar_ has quit IRC05:29
*** ukaynar has joined #openstack-swift05:29
*** ukaynar has quit IRC05:34
*** tonanhngo has quit IRC05:34
*** kiennt has quit IRC05:42
*** frank_young has joined #openstack-swift05:48
*** rcernin has joined #openstack-swift05:50
*** cshastri has joined #openstack-swift05:58
*** kiennt has joined #openstack-swift06:02
openstackgerrit: junbo.li proposed openstack/swift master: Enabling off-by-default checks  https://review.openstack.org/474562  06:05
*** SkyRocknRoll has quit IRC06:15
*** kiennt has quit IRC06:21
*** Dinesh_Bhor has joined #openstack-swift06:50
*** Dinesh_Bhor has quit IRC06:50
*** tonanhngo has joined #openstack-swift07:01
*** tonanhngo has quit IRC07:04
*** cschwede_ has joined #openstack-swift07:11
*** itlinux has quit IRC07:11
*** tesseract has joined #openstack-swift07:17
*** klrmn has quit IRC07:22
*** oshritf has joined #openstack-swift07:26
ntt: Hi, I have a Swift cluster with 3 zones on 3 different servers (with replica=3). One of my servers went down and now I have to restart it after 10 days of inactivity. When I try to restart rsyncd or any other service (with swift-init) it simply hangs, with no error messages and no kernel panic. It looks to me like a swap problem, but I'm not sure, because when it hangs I completely lose access to the machine, even on the local physical console.  07:29
ntt: Can someone please help me? Thank you  07:29
*** catintheroof has joined #openstack-swift07:31
*** SkyRocknRoll has joined #openstack-swift07:35
*** catintheroof has quit IRC07:36
*** kiennt has joined #openstack-swift07:40
*** cbartz has joined #openstack-swift07:46
*** openstackgerrit has quit IRC08:49
*** tovin07_ has quit IRC08:50
*** mvk has joined #openstack-swift09:50
*** RayLei has joined #openstack-swift10:16
RayLei: If Swift has two regions, how can Swift promptly detect that one region is unavailable?  10:17
*** Dinesh_Bhor has joined #openstack-swift10:27
*** RayLei has quit IRC10:33
*** kiennt has quit IRC10:45
*** tonanhngo has joined #openstack-swift11:03
*** frank_young has quit IRC11:04
*** tonanhngo has quit IRC11:05
*** frank_young has joined #openstack-swift11:09
*** chlong_ has quit IRC11:43
*** caiobrentano has joined #openstack-swift11:47
*** hoonetorg has joined #openstack-swift12:09
*** catintheroof has joined #openstack-swift12:21
*** frank_young has quit IRC12:24
*** gkadam has quit IRC12:24
*** caiobrentano has joined #openstack-swift12:34
*** caiobrentano has quit IRC12:34
*** MVenesio has joined #openstack-swift12:37
*** RayLei has joined #openstack-swift12:40
RayLei: Can middleware see all of the object operations sent to the proxy?  12:40
RayLei: Let me rephrase: can middleware see the responses from all of the object servers to the proxy?  12:42
*** caiobrentano has joined #openstack-swift12:46
*** RayLei has quit IRC12:53
*** frank_young has joined #openstack-swift12:54
acoles: RayLei: if you mean individual backend object server responses to the proxy, no, they are not visible to middleware  12:55
*** frank_young has quit IRC12:59
*** lucasxu has joined #openstack-swift13:06
*** ukaynar has joined #openstack-swift13:35
*** vint_bra has joined #openstack-swift13:36
ntt: Hi, I partially solved the problem of restarting a server after a long downtime (10 days). Almost everything is resynchronized, but I still see some errors:  13:38
ntt: Jul 19 15:36:18 r1z1 object-replicator: ERROR __call__ error with REPLICATE /sde1/918 :
       Traceback (most recent call last):
         File "/usr/lib/python2.7/site-packages/swift/obj/server.py", line 904, in __call__
           res = method(req)
         File "/usr/lib/python2.7/site-packages/swift/common/utils.py", line 2648, in wrapped
           return func(*a, **kw)
         File "/usr/lib/python2.7/site-packages/swift/common/utils.py", line 1205, in _timing_stats
           resp = func(ctrl, *args, **kwargs)
         File "/usr/lib/python2.7/site-packages/swift/obj/server.py", line 873, in REPLICATE
           device, partition, suffixes, policy)
         File "/usr/lib/python2.7/site-packages/swift/obj/diskfile.py", line 768, in get_hashes
           self._get_hashes, partition_path, recalculate=suffixes)
         File "/usr/lib/python2.7/site-packages/swift/common/utils.py", line 3068, in force_run_in_thread
           return self._run_in_eventlet_tpool(func, *args, **kwargs)
         File "/usr/lib/python2.7/site-packages/swift/common/utils.py", line 3048, in _run_in_eventlet_tpool
           raise result
       OSError: [Errno 21] Is a directory  13:38
ntt: It seems there is a problem with replication. Can someone help me with this? Thank you  13:38
ntt: Could one solution be to delete the folder sde1/objects/918 on the node I rebooted?  13:44
*** frank_young has joined #openstack-swift13:52
*** nathaniel has joined #openstack-swift13:53
*** nathaniel is now known as Guest1836713:53
Guest18367: Can I use Swift with another filesystem instead of XFS?  13:54
Guest18367: on the object nodes  13:54
*** frank_young has quit IRC13:57
*** tonanhngo has joined #openstack-swift14:01
*** tonanhngo has quit IRC14:03
tdasilva: Guest18367: technically you could use any filesystem that supports xattrs  14:07
tdasilva: but the great majority of people (AFAIK) use XFS  14:08
acoles: ntt: that traceback is hard to diagnose because the source of the exception isn't logged, but (I'm taking a guess) it may be because you have a corrupt hashes.pkl that is a directory - so check, on the node being replicated to, that every */objects/*/hashes.pkl is a file (or non-existent) but not a directory  14:09
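A rough sketch of the check acoles suggests, purely for illustration: it only reports suspect paths and deletes nothing. The /srv/node root and the find_bad_hashes name are assumptions, not Swift code:

```python
# Hypothetical helper (not part of Swift): walk the object partitions and
# report any hashes.pkl that is a directory instead of a file.
import glob
import os

def find_bad_hashes(devices_root='/srv/node'):  # adjust to your devices path
    bad = []
    for pkl in glob.glob(os.path.join(devices_root, '*', 'objects', '*', 'hashes.pkl')):
        if os.path.isdir(pkl):
            bad.append(pkl)
    return bad

if __name__ == '__main__':
    for path in find_bad_hashes():
        print('hashes.pkl is a directory: %s' % path)
```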
ntt: acoles: thank you. Can I simply delete the folder sde1/objects/918?  14:10
ntt: Swift should replicate it again (I think)  14:10
acoles: ntt: that folder may have object data in it  14:11
ntt: I know  14:11
ntt: but I have 3 zones on 3 different servers. If I "accidentally" delete sde1/objects/918 from one zone, I think Swift should replicate it again. Is that wrong?  14:11
acoles: if you suspect that folder is the cause, then check that sde1/objects/918/hashes.pkl is not a directory  14:11
ntt: hashes.pkl IS a directory inside the folder  14:12
tdasilva: ntt: I think notmyname had suggested completely erasing the whole drive, since it's coming back from a long downtime. Did you consider that?  14:12
tdasilva: ntt: if you are running with the default config, you risk introducing dark data into your system  14:12
ntt: tdasilva: yes... but a complete resync takes a long time and I need to fix this quickly  14:13
Guest18367: tdasilva: thank you  14:13
acoles: ntt: ok, it should not be a directory, so first you may want to check what is in that directory, and possibly make a copy of it, but ultimately you'll need to delete the hashes.pkl that is a directory.  14:13
*** frank_young has joined #openstack-swift14:14
ntt: acoles: there is only one directory inside hashes.pkl, with only 1 file in it  14:14
ntt: for a total of 51K of data  14:14
acoles: ntt: also, take note of tdasilva's comment  14:14
ntt: ok  14:15
ntt: so, after backing up hashes.pkl, I have to delete the folder. Right?  14:15
*** links has quit IRC14:18
*** Guest18367 has quit IRC14:19
acoles: ntt: yes, make a backup, delete the bad hashes.pkl folder, and then the next run of the object replicator should replace it with a file.  14:20
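For illustration, the remediation acoles describes could look like the sketch below for a single bad path. The paths and backup location are only examples drawn from this conversation; back up first and never delete blindly:

```python
# Hypothetical remediation for ONE bad path: copy the bogus hashes.pkl
# directory aside, then remove it so the next replicator pass can recreate
# hashes.pkl as a normal file. Paths are examples only.
import shutil

bad = '/srv/node/sde1/objects/918/hashes.pkl'         # example path from the log
shutil.copytree(bad, '/root/hashes.pkl.backup-918')   # keep a backup first
shutil.rmtree(bad)                                     # then delete the directory
```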
*** frank_young has quit IRC14:30
*** frank_young has joined #openstack-swift14:35
ntt: acoles: It seems to work!! All the errors have disappeared  14:38
ntt: thank you  14:38
*** cshastri has quit IRC14:39
*** gyee has joined #openstack-swift14:39
*** frank_young has quit IRC14:40
acoles: ntt: out of curiosity, if you are able to share publicly, what was in the hashes.pkl folder? (please paste to http://paste.openstack.org/ rather than dumping it in IRC)  14:40
ntt: I have only one other error. From zone2 and zone3 I see an rsync error: http://paste.openstack.org/show/615861/  When I check /srv/node/sdb1/objects/487/92e on 192.168.128.1, 92e is a file and NOT a folder. How can I solve this?  14:41
ntt: acoles: sure! I can share it publicly and write some docs if needed  14:41
acoles: ntt: I'm just curious how it got into that state  14:42
ntt: acoles: no problem, I can give you all the details about the cluster. Anyway, I still need to solve this last problem. Can you give me some advice?  14:43
xrb: hi all, making progress with my setup. Middleware is a really cool (and handy) way to extend Swift's functionality!  14:55
xrb: I'm currently running into two issues (not related to middleware). 1) When a user creates a bucket, is the creator tracked anywhere (I don't see it with 's3cmd info' or 'swift stat')? Does the creator get some special permission on their bucket?  14:57
*** rloo has joined #openstack-swift14:58
xrb: 2) if not through (1), how are permissions on a bucket granted to users within a project? Is it only done using ACLs, or is there some other mechanism?  15:00
*** caiobrentano has quit IRC15:00
*** caiobrentano has joined #openstack-swift15:01
*** caiobrentano has quit IRC15:01
rloo: hi swifters. we (ironic) have been seeing sporadic issues with devstack setting up swift: http://logs.openstack.org/64/483464/8/check/gate-tempest-dsvm-ironic-lib-partition-agent_ipmitool-ubuntu-xenial/a3125e6/logs/devstacklog.txt.gz  15:01
rloo: does that look familiar to anyone here? is it a swift issue, a devstack issue, ...?  15:02
*** hoonetorg has quit IRC15:02
*** psachin has quit IRC15:08
*** frank_young has joined #openstack-swift15:13
*** klrmn has joined #openstack-swift15:22
*** Sukhdev_ has joined #openstack-swift15:39
*** gyee has quit IRC15:40
*** frank_young has quit IRC15:41
*** gyee has joined #openstack-swift15:42
acoles: xrb: usual caveat from me - these answers are for keystoneauth rather than s3 :) 1) the creator id is not tracked when a container is created 2) users with an operator role on the project (e.g. admin) can read/write containers; ACLs of the form user:project can grant cross-project access (but not restrict access within a project)  15:45
acoles: xrb: there is another form of ACL that grants access to a user based on their role on the project  15:46
acoles: so, for example, if you have a user with role 'foo' on the project (but not the admin role) and you set the container ACL to foo, then that user can access the container. sadly I cannot find docs for that form of ACL :/  15:47
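A hedged sketch of the role-based container ACL acoles describes, using python-swiftclient. The endpoint, credentials, container name, and the role name 'foo' are placeholders; check your keystoneauth middleware's documentation for the exact ACL syntax it accepts:

```python
# Sketch only: grant read/write on a container to users holding role 'foo'
# on the project, by setting the container ACL metadata. All names below are
# placeholders for illustration.
from swiftclient import client as swift_client

conn = swift_client.Connection(
    authurl='http://keystone.example.com/identity/v3',  # assumed endpoint
    user='admin',
    key='secret',
    os_options={'project_name': 'demo',
                'user_domain_name': 'Default',
                'project_domain_name': 'Default'},
    auth_version='3',
)

conn.post_container('mycontainer', headers={
    'X-Container-Read': 'foo',   # users with role 'foo' may read
    'X-Container-Write': 'foo',  # users with role 'foo' may write
})
```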
tdasilva: rloo: are you referring to this error: http://logs.openstack.org/64/483464/8/check/gate-tempest-dsvm-ironic-lib-partition-agent_ipmitool-ubuntu-xenial/a3125e6/logs/devstacklog.txt.gz#_2017-07-19_13_54_31_908 ?  15:49
rloo: tdasilva: yes! sorry, i should have been more specific  15:49
tdasilva: rloo: so, that's very interesting. I have personally seen a similar error when running our probe tests. It happens sporadically and seems to be related to restarting a service and sending a request to it right away  15:50
tdasilva: sometimes the service is not ready yet, so you get that [Errno 111] Connection refused  15:51
*** frank_young has joined #openstack-swift15:51
rloo: tdasilva: the ironic jobs have been failing due to a neutron issue, so we haven't been looking too closely, just assuming it was neutron. i don't know when this started, but i think within the last week, maybe. any idea what might have changed recently?  15:51
*** gyee has quit IRC15:52
rloo: tdasilva: maybe things are just 'slower'? lame excuse...  15:52
rloo: tdasilva: which service is it looking for that may not be ready yet?  15:52
tdasilva: rloo: for the swift probe tests I believe I saw that with the proxy service, but it could be any service really...  15:53
tdasilva: do you know where I can find lib/swift:swift_configure_tempurls?  15:53
tdasilva: to see what it is doing  15:53
rloo: tdasilva: so far, i've only seen it when starting swift. does devstack start swift first?  15:53
rloo: tdasilva: sec, let me find the link for you  15:53
rloo: tdasilva: https://github.com/openstack-dev/devstack/blob/master/lib/swift  15:54
rloo: tdasilva: does swift use devstack for its own testing?  15:54
tdasilva: rloo: not for the probe tests  15:55
tdasilva: rloo: https://8b86aea46fb38e6450f2-0e5f4c086da474abc1df58826577db2f.ssl.cf1.rackcdn.com/427911/7022/probetests/console.txt  15:55
tdasilva: these are the probe test failures; search for ECONNREFUSED  15:55
rloo: tdasilva: yeah, looks similar  15:56
*** frank_young has quit IRC15:56
rloo: tdasilva: it can't be just ironic & swift that have noticed this...  15:56
*** frank_young has joined #openstack-swift16:00
*** klrmn has quit IRC16:05
tdasilva: rloo: that issue looks very similar to what i have been seeing with our probe tests, so it could be a swift regression, but I need to investigate further  16:09
tdasilva: I will open a bug in swift to track this  16:09
*** rcernin has quit IRC16:09
notmyname: good morning  16:10
*** vinsh has quit IRC16:11
*** caiobrentano has joined #openstack-swift16:11
rloo: tdasilva: thx!!!  16:12
*** oshritf has quit IRC16:13
*** oshritf has joined #openstack-swift16:15
*** oshritf has quit IRC16:17
*** cbartz has quit IRC16:18
*** frank_young has quit IRC16:23
*** rcernin has joined #openstack-swift16:23
timburke: good morning  16:24
*** JimCheung has joined #openstack-swift16:25
notmyname: https://review.openstack.org/#/c/448480 and https://review.openstack.org/#/c/478416/ and https://review.openstack.org/#/c/475038/ need some reviews, please  16:29
patchbot: patch 448480 - swift - DB replicator cleanup  16:29
patchbot: patch 478416 - swift - Add multiple worker processes strategy to reconstr...  16:29
patchbot: patch 475038 - python-swiftclient - Allow for uploads from standard input.  16:29
*** frank_young has joined #openstack-swift16:33
*** caiobrentano_ has joined #openstack-swift16:36
*** frank_young has quit IRC16:37
timburke: rloo: tdasilva: yeah, looks like a race between the proxy starting up and devstack configuring tempurls. makes me wonder how many retries osc uses, and what its backoff strategy is  16:38
*** caiobrentano has quit IRC16:38
rloo: timburke: sigh, something changed recently. can't it just work? like. magic?  16:41
timburke: i'm gonna guess that change was to enable tempurls?  16:42
timburke: i *think* if the account POST were done with python-swiftclient this wouldn't be a problem? we default to 5 retries with exponential backoff; all told i think that'd give the proxy something like 30 seconds to start (as opposed to the not-quite-2 seconds osc has between the initial call and reporting the error)  16:43
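A minimal sketch of the retry-with-exponential-backoff behaviour timburke is describing. This is not python-swiftclient's actual code; the function name, starting delay, and exception handling are assumptions for illustration:

```python
# Sketch only: retry a request, roughly doubling the wait after each
# connection failure, which gives a slow-starting proxy ~30 seconds to come
# up with 5 retries starting at a 1-second delay (1 + 2 + 4 + 8 + 16).
import socket
import time

def post_with_retries(do_post, retries=5, initial_delay=1.0):
    delay = initial_delay
    for attempt in range(retries + 1):
        try:
            return do_post()
        except socket.error:
            if attempt == retries:
                raise          # give up after the last attempt
            time.sleep(delay)
            delay *= 2
```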
rloo: timburke: no idea, i wish i knew when it started happening. i suppose i could do some digging. i don't know much about swift. (i know very little.)  16:44
*** mvk has quit IRC16:51
*** ukaynar has quit IRC16:52
timburke: rloo: fwiw, it looks like tempurls were enabled back in feb: https://github.com/openstack-infra/project-config/commit/17c5302 -- does that roughly line up with when you started seeing these failures? i wonder if we just need a patch to add a `sleep 5` or something at the start of swift_configure_tempurls in lib/swift  16:52
*** ukaynar has joined #openstack-swift16:52
rloo: timburke: i only noticed it last week, which doesn't mean it didn't happen before. it could have been a rare occurrence before, but it seems more frequent in the last week.  16:53
rloo: timburke: a sleep would work for me  16:53
*** vinsh has joined #openstack-swift16:56
*** ukaynar has quit IRC16:57
*** klrmn has joined #openstack-swift17:07
timburke: rloo: submitted patch 485282  17:12
patchbot: https://review.openstack.org/#/c/485282/ - openstack-dev/devstack - When configuring temp urls, give Swift time to sta...  17:12
rloo: Thanks timburke!  17:14
rloo: timburke: do we need to open a bug for that?  17:14
timburke: *shrug* i just try to fix things :-)  17:14
rloo: timburke: that's a good attitude :D let's see what they say.  17:15
*** ukaynar has joined #openstack-swift17:18
tdasilva: timburke: the problem is that we are seeing something very similar with our probe tests. at first I tried adding sleeps everywhere, but I'm not sure that's a good solution, since we did not have this problem before  17:21
tdasilva: timburke: I tried reverting this change https://github.com/openstack/swift/commit/537f9a3f64428d73bec6776c1d6ee519b63a7769  17:22
*** tonanhngo has joined #openstack-swift17:22
tdasilva: and it seemed to solve the problem for me  17:22
*** gyee has joined #openstack-swift17:23
*** ukaynar has quit IRC17:29
rloo: tdasilva, that might be it then. and/or it exacerbated the problem...  17:30
*** ukaynar has joined #openstack-swift17:32
*** ukaynar_ has joined #openstack-swift17:36
*** ukaynar has quit IRC17:39
*** saint_ has quit IRC17:40
*** itlinux has joined #openstack-swift17:42
*** gyee has quit IRC17:42
*** gyee has joined #openstack-swift17:44
*** gyee has quit IRC17:44
*** mvk has joined #openstack-swift17:46
*** tesseract has quit IRC17:46
*** ntt_ has joined #openstack-swift18:24
*** u_nuSLASHkm8 has joined #openstack-swift18:24
ntt_: Hi, I have an rsync error in my swift cluster: from zone2 and zone3 (one server per zone) I see an rsync error: http://paste.openstack.org/show/615861/  When I check /srv/node/sdb1/objects/487/92e on 192.168.128.1 (zone 1), 92e is a file and NOT a folder. How can I solve this?  18:25
ntt_: acoles: do you have any advice?  18:26
ntt_: can I simply delete the file in zone1?  18:26
*** u_nuSLASHkm8 has left #openstack-swift18:27
notmyname: kota_: timburke and I got his cauchy checker running on a different box with 16 cores, so we should be able to finish the checking  18:29
*** lucasxu has quit IRC18:32
notmyname: the important question now is: now that timburke isn't running the script on his laptop, how will he stay warm?  18:34
timburke: so... cold...  18:34
*** lucasxu has joined #openstack-swift18:35
notmyname: interesting/good lesson for our community to learn from: http://www.nickmilton.com/2017/07/a-story-of-how-community-lost-trust.html?utm_source=dlvr.it&utm_medium=twitter  18:35
timburke: rloo: tdasilva: yeah, that makes sense... loading the app takes a non-trivial amount of time; if we bind the ports *before* doing that, we wait on a connection timeout rather than immediately coming back with a refused connection  18:40
*** lucasxu has quit IRC18:41
*** lucasxu has joined #openstack-swift18:41
*** ChubYann has joined #openstack-swift18:47
*** SkyRocknRoll has quit IRC18:53
*** cschwede_ has quit IRC18:54
*** lucasxu has quit IRC18:55
*** ntt_ has quit IRC19:00
*** frank_young has joined #openstack-swift19:01
*** frank_young has quit IRC19:06
*** chlong_ has joined #openstack-swift19:15
*** gyee has joined #openstack-swift19:18
*** silor has joined #openstack-swift19:27
*** lucasxu has joined #openstack-swift19:30
*** openstackgerrit has joined #openstack-swift19:32
openstackgerrit: Alistair Coles proposed openstack/swift master: Ring rebalance respects co-builders' last_part_moves  https://review.openstack.org/477000  19:32
*** tinyurl_comSLASH has joined #openstack-swift19:36
*** tinyurl_comSLASH has left #openstack-swift19:38
*** lucasxu has quit IRC19:38
*** lucasxu has joined #openstack-swift19:39
*** lucasxu has quit IRC19:43
*** lucasxu has joined #openstack-swift19:44
*** frank_young has joined #openstack-swift19:44
*** frank_young has quit IRC19:48
*** lucasxu has quit IRC19:53
*** lucasxu has joined #openstack-swift19:54
*** silor has quit IRC19:56
*** chlong_ has quit IRC20:03
*** ujjain has quit IRC20:06
*** Sukhdev_ has quit IRC20:19
*** Sukhdev has joined #openstack-swift20:20
*** ujjain has joined #openstack-swift20:20
*** ujjain has joined #openstack-swift20:20
*** MVenesio has quit IRC20:41
*** chlong_ has joined #openstack-swift20:48
*** stewie925 has quit IRC20:51
kota_: morning  20:55
*** lucasxu has quit IRC20:57
kota_: notmyname: nice, thx!  20:57
notmyname: kota_: I just ran it. timburke wrote the code :-)  20:58
notmyname: I hope to have it finish sometime this week. it's running at 90% across 16 cores, so it should be soon ;-)  20:58
timburke: thanks for the code, https://docs.python.org/2/library/itertools.html#itertools.combinations !  20:58
notmyname: meeting time in #openstack-meeting  20:59
kota_: parallel processing is a huge win! :D  20:59
notmyname: also, I started with 16 data chunks, since we've already checked up to that  21:00
timburke: kota_: were you going straight for the matrix in your testing, or doing a full encode/decode?  21:00
notmyname: it's currently at the end of checking 17 data chunks  21:01
*** caiobrentano_ has quit IRC21:01
*** frank_young has joined #openstack-swift21:03
*** chlong_ has quit IRC21:04
kota_: timburke: one encode, then decode/reconstruct for all available combinations for each set of k + m parameters  21:05
kota_: the work has been getting hard around k+m ~= 20  21:05
*** frank_young has quit IRC21:08
timburke: i think that might also be netting me some performance improvements -- i don't actually go through the full encode/decode/reconstruct, but instead build the underlying k-by-(k+m) matrix and check the invertibility of the k-by-k matrices made from combinations of k columns  21:08
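A rough sketch of the kind of check timburke describes (not his actual script): take a k-by-(k+m) encoding matrix, form every k-by-k submatrix from combinations of k columns, and test each for invertibility with Gaussian elimination over GF(2^8). The primitive polynomial (0x11d), the multiprocessing layout, and the assumption that the caller supplies the encoding matrix are all illustrative choices:

```python
# Illustrative checker: every k-by-k submatrix of the k-by-(k+m) encoding
# matrix must be invertible over GF(2^8) for any k fragments to be decodable.
import itertools
from multiprocessing import Pool

PRIM = 0x11d  # assumed primitive polynomial for GF(2^8)

def gf_mul(a, b):
    """Multiply in GF(2^8) via shift-and-xor, reducing modulo PRIM."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return p

def gf_inv(a):
    """Brute-force inverse; fine for a 256-element field."""
    for x in range(1, 256):
        if gf_mul(a, x) == 1:
            return x
    raise ZeroDivisionError('0 has no inverse')

def is_invertible(rows):
    """Gauss-Jordan elimination over GF(2^8); True iff the matrix has full rank."""
    m = [list(r) for r in rows]
    n = len(m)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return False
        m[col], m[pivot] = m[pivot], m[col]
        inv = gf_inv(m[col][col])
        m[col] = [gf_mul(inv, v) for v in m[col]]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [v ^ gf_mul(f, p) for v, p in zip(m[r], m[col])]
    return True

def all_submatrices_invertible(matrix, k):
    """matrix: k rows of length k+m; check every choice of k columns."""
    cols = range(len(matrix[0]))
    subs = ([[row[c] for c in chosen] for row in matrix]
            for chosen in itertools.combinations(cols, k))
    with Pool() as pool:  # spread the submatrix checks across all cores
        return all(pool.imap_unordered(is_invertible, subs, chunksize=256))
```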
*** caiobrentano has joined #openstack-swift21:23
*** caiobrentano has quit IRC21:35
*** rloo has quit IRC21:38
*** Sukhdev has quit IRC21:42
*** Sukhdev has joined #openstack-swift21:43
*** m_kazuhiro has joined #openstack-swift21:54
timburke: tdasilva: wait, so barbicanclient needs to change from hitting /v1/... to hitting /1/...? that doesn't seem to line up with their api-ref at https://docs.openstack.org/barbican/latest/api/reference/secrets.html  21:56
timburke: and that would kinda explain all the 404s (starting around http://logs.openstack.org/26/484926/2/check/gate-python-barbicanclient-devstack-dsvm-ubuntu-xenial/243a654/console.html#_2017-07-19_20_02_51_411769) in that tempest run...  21:58
tdasilva: timburke: yeah, i noticed that  22:02
tdasilva: but changing barbicanclient/osc_plugin.py doesn't seem to work either  22:02
tdasilva: so I think I will need their help  22:02
timburke: i wonder if there's some issue with how the barbican endpoint gets defined in keystone...?  22:02
tdasilva: timburke: maybe that's it  22:03
tdasilva: this is what i get when changing osc_plugin  22:03
tdasilva: http://paste.openstack.org/show/615916/  22:03
tdasilva: i don't see the barbican endpoint getting set up in mathiasb's script, so i'm assuming that's done by devstack  22:06
timburke: yeah, and it seems to be versionless -- https://github.com/openstack/barbican/blob/master/devstack/lib/barbican#L357-L371  22:07
timburke: so i guess somewhere else in the client, we need to change '/%s/..' to be '/v%d/...' ?  22:08
timburke: https://github.com/openstack/python-barbicanclient/blob/master/barbicanclient/client.py#L44 maybe?  22:09
tdasilva: timburke: not sure, because like you said, i think the right approach is not to change https://github.com/openstack/python-barbicanclient/blob/master/barbicanclient/client.py#L29  22:11
tdasilva: no?  22:11
*** frank_young has joined #openstack-swift22:12
timburke: hrm. and it seems like the discovery api should be exposing 'v1'... https://github.com/openstack/barbican/blob/master/barbican/api/controllers/versions.py#L80-L84  22:15
timburke: tdasilva: what happens if you change https://github.com/openstack/python-barbicanclient/blob/master/barbicanclient/osc_plugin.py#L18-L23 to use 'v1'?  22:16
*** frank_young has quit IRC22:16
timburke: oh, right, the mismatch thing you were saying before...  22:16
tdasilva: yep :(  22:17
timburke: was that *with* the other change? or *instead of*?  22:17
tdasilva: instead of  22:17
timburke: ...and like in your current change, you fixed both the default and the map? i'm trying to figure out how the '1' got there...  22:21
timburke: maybe i should just spin it up and poke around  22:22
*** m_kazuhiro has quit IRC22:30
*** caiobrentano has joined #openstack-swift22:34
*** itlinux has quit IRC22:58
*** frank_young has joined #openstack-swift22:59
*** vint_bra has quit IRC23:01
*** catintheroof has quit IRC23:03
*** ukaynar_ has quit IRC23:17
*** frank_young has quit IRC23:27
*** rcernin has quit IRC23:34
*** Renich has joined #openstack-swift23:36
*** frank_young has joined #openstack-swift23:37
*** frank_young has quit IRC23:41
*** itlinux has joined #openstack-swift23:47
*** Renich has quit IRC23:54
*** tonanhngo has quit IRC23:59
