Wednesday, 2014-09-17

00:38 <morganfainberg> dstanek, ping
00:38 <morganfainberg> or stevemar
00:39 <stevemar> morganfainberg, yo
00:39 <morganfainberg> ok so the memcache pool
00:39 <dstanek> morganfainberg: pong
00:39 <morganfainberg> what is the case for a maximum timeout?
00:39 <morganfainberg> and how... do we handle that?
00:39 <dstanek> timeout for that wait?
00:40 <morganfainberg> dstanek, yeah
00:41 <dstanek> the safest thing would be to just fail with a 500 - that way there is less of a chance to hammer the DB
00:41 <morganfainberg> well i mean, we have a lot of options here
00:41 <dstanek> it's a highly exceptional condition that probably needs immediate attention
00:41 <morganfainberg> do we log an "oh hey we're blocked" and let it re-attempt?
00:41 <morganfainberg> do we want it to raise up a 500?
00:41 <morganfainberg> do we want to try 5 times then 500?
00:42 <morganfainberg> this is way deep in the internals of dogpile, and we're getting some very strange things happening if we're blocking here. (e.g. all memcache servers are down)
00:42 <dstanek> i guess having a timeout and retries in the config file would give the most flexibility
00:43 <morganfainberg> dstanek, i think the only real case we're hitting here is all memcache servers are un-reachable
00:43 <morganfainberg> dstanek, and this is going to need a string freeze exception
00:43 <dstanek> really? i thought this is because we reached the max of the pool and for some reason we're not releasing connections
00:44 <morganfainberg> dstanek, it is, and afaict the only way that occurs is if we're being hammered and hitting socket timeout (3s) for every request
00:44 <morganfainberg> dstanek, this is extremely deep in the internals between dogpile and the memcache lib.
00:45 <morganfainberg> dstanek, i'm not saying we shouldn't have a timeout, i agree, just trying to figure out what that timeout looks like
00:48 <dstanek> i think the simplest thing is to blow up and not do any retries - hmmm... i wonder how notify works under eventlet
00:52 <morganfainberg> dstanek, stupid eventlet.
00:52 <dstanek> a shorter answer is don't use eventlet in production
00:53 <morganfainberg> dstanek, afaict notify in eventlet just means that when we swap to execute on the greenthread it doesn't immediately yield
00:53 <morganfainberg> dstanek, a .wait() is equivalent to .sleep(0) while condition
00:53 <morganfainberg> and notify sets condition to False
<openstackgerrit> Steve Martinelli proposed a change to openstack/python-keystoneclient: Handle federated tokens
00:55 <morganfainberg> dstanek, ++ to "never use eventlet in production" oh wait... everything but keystone won't run then
01:03 <morganfainberg> dstanek, this is what i came up with
<openstackgerrit> Kui Shi proposed a change to openstack/keystone: Add memcached_backend configuration
01:16 <morganfainberg> YorikSar, ping you here?
01:18 <dstanek> morganfainberg: so i'm back to wondering about the max connections again
01:18 <morganfainberg> i've changed it like this
01:19 <dstanek> the docs say that cond.wait() will release the lock and then that would mean multiple greenthreads are waiting. when we run notify_all they will all get notified and not check the count right?
01:19 <morganfainberg> dstanek, that should *really* prevent us yielding out somewhere in the memcache lib and accidentally ending up with too many connections
01:19 <dstanek> having trouble making test scenarios
01:20 <morganfainberg> dstanek, aw crap we can't raise out on a timeout
01:20 <morganfainberg> dstanek, you release the lock and re-enter the while loop on the next coroutine switch to that thread
01:20 <dstanek> yeah, i don't know how wait tells you it was a timeout
01:20 <morganfainberg> dstanek, it doesn't
01:20 <morganfainberg> dstanek, it just returns out just like a notify does.
01:20 <morganfainberg> dstanek, so we can't raise out.
01:21 <morganfainberg> we rely on the while loop working indefinitely to block extra connections from being created.
01:21 <morganfainberg> since if another connection grabbed the lock, and pushed us to max connections, the while loop would send us back into the wait state
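The wait/notify pattern being described above can be sketched with the standard library's threading.Condition (a hedged illustration, not keystone's actual pool code; MAX_CONNECTIONS and the module-level counters are invented for the example):

```python
import threading

MAX_CONNECTIONS = 2          # illustrative limit, not keystone's setting
_cond = threading.Condition()
_in_use = 0

def acquire():
    """Block until we are under the limit. The while loop re-checks the
    count after every wakeup, so a notify_all that races with another
    acquirer simply sends this caller back into wait()."""
    global _in_use
    with _cond:
        while _in_use >= MAX_CONNECTIONS:
            _cond.wait(timeout=1.0)  # wait() releases the lock while blocked
        _in_use += 1

def release():
    global _in_use
    with _cond:
        _in_use -= 1
        _cond.notify()  # wake a waiter; it still re-checks the while condition
```

Note that in the Python 2 / eventlet of the time, wait() gave no indication whether it returned because of a notify or a timeout, which is exactly the "we can't raise out on a timeout" problem above (Python 3's Condition.wait now returns False on timeout).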
01:22 <morganfainberg> and remember only one greenthread is running at a time in the eventlet process (worker)
01:23 <dstanek> it feels like we should just be using a real queue here
01:24 <dstanek> is that while loop effectively a busy wait on a loaded server?
01:25 <morganfainberg> dstanek, well, except we still need to wait for the response, so a queue will still run into the same issues, always waiting
01:26 <morganfainberg> dstanek, i think it is a busy wait on a loaded server, but not *always* the same way we're synchronously grabbing data from the db
<dstanek> morganfainberg: i think you can do it much better with
01:26 <morganfainberg> this just forces the same types of restrictions
01:27 <morganfainberg> dstanek, except we can't just queue we need to get specific values back and expect a response for that value not "a" value
01:27 <morganfainberg> this isn't straight FIFO, because each request is tied to the specific greenthread
01:27 <morganfainberg> and internal coroutine
01:28 <morganfainberg> or you mean to use the queue for the connections instead of a list?
01:29 <dstanek> what you would do is make the free pool a queue - you'd still have to manage how many things are out of the queue (because we want to kill older connections) - and use the context manager approach to pop a connection and push it back
01:29 <morganfainberg> dstanek, i think we end up with the same mechanism really.
01:31 <dstanek> morganfainberg: very similar except that a timeout on a wait is sane and i think there are other small benefits - i can whip up an example in a minute or two
01:31 <morganfainberg> dstanek, we still need to manage the max number of connections outstanding with a while loop though
01:32 <morganfainberg> dstanek, since the queue doesn't know about connections not in it
01:32 <morganfainberg> dstanek, so the wait/timeout would net us the same issue as the busy wait
01:34 <dstanek> yeah, i was thinking that i would use two queues to eliminate the problem, but that could cause locking issues
01:34 <morganfainberg> dstanek, yeah. :(
01:35 <morganfainberg> we're doing the inverse of a queue, we're tracking a maximum number of items and tossing the items not in use on a queue (list, whatever)
01:35 <morganfainberg> if an active "thing" was being placed on a queue to be acted on, it would be different
01:36 <morganfainberg> i guess we could do a runner queue and a result queue, but you're likely filtering every value in the result queue to find yours when you wake up
01:37 <morganfainberg> i think we're trying to effectively re-implement AMQP here
01:37 <morganfainberg> get me value for X, i'll be waiting and listening on Y for a result.
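dstanek's free-pool-as-a-queue idea, with the context-manager checkout he describes, might look roughly like this (a hypothetical ConnectionPool sketch, not the implementation under review; Queue.get(timeout=...) raising queue.Empty is the "sane timeout" he mentions):

```python
import contextlib
import queue  # named Queue on the Python 2 of the time

class ConnectionPool(object):
    """Toy free-list pool: checkout pops a connection (creating one lazily
    while under the limit), and the context manager pushes it back.
    Queue.get(timeout=...) raises queue.Empty on timeout, giving the clean
    error path a bare Condition.wait lacks."""

    def __init__(self, factory, maxsize, timeout=5):
        self._factory = factory    # callable that builds a new connection
        self._timeout = timeout
        self._free = queue.Queue()
        # Pre-seat "tickets" so the total number of checked-out connections
        # can never exceed maxsize; real connections are created lazily.
        for _ in range(maxsize):
            self._free.put(None)

    @contextlib.contextmanager
    def connection(self):
        conn = self._free.get(timeout=self._timeout)  # may raise queue.Empty
        if conn is None:
            conn = self._factory()
        try:
            yield conn
        finally:
            self._free.put(conn)
```

This still does not solve the _trim problem from the discussion: reaping connections that have sat idle too long needs extra bookkeeping (e.g. timestamps on returned connections), which is the part dstanek says he cannot do without writing his own queue implementation.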
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
01:41 <morganfainberg> somehow "im" ended up at the end of one of the ASL license headers
01:41 <morganfainberg> dstanek, ok that should address all but your comment about the timeout.
01:42 <morganfainberg> still needs testing though
<openstackgerrit> A change was merged to openstack/python-keystoneclient: Versioned Endpoint hack for Sessions
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
02:19 <morganfainberg> dstanek, some basic tests. but still not sure how to test the concurrency stuff
02:22 <dstanek> is the requirement to trim down the number of connections worth it?
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
02:26 <morganfainberg> dstanek, the point is valid that if you have 5x keystones you may not want to consume 100 FDs/Sockets to memcached at low traffic times.
02:26 <morganfainberg> dstanek, but i dunno.
02:26 <morganfainberg> dstanek, are we going to have a lot of benefit there? not in keystone. though i could see a bigger win in say auth_token
02:27 <morganfainberg> dstanek, some endpoints may have lower traffic.
02:27 <morganfainberg> and this does need to be ported over to auth_token as well.
02:27 <dstanek> i dunno. if you can have up to 100 by the configuration then you have to be prepared to have 100 at any time
02:28 <morganfainberg> dstanek, right, but if you *can* have 100 from every endpoint, should you legitimately always expect to have 100 from every endpoint?
02:28 <morganfainberg> in keystone it's probably a moot point. we will likely always be at the capacity
02:29 <dstanek> in any event the operator will have to size the hardware/infrastructure to support that number along with all of the other expected traffic
02:32 <morganfainberg> dstanek, unfortunately the cleanup bits are not the complex parts of this pool
02:32 <morganfainberg> the base concurrency support is.
02:33 <dstanek> they actually are because they make the rest of the impl suck
02:33 <morganfainberg> dstanek, not really, the whole impl sucks. not because YorikSar did a bad job, but because we're working around a major shortcoming of combining eventlet plus thread.local
02:34 <morganfainberg> dstanek, the cleanup bit really is a single function call in release now. removing that part doesn't change the landscape that much
02:35 <morganfainberg> we'd still have acquire count tracking, still need to use the context manager, etc.
02:38 <dstanek> morganfainberg: this is a very simple pool similar to ones i have used in the past - i just can't implement _trim without creating my own queue impl
02:38 <dstanek> morganfainberg: does mysql manage its own connection pool?
02:39 <morganfainberg> dstanek, sqla does some work on that i think, but mysqldb is swig, so C bindings meaning we don't get eventlet issues
02:39 <morganfainberg> dstanek, any call to mysql is blocking via mysqldb
02:40 <dstanek> hmm... single connection?
02:40 <morganfainberg> dstanek, i think we can have a pool, but it wouldn't benefit us a lot
02:40 <morganfainberg> that is the idea behind moving to another mysql connector (mysql-connector?) or running multiple workers (eventlet multi-worker mode, or mod_wsgi)
02:41 <dstanek> i've actually tried a pure python driver and it wasn't noticeably faster when using rally
02:42 <morganfainberg> dstanek, it's not, the pure python connector itself is way way slower, so the benefits from eventlet we get are probably lost
02:42 <morganfainberg> oh unrelated, i'll be here tomorrow.
02:42 <morganfainberg> thursday is the day i'll be spotty
02:43 <morganfainberg> misread the date on the appointment
02:43 <dstanek> the profiling of both didn't show a significant difference in driver performance - i think we are just doing some silly stuff
02:44 <morganfainberg> dstanek, oh i know we are.
<morganfainberg> dstanek, that was part of the idea behind:
02:44 <morganfainberg> as a starting place
<openstackgerrit> Bob Thyne proposed a change to openstack/keystone: Update Endpoint Filter API
<openstackgerrit> Bob Thyne proposed a change to openstack/keystone: Update Endpoint Filter API
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
03:18 <morganfainberg> ^ fixing doc bug
03:28 <stevemar> morganfainberg, thx, i was wondering about that
<openstackgerrit> Bob Thyne proposed a change to openstack/identity-api: Update Endpoint Filter API
03:29 <morganfainberg> stevemar, also sorry for being > 400 lines of change :(
03:29 <morganfainberg> stevemar, this fix is already kindof unwieldy
03:30 <stevemar> morganfainberg, tis a bit
<openstackgerrit> Morgan Fainberg proposed a change to openstack/keystonemiddleware: Add composite authentication support
04:20 <morganfainberg> This was a nasty rebase, please look carefully at the code when reviewing ^
04:23 <stevemar> morganfainberg, i *just* rebased it
04:23 <morganfainberg> stevemar, beat ya to submitting it
04:23 <stevemar> haven't uploaded the changes yet, but ya nasty rebase
04:23 <morganfainberg> stevemar, it wasn't a pretty rebase :(
04:23 <stevemar> i'll see if we got the same result :)
04:23 <morganfainberg> stevemar, ++
04:25 <morganfainberg> stevemar, 5 open bugs till RC
04:25 <morganfainberg> none not in progress
04:25 <stevemar> morganfainberg, yeah, in the __call__ method, i think you caught them all
04:25 <stevemar> move remove_auth out of try, add the log debugs, and take the returns out of the try's
04:26 <morganfainberg> the remove_auth got left in the try for mine. *doh*
04:26 <morganfainberg> but the returns were caught plus the change to _call_app from _app
04:26 <morganfainberg> and the addition of the reject_line where needed
04:30 <stevemar> morganfainberg, i think this one is super close i addressed brant's comments
04:30 <stevemar> and it had some +2 love from lbragstad
04:30 <stevemar> and from jamielennox
04:30 <morganfainberg> stevemar, yeah circling through those now.
04:31 <stevemar> the trust['enabled'] one needs a test :( but i'm not sure where to add it
04:32 <stevemar> morganfainberg, do you have write access to or something?
04:32 <morganfainberg> stevemar, if I star a review it is added
04:32 <morganfainberg> stevemar, same with lbragstad and dstanek
04:33 <morganfainberg> stevemar, dolph did magic :)
04:33 <stevemar> i was wondering how it was changed to (approved) so quickly
04:34 <stevemar> fyi, might want to add bobt's stuff
<stevemar> well -
04:34 <morganfainberg> stevemar, +2 on that federated tokens one
04:34 <stevemar> yee haw
04:34 <stevemar> that'll make marek happy :)
04:34 <morganfainberg> added bob's reviews
04:35 <morganfainberg> identity-api isn't really included in release blockers
04:35 <morganfainberg> stevemar, btw,,n,z
04:35 <morganfainberg> that is the list of reviews i've actively starred
04:37 <morganfainberg> stevemar, i can almost taste RC ;)
04:37 <stevemar> i have the same ones open in a bunch of tabs
05:28 <morganfainberg> stevemar, ugh endpoint filter extension may violate HTTP
05:29 <morganfainberg> stevemar, the HEAD/GET issue again
05:29 <stevemar> you mean the old stuff, not the new endpoint grouping?
05:29 <morganfainberg> i'm too tired to verify it tonight but remind me tomorrow (before we accept bob's patches) so we know if we need to open another bug on it
05:29 <morganfainberg> the identity-api is claiming 204 on a HEAD request
05:30 <morganfainberg> and looking at the code it has separate methods for get/head in cases
05:30 <morganfainberg> i think
05:30 * morganfainberg is like i said too tired
05:30 <morganfainberg> if we need to fix it, we'll just make it a followup to bob's patches
05:36 <stevemar> its probably the old ep filter stuff
05:49 <YorikSar> morganfainberg: pong
05:49 <YorikSar> morganfainberg: I guess it's too late :)
<openstackgerrit> Bob Thyne proposed a change to openstack/keystone: Update Endpoint Filter API
<openstackgerrit> OpenStack Proposal Bot proposed a change to openstack/keystone: Imported Translations from Transifex
<openstackgerrit> henry-nash proposed a change to openstack/keystone: Ensure identity sql driver supports domain-specific configuration.
<openstackgerrit> Marcos Fermín Lobo proposed a change to openstack/keystone: Implement group related methods for LDAP backend
<openstackgerrit> A change was merged to openstack/python-keystoneclient: Allow retrying some failed requests
<openstackgerrit> Qin Zhao proposed a change to openstack/python-keystoneclient: Fix the condition expression for ssl_insecure
<openstackgerrit> Qin Zhao proposed a change to openstack/python-keystoneclient: Fix the condition expression for ssl_insecure
10:39 <dguerri> hello there :) got a quick question for you.
10:39 <dguerri> I am using keystone with AD as a backend for identification.
10:39 <dguerri> I need to keep assignment local, so I set the [assignment] section accordingly.
10:39 <dguerri> Now, I would like to assign a default role for a specific tenant to all users. Is that possible? or do I need to authorize each user?
12:35 <marekd> just a question about gerrit tests - why are some test suites named gate-* and some check-*?
13:01 <rodrigods> marekd, I think that I've read somewhere that "check" tests are run before the patch is approved and "gate" afterwards
13:01 <rodrigods> infra tests
13:02 <marekd> rodrigods: i see
13:02 <ayoung> marekd, which version of AD supports the SAML Federation portal? What did you test against?
13:03 <marekd> ayoung: need some more specific version?
13:03 <marekd> ayoung: i may ask people who run it.
13:15 <ayoung> marekd, do you know if that works with 2008 or if it needs 2012?
13:16 <ayoung> judging by the number of posts that say 2011 on them, I'm going to guess that 2008 is sufficient
13:46 <marekd> ayoung: let me ask.
13:46 <ayoung> marekd, thanks.... the AD guy for our team is in transit today anyway, but I'd like to give your code a test run.
13:50 <marekd> ayoung: it'd be awesome.
13:50 <ayoung> marekd, I'll let you know
13:51 <marekd> ayoung: my guy is on the phone so I will have an answer in a few minutes, but I'd say it's 2012
13:51 <ayoung> marekd, then we'll need a way to scare up a copy for OpenStack testing. I'll ask the Prime Minister...
13:58 <marekd> It's ADFS2.1 on w2012. Same as adfs2.0 on w2008.
13:59 <ayoung> marekd, so you think we can test against the 2008 version. Good to know. I just want to minimize the amount of hunting we have to do
13:59 <marekd> ayoung: i can rerun the code again on my adfs
13:59 <marekd> and make sure it works
13:59 <ayoung> marekd, always a good idea
14:00 <ayoung> marekd, but not if it's going to take away from other things. I suspect I'll be bothering you once we have a setup to test against
14:01 <marekd> ayoung: sure thing.
14:01 <marekd> i was fearing i'd be the only person who actually ran the code with an adfs.
<marekd> ayoung: i also made a separate piece of code for cli adfs authentication. Needs some polishing, but it does its core job - authenticates with ADFS:
14:03 <ayoung> marekd, looking
14:03 <marekd> this is basically what's already in keystoneclient.
14:04 <ayoung> marekd, you really need to start doing some shortening of those constants
14:04 <marekd> that's lxml's philosophy....
14:04 <marekd> but you are right
14:14 <ayoung> marekd, BTW, did you see my mailing list message about CORS, Horizon etc? It's specifically targeting the SAML use case. Does it make sense to you?
14:17 <marekd> ayoung: i did. and yes, the idea with JS makes lots of sense to me.
14:17 <marekd> stevemar: LOL: "It is interesting that this latter comment came from the academic/science world, whereas the supportive one came from the business world"
14:17 <ayoung> marekd, Ah, mail client just caught up
14:18 <ayoung> marekd, just responded to that point. Short answer: some IdPs are public, the rest private
14:18 <marekd> ayoung: ++
14:18 <ayoung> marekd, did I show you my proof of concept?
14:18 <stevemar> marekd, i thought that was very funny!
14:19 <marekd> stevemar: it was :-)
14:19 <marekd> ayoung: JS code? yes.
14:19 <ayoung> stevemar, same question; did I show you guys my proof of concept?
14:19 <ayoung> marekd, good
14:20 <marekd> ayoung: I saw it a few weeks back but didn't bookmark the URL. Could you paste it again? Or, even better do you have some repo with the code?
14:20 <ayoung> marekd, I can do both
14:20 <ayoung> first the repo
14:21 <ayoung> marekd, the code is running at
14:21 <stevemar> ayoung, i saw it a few weeks ago, not recently
14:22 <marekd> ayoung: thanks!
14:22 <ayoung> Good. I've started moving it over to Angular, and focused a bit more on trusts, but the concepts are the same. I'd love to get a SAML example up there as well; I'll have to talk to simo about getting his SAML provider installed
14:23 <ayoung> marekd, the older version, the straight JQuery is at the same url but + old.html
14:23 <ayoung> that has the role management stuff
14:24 <stevemar> ayoung, now do you trust us with a set of credentials :)
14:24 <marekd> stevemar: :D
14:25 <ayoung> stevemar, of course
14:27 <ayoung> stevemar, something is making my machine's connection to that crawl. I think it is here on my end, as I've been having network issues on and off all morning
14:29 <ayoung> stevemar, ah, I never gave either of you guys accounts in the LDAP server backing it... hold on
14:33 <ayoung> stevemar, marekd hit the IPA server and change your passwords, assuming it responds.... My IPA webui is still hanging.
14:37 <marekd> ayoung: in a minute. wanted to finish something.
14:49 <ayoung> stevemar, so, in the Federated workflow, at what point should I be handing a SAML document to Keystone? In the call to get an unscoped Federated token?
14:50 <stevemar> ayoung, yep, it should be handed to the idp/protocol specific protected URL
14:50 <marekd> ayoung: yes.
14:51 <ayoung> marekd, stevemar, is there a "prep" call to Keystone that provides some way of keeping the SAML assertion from being a bearer token?
14:51 <ayoung> also, where does the SAML assertion go in the request?
14:52 <marekd> ayoung: you are asking about websso or ecp?
14:52 <ayoung> marekd, hmmm, good question. I guess this would be websso
14:53 <ayoung> ecp should be similar, but I'm trying to understand the horizon use case first
14:53 <ayoung> marekd, is it like this:
14:54 <ayoung> post to OS-FEDERATION/identity_providers/{idp}/protocols/saml2 and get a redirect...
14:54 <marekd> ayoung: the ecp workflow is described in a docstring in the ecp impl. in keystoneclient. I tried to make it verbose so ppl can understand what's going on:
14:54 <ayoung> and then when the redirect returns, we would have the SAML assertion handed to keystone, and keystone would have handed back a token?
14:55 <marekd> ayoung: yes. you can try this thing with your browser even today.
14:55 <marekd> browser will print XMLized unscoped token.
14:56 <marekd> ayoung: and you don't POST, you GET OS-FEDERATION/identity_providers/{idp}/protocols/saml2
<ayoung> marekd, I see that it is POST/GET in our docs
14:57 <marekd> ayoung: err, should rather be GET.
14:57 <marekd> at least in our case.
14:58 <marekd> but anyway
14:58 <marekd> you want to send some data with this call?
15:02 <ayoung> marekd, I wrote up the flow in the email as I understand it. I'd be interested to see how ksiu/dchadwick made it work for the Horizon case already, but I suspect it was by Horizon just playing man in the middle
15:02 <ayoung> marekd, for the Javascript approach GET vs POST is not a problem as we can support either
15:02 <marekd> ayoung: yes, i was simply wondering if you *need* to send some data (hence use POST) with this call...
15:03 <marekd> ayoung: reading the thread post.
15:03 <ayoung> Nope, just picked the first off the list
15:03 <marekd> ayoung: ok.
15:10 <marekd> ayoung: i am responding to the e-mail.
15:16 <lbragstad> bknudson: around?
15:17 <morganfainberg> YorikSar_, I've added what tests I can come up with for the pool. if you have any ideas on further testing, it would (of course) be appreciated. But all in all, I'm fairly happy with the pool at this point.
15:19 <Daviey> ayoung: Whilst talking about auth... did anyone work on adding requests_kerberos support into keystoneclient?
15:20 <ayoung> Daviey, yes, it is going into a separate repo
15:23 <morganfainberg> ayoung, for the enumerate unscoped tokens let me know when you have a sec, want to check with you on the domain stuff, because the rest of the code looked pretty good.
15:24 <ayoung> morganfainberg, you mean "we should also enumerate domains with unscoped tokens" yes, but it is less pressing, as it is a different use case
15:24 <Daviey> ayoung: Thanks!
15:25 <morganfainberg> ayoung, no
15:25 <ayoung> morganfainberg, as you need to know your domain a-priori in order to log in
15:25 <morganfainberg> ayoung, i think you dropped support for filtering on domain completely
15:25 <morganfainberg> ayoung, unless you pass domain_id.
15:26 <ayoung> morganfainberg, OK, I think I have dropped it. It was unintentional. I'll fix
15:26 <morganfainberg> ayoung, ++ other than that i think that looks good
15:27 <ayoung> good catch, too. It is clear now that you pointed it out, but I missed it in previous passes
15:32 <Daviey> ayoung: Hmm, the external repo hasn't been created yet - right?
15:33 <ayoung> Daviey, yes, but the code is not yet submitted. I have it in my github, though
15:33 <Daviey> ayoung: Ok, thanks
15:33 <ayoung> Daviey, and the original patch is still posted for review in python-keystoneclient
15:33 <ayoung> links in a second
15:35 <Daviey> ayoung: Jose's patchset?
15:35 <ayoung> Daviey, yes
15:36 <ayoung> Daviey, I've reworked it slightly
15:36 <bknudson> lbragstad: I'm around now.
15:37 <Daviey> ayoung: ok, i'll give it a go. Thanks
15:38 <ayoung> Daviey, cool. I'm working on integrating it into Horizon: Django-Openstack-Auth to be exact
15:38 <lbragstad> bknudson: hey, so I had a quick question on your review
15:38 <lbragstad> I played around with passing in a parameter to a decorated function,
15:38 <bknudson> to the decorator?
15:38 <lbragstad> but it never reaches _check_grant_protection
<lbragstad> bknudson: the kwargs are set by
15:39 <Daviey> ayoung: Yeah, been following your threads on Horizon
*** aix has joined #openstack-keystone15:40
ayoungDaviey, cool. You understand  the S4U2Setup necessary to make it work?15:42
bknudsonlbragstad: _check_grant_protection is referenced in @controller.protected(callback=_check_grant_protection)15:48
bknudsonlbragstad: if you want to pass an extra parameter to it, you can use functools.partial15:48
bknudsone.g., @controller.protected(callback=functools.partial(_check_grant_protection, allow_no_user=True))15:49
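A minimal sketch of the pattern bknudson describes: a `protected`-style decorator that takes a callback, with `functools.partial` pre-binding an extra keyword argument. The decorator and callback bodies here are simplified stand-ins, not keystone's real `controller.protected` implementation; the assumed callback signature is `(controller, kwargs)`.

```python
import functools

# Stand-in for keystone's ``controller.protected``; the real decorator
# enforces RBAC policy, this stub only shows how a partially-bound
# callback still receives its usual two positional arguments.
def protected(callback=None):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, **kwargs):
            if callback is not None:
                # Assumed callback signature: (controller, request kwargs)
                callback(self, kwargs)
            return f(self, **kwargs)
        return wrapper
    return decorator

def _check_grant_protection(self, kwargs, allow_no_user=False):
    # Hypothetical check: without allow_no_user, a missing user is an error.
    if not allow_no_user and 'user_id' not in kwargs:
        raise ValueError('user required')

class GrantController(object):
    # functools.partial pre-binds allow_no_user=True, so the decorator can
    # keep invoking the callback without knowing about the extra parameter.
    @protected(callback=functools.partial(_check_grant_protection,
                                          allow_no_user=True))
    def delete_grant(self, **kwargs):
        return 'deleted'
```

This is why `functools.partial` beats teaching `@controller.protected()` about extra kwargs: the binding happens at the call site that needs it, and nothing else changes.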
*** cjellick has quit IRC15:50
*** cjellick has joined #openstack-keystone15:54
*** diegows has quit IRC15:55
* lbragstad thanks bknudson16:03
*** wwriverrat has joined #openstack-keystone16:04
*** jsavak has joined #openstack-keystone16:04
*** joesavak has quit IRC16:04
*** rushiagr_away is now known as rushiagr16:05
*** radez is now known as radez_g0n316:08
marekdi cannot even resolve this hostname :(16:09
marekdayoung: ^^16:09
marekdayoung: are you sure it works from the internet ?16:09
ayoungmarekd, heh, did I send you the wrong one?16:09
ayoungit should be16:09
bknudsonlbragstad: another option is to have a new function, _check_grant_protection_delete() and use that instead.16:09
ayoungthat might be the problem...let me check16:09
ayoungYep...that was my problem too, sorry marekd16:10
marekdayoung: no worries.16:10
lbragstadbknudson: I got it working with _check_grant_protection16:10
ayoungthe cloudlab one is my internal dev system16:10
marekdayoung: that's what i thought.16:10
ayoungmarekd, actually it is a relief16:10
lbragstadbknudson: thanks for the functools.partial suggestion16:10
*** zzzeek has joined #openstack-keystone16:10
bknudsonlbragstad: maybe @controller.protected() could accept more kwargs and pass them on to the callback function.16:11
bknudsonbut that seems like overkill when you can just do functools.partial.16:11
lbragstadbknudson: right16:11
*** dhellmann_ is now known as dhellmann16:11
lbragstadwe don't ever pass in additional parameters16:11
lbragstadno other call does that16:11
ayoungmarekd, at least I created the accounts in the right system16:11
lbragstadat the very most, they pass in a callback,16:12
marekdayoung: lol, did you set some localization options (language etc?)?16:12
lbragstadso this seems to be the unique case16:12
ayoungmarekd, yes I did16:12
openstackgerritLance Bragstad proposed a change to openstack/keystone: Allow users to clean up role assignments.
marekdayoung: ah,ok.16:12
ayoungmarekd, specifically, it uses the locale specified by your browser16:12
ayoungmarekd, I tested it with Hebrew, Russian and Chinese16:12
openstackgerritLance Bragstad proposed a change to openstack/keystone: Allow users to clean up role assignments
ayoung I took a phrase, passed it to google translate and put the Russian in.  When I showed it to my boss (A native Russian speaker) he was immobilized with laughter for about a minute16:13
marekdayoung: so it's now tested with Polish.16:14
ayoungmarekd, Doskonałe16:15
marekdayoung: some parts are simply not translated. The rest looks cool.16:16
ayoungmarekd, getting translations for all languages is a huge undertaking. I'm psyched that we even have a subset in Polish16:16
marekdayoung: understood.16:16
ayoungmarekd, its also a Kerberos IdP. Once you get the password changed, if you want, I can help you go through the steps to get a TGT16:17
*** BAKfr has quit IRC16:17
marekdayoung: ok, so i am in you PoC (/cops). I simply put my user/pass, and what options shall i choose for getting a token?16:18
morganfainberglbragstad, the latest patch looks good. waiting for jenkins16:18
ayoungmarekd, ah16:18
lbragstadmorganfainberg: awesome16:18
marekdayoung: ayoung Token Fetch Failed with status 401.16:18
ayoungmarekd, the domain should be YOUNGLOGIC.NET16:18
*** wwriverrat has left #openstack-keystone16:18
marekdok ok16:19
marekdi see16:19
ayoungDefault is what the installer sets up, and works for admin and demo16:19
ayoungif I could remember the passwords16:19
marekdayoung: which authn method?16:20
*** bdossant has quit IRC16:20
ayoungmarekd, yeah, until you set up Kerberos16:20
ayoungstart with password, you should be able to get a token16:20
marekdayoung: ok, need to start in incognito mode.16:20
marekdi have a password.16:20
ayoungmarekd, now you can do list projects16:21
ayoungonce that is done, the drop down is populated, select one, and also the token radio button16:21
ayoungnext "Get token" should get you a scoped one16:21
marekdayoung: my project list is empty.16:22
ayoungof course it is, sec16:22
marekdmaybe i don't have any roles assigned16:22
ayoungmarekd, what did it generate for your user id?16:22
*** bjornar_ has joined #openstack-keystone16:23
ayoungmarekd, no, the SHA256 one16:23
marekdayoung: a, sorry: 88cd1c7c4618371137f59d4bbdadafb618617f1b368087a5e76ece04e1c0228f16:23
ayoungmarekd, try again16:24
ayoungyou should have "Member" role on "Demo" project16:24
marekdayoung: so long story short: your JS scripts simply mimic keystoneclient and are smart enough to send JSON to /auth/tokens and handle them later, right?16:24
openstackgerritLance Bragstad proposed a change to openstack/keystone: Allow users to clean up role assignments
marekdayoung: works16:24
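The flow marekd summarizes above (client-side JS building JSON and POSTing it to `/v3/auth/tokens`) uses the standard keystone v3 password-auth payload; the scoped token comes back in the `X-Subject-Token` response header. A sketch of the request body a client would build (user/project names are illustrative):

```python
import json

def build_password_auth(username, password, user_domain,
                        project=None, project_domain=None):
    """Build the JSON body POSTed to /v3/auth/tokens for password auth."""
    body = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': username,
                        'domain': {'name': user_domain},
                        'password': password,
                    }
                }
            }
        }
    }
    if project:
        # Scoping to a project yields a scoped token (and a service catalog),
        # matching the "select a project, then Get Token" step above.
        body['auth']['scope'] = {
            'project': {'name': project,
                        'domain': {'name': project_domain or user_domain}}
        }
    return json.dumps(body)
```

Omitting the `scope` key gives an unscoped token, which is what the first "get a token with password" step produces.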
openstackgerritLance Bragstad proposed a change to openstack/keystone: Allow users to clean up role assignments
ayoungmarekd, you can now create a trust, if you want16:28
marekdayoung: ok16:28
*** cjellick has quit IRC16:28
ayoungyou will be both trustor and trustee, and it is based on your token16:28
ayoungProof of concept limitations16:28
ayoungbut then you can use that trust + your token to get another token16:29
ekarlsoFor you keystone folks: http://localhost:8000/introducing-servizor.html just as a fun tool using ksclient :)16:30
ekarlso rather16:32
morganfainbergekarlso, hehe16:32
morganfainbergekarlso, was going to say "localhost" ;)16:32
ekarlsofound it as a pretty fun way to load endpoints quickly :)16:32
ayoungekarlso, looking16:34
ayoungneat...anything that should find its way into the openstack common client, or just a debugging tool?16:35
ekarlsoayoung: i'm using it for changing from http / https atm16:37
ekarlsobut could probably be added to the client as a contrib thing16:37
ayoungekarlso, I've needed that on occasion16:37
*** cjellick has joined #openstack-keystone16:37
ekarlsodoing keystone shell commands to do it is way too tedious16:38
*** bjornar_ has quit IRC16:38
*** andreaf is now known as andreaf_16:38
raildohenrynash, ping16:45
*** jasonsb has quit IRC16:46
*** jaosorior has quit IRC17:00
*** nkinder has quit IRC17:00
*** palendae has quit IRC17:00
*** bjornar has quit IRC17:00
*** DavidHu_ has quit IRC17:00
*** fifieldt_ has quit IRC17:00
*** jimbaker has quit IRC17:00
*** jaosorior has joined #openstack-keystone17:00
*** nkinder has joined #openstack-keystone17:00
*** palendae has joined #openstack-keystone17:00
*** DavidHu_ has joined #openstack-keystone17:00
*** bjornar has joined #openstack-keystone17:00
*** fifieldt_ has joined #openstack-keystone17:00
*** jimbaker has joined #openstack-keystone17:00
*** bambam1 has quit IRC17:00
*** rodrigods has quit IRC17:00
*** rwsu has quit IRC17:00
*** toddnni has quit IRC17:00
*** palendae has quit IRC17:01
*** amcrn has joined #openstack-keystone17:01
*** gyee has joined #openstack-keystone17:02
*** bambam1 has joined #openstack-keystone17:02
*** rodrigods has joined #openstack-keystone17:02
*** rwsu has joined #openstack-keystone17:02
*** toddnni has joined #openstack-keystone17:02
*** palendae has joined #openstack-keystone17:02
*** topol has quit IRC17:02
*** ctracey_ has quit IRC17:02
*** henrynash has quit IRC17:02
*** amerine has quit IRC17:02
*** harlowja_away has quit IRC17:02
*** thiagop has quit IRC17:02
*** htruta has joined #openstack-keystone17:02
*** topol has joined #openstack-keystone17:02
*** ctracey_ has joined #openstack-keystone17:02
*** henrynash has joined #openstack-keystone17:02
*** amerine has joined #openstack-keystone17:02
*** harlowja_away has joined #openstack-keystone17:02
*** thiagop has joined #openstack-keystone17:02
raildohenrynash, we have created this document that shows all our patches about hierarchical projects, if you want to review :)
*** jsavak has quit IRC17:06
*** joesavak has joined #openstack-keystone17:08
*** sigmavirus24 is now known as sigmavirus24_awa17:18
YorikSar_morganfainberg: Hi17:18
YorikSar_morganfainberg: Unfortunately I couldn't find time to work on this during the day...17:19
morganfainbergYorikSar_, no worries17:19
morganfainbergYorikSar_, i've addressed most of the concerns and added some basic testing.17:19
YorikSar_morganfainberg: I've seen your tests though, they look fine. I'd like to add some concurrency tests if you don't mind.17:19
*** YorikSar_ is now known as YorikSar17:19
*** radez_g0n3 is now known as radez17:19
dstanekYorikSar: that would be awesome17:20
morganfainbergYorikSar, yeah some concurrency tests would be great17:20
YorikSarmorganfainberg: Although I totally don't understand what was the issue with _create_connection and eventlet's yielding...17:20
dstanekYorikSar: it doesn't look like it's a problem now, but could be in the future17:20
YorikSarmorganfainberg: It's all done under a lock, no two threads can modify the pool at one time...17:20
YorikSardstanek: ^17:21
morganfainbergYorikSar, so... we've serialized everything accessing memcached.17:21
morganfainbergYorikSar, ick. that is likely bad.17:21
YorikSarmorganfainberg: No, not everything.17:21
YorikSarmorganfainberg: Only everything related to pool itself.17:21
YorikSarmorganfainberg: acquire and release17:22
dstanekwe've only serialized getting stuff in and out of the queue17:22
YorikSardstanek: Basically, yes.17:22
morganfainbergdstanek, which is most everything since an acquire/release is done on each and every action to memcache17:22
YorikSarmorganfainberg: We release lock after acquire17:22
YorikSarmorganfainberg: And now other threads can access this pool.17:23
*** dhellmann is now known as dhellmann_17:23
dstanekmorganfainberg: yes, sorta - it definitely doesn't help performance there17:23
morganfainbergYorikSar, but it's eventlet, you're not really running full threads.17:23
morganfainbergYorikSar, anyway.17:23
morganfainbergdstanek, thats my thought17:23
YorikSarmorganfainberg: With monkeypatch all primitives are patched17:23
morganfainbergwhatever though17:23
YorikSarmorganfainberg: So locks are greenthread-aware17:23
dstanekwhy hasn't this been reported before?17:24
morganfainbergYorikSar, i really wish i could just delete eventlet17:24
morganfainbergYorikSar, it is awful17:24
morganfainbergdstanek, because no one was paying attention to the number of sockets consumed by python-memcached under eventlet before17:24
morganfainbergdstanek, actually it was reported there is an ancient bug saying we need an eventlet safe memcache lib17:25
YorikSarmorganfainberg: Ah... You just need to forget about it. As long as your code doesn't use C libraries, everything works just as with usual threads.17:25
marekdmorganfainberg: you don't like concept of single threaded + nonblocking operations application or just eventlet? :-)17:25
dstanekbut if it is a leak wouldn't it grow indefinitely until it reached the ulimit of fds per process?17:25
morganfainbergmarekd, eventlet is a trainwreck17:25
morganfainbergmarekd, coroutines are generally awful to work with.17:25
dstanekmorganfainberg: in this case though it's not eventlet's fault17:25
morganfainbergdstanek, it is thread.local + eventlet17:25
marekdmorganfainberg: threads can also be tricky :(17:26
YorikSarmorganfainberg, dstanek: Our guys ran rally on a 20-node deployment of OpenStack. And they found out that there's an awfully huge number of connections on the controllers.17:26
morganfainbergmarekd, well with the GIL threading is just a bad idea all around17:26
marekdmorganfainberg: Python :(17:26
dstanekmorganfainberg: from what i can tell doing some testing by hand i was having 1 memcached connection per greenthread, which is what i expected17:26
morganfainbergdstanek, it's not a "leak", it's that every request gets a new client object and they aren't cleaned up quickly in all cases; sometimes load/contention causes it to be even worse17:26
YorikSardstanek: There's no leak...17:26
*** harlowja_away is now known as harlowja17:27
morganfainbergdstanek, in a simple tight loop of requesting tokens i could get a devstack to run out of FDs or starve memcached17:27
*** marcoemorais has joined #openstack-keystone17:27
morganfainbergdstanek, 400-600 active sockets to memcached17:27
YorikSardstanek: But with thousands of open connections memcached and Python processes become very slow.17:27
dstanekhmm...that's interesting. i wasn't able to duplicate that17:28
morganfainbergdstanek, it's a known issue and thread.local is a bad idea with eventlet. the problem is python-memcache and dogpile both use thread.local17:28
morganfainbergdstanek, it just opens us up to being DOSed simply by forcing connections to memcached17:29
YorikSardstanek: Just run Keystone with devstack and run 'ab -c 100 -n 1000 -H "X-Auth-Token: $token"'17:29
morganfainbergsince everything would block on memcached (e.g. auth_token, keystone validating a token, etc)17:29
YorikSardstanek: Then you can see in 'lsof -i4:11211' huge number of connections.17:29
morganfainbergit's one of the main reasons memcached should probably not be used as a token backend ever.17:30
morganfainbergthe other being that using memcached as stable storage is an awful idea17:30
YorikSardstanek: Now if you split load producer and have 3 controllers, you'll get the issue.17:30
dstanekYorikSar: that's basically how i was testing. i wonder if i was somehow reusing greenthreads instead of spawning a new one for each request17:30
YorikSardstanek: Nah... There will be a lot of connections waiting to be closed...17:31
dstanekmorganfainberg: yeah, that is explicitly called out everywhere as a huge no-no17:31
morganfainbergYorikSar, if you want to revert the bit of stuff around the acquired that's fine17:32
YorikSarmorganfainberg: I want to rearrange things a bit there to make it clear what happens when.17:32
YorikSarmorganfainberg: btw, I keep wondering, why do you post so many patchsets in a row? :)17:33
*** marcoemorais has quit IRC17:33
morganfainbergYorikSar a few were typos / forgot to git add a file17:33
morganfainbergYorikSar fixing a doc bug.17:34
*** marcoemorais has joined #openstack-keystone17:34
morganfainbergYorikSar a couple things that cropped up along the way that I missed in local testing17:34
*** marcoemorais has quit IRC17:34
*** marcoemorais has joined #openstack-keystone17:34
*** marcoemorais has quit IRC17:34
*** marcoemorais has joined #openstack-keystone17:35
*** marcoemorais has quit IRC17:35
morganfainbergYorikSar, in either case lets get some concurrency testing in and see if we can get this gating.17:35
*** marcoemorais has joined #openstack-keystone17:36
YorikSarmorganfainberg: Yes, sure. I'm already working on acquire, will switch to tests soon17:36
*** marcoemorais has quit IRC17:36
morganfainbergYorikSar, thanks.17:36
lbragstadmorganfainberg: is this related to the bug you just opened?
uvirtbotLaunchpad bug 1370492 in keystone "calling curl "HEAD" ops time out on /v3/auth/tokens" [Undecided,New]17:36
*** marcoemorais has joined #openstack-keystone17:37
morganfainberglbragstad, don't think that is the same thing17:41
morganfainberglbragstad, the issue i reported is just that in some cases a HEAD will result in a 404 where a get will result in 2XX17:41
YorikSarmorganfainberg, dstanek: Should I add that busylooping to acquire to log smth like "Still waiting for a slot..."?17:42
*** packet has joined #openstack-keystone17:42
stevemarbknudson, i think i addressed your comments here: should be a quick review17:43
morganfainbergYorikSar, nah, lets not add more logging.17:47
morganfainbergYorikSar, so, what is the major downside if we move away from the auto-scaling pools?17:48
morganfainbergYorikSar, don't we need to be able to handle the maximum capacity anyway?17:48
morganfainbergYorikSar, (this is just making sure we're not over-engineering here)17:48
*** bjornar_ has joined #openstack-keystone17:49
YorikSarmorganfainberg: Ah, yes, I caught some of your discussion yesterday...17:49
morganfainbergYorikSar, could we just pre-allocate the connections and use a straight eventlet.queue ( dstanek brought this up last night ) it is worth considering17:49
*** dhellmann_ is now known as dhellmann17:50
morganfainbergYorikSar, it would be a lot simpler code, removes the explicit busy wait scenarios.17:50
morganfainbergand we don't need the extra locking for reaping the connections, a simple get/put on the queue in the context manager17:51
morganfainberglocking is implicit vs explicit then17:51
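The pre-allocated design morganfainberg and dstanek are discussing can be sketched in a few lines: create every client up front, then acquire/release is just a blocking `get`/`put` on a queue inside a context manager, so the queue's own lock does all the synchronization. This is an illustrative sketch, not keystone's code; keystone under eventlet would use `eventlet.queue`, the stdlib `queue` is shown so the sketch is self-contained, and `make_client` stands in for constructing a memcache client.

```python
import contextlib
import queue  # keystone would use eventlet.queue under eventlet

class PreallocatedPool(object):
    """Fixed-size pool: no auto-scaling, no reaper, no explicit busy-wait."""

    def __init__(self, make_client, size=10):
        self._q = queue.Queue(maxsize=size)
        for _ in range(size):
            # All connections exist from the start: steady state immediately.
            self._q.put(make_client())

    @contextlib.contextmanager
    def acquire(self, timeout=None):
        # Blocks (up to ``timeout``) when all connections are checked out;
        # the queue's internal mutex makes this safe without our own lock.
        conn = self._q.get(timeout=timeout)
        try:
            yield conn
        finally:
            self._q.put(conn)
```

The trade-off raised below still applies: the pool holds its maximum connection count even when idle, in exchange for much simpler code.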
lbragstadbknudson: the token api doesn't use json_home?17:51
YorikSarmorganfainberg: Well, I think having 100 unused open connections is a waste...17:52
bknudsonlbragstad: I don't think anything uses json home at this point.17:52
YorikSarmorganfainberg: And we'll still need all this code around, except cleaner.17:52
morganfainbergYorikSar, but you still need to support up to 100 connections at any given time.17:52
morganfainbergYorikSar, so what is the difference between having the connections and spending cycles ramping up/reaping connections.17:52
openstackgerritSteve Martinelli proposed a change to openstack/python-keystoneclient: SAML2 federated authentication for ADFS.
*** rushiagr is now known as rushiagr_away17:53
*** amakarov is now known as amakarov_away17:53
*** zzzeek has quit IRC17:53
YorikSarmorganfainberg: Time needed to open/close a connection is negligible compared to unused_timeout.17:54
morganfainbergYorikSar, that isn't answering my question.17:54
*** zzzeek has joined #openstack-keystone17:54
morganfainbergYorikSar, if we need to support 100 active connections, what is the real benefit of removing them vs. just keeping them around.17:54
morganfainbergYorikSar, again this is not about saying the reaping is a bad idea, just making sure we're not overengineering this and making it harder to maintain17:55
YorikSarmorganfainberg: Just to release resources to let them be used elsewhere, I think...17:57
*** radez is now known as radez_g0n317:57
YorikSarmorganfainberg: It's rather cheap (see prev. message) and doesn't require much code (15 lines of code in cleanup).17:58
YorikSarmorganfainberg: So not holding on to unused resources should be enough to justify having this very small overhead.18:00
dstanekYorikSar: i think it's the complexity of managing it at all - the busy loop, etc.; not the lines of code18:02
*** radez_g0n3 is now known as radez18:02
dstanekthe operator has to scale to handle the number of expected connections anyway - why not get into a steady state early?18:02
YorikSardstanek: That loop is not so busy and is needed to limit the total number of connections, not to clean them up.18:02
*** packet has quit IRC18:03
*** sigmavirus24_awa is now known as sigmavirus2418:03
dstanekthat can be done without a loop though right? assuming you don't have to clean them up18:03
morganfainbergYorikSar, dstanek,
morganfainbergsomething much much simpler without the need to "cleanup"18:04
*** stevemar has quit IRC18:04
*** stevemar has joined #openstack-keystone18:04
morganfainbergit's not complete, just a quick example18:04
dstanekmorganfainberg: :-) i was just looking for the paste i sent you, but wasn't having any luck18:04
morganfainbergdstanek, hehe18:05
morganfainbergdstanek, yeah couldn't find it so i recreated something close18:05
morganfainbergdstanek, close-ish18:05
YorikSarmorganfainberg: Oh, please, dont do explicit imports from eventlet :)18:05
dstanekmorganfainberg: found it!
morganfainbergdstanek, ++ much better18:06
dstanekYorikSar: we could use Python's queue, but in this case i don't think this backend should be used outside of eventlet18:06
YorikSarmorganfainberg: Otherwise yes, this hides all this stuff in stdlib...18:06
YorikSar(which still have that loop inside)18:07
morganfainbergsure. i'm looking at maintainability, do we need to re-implment it18:07
dstaneki don't think we'll hit any loops in the stdlib version because we'll only be putting things that fit under maxsize18:08
YorikSardstanek: Let me show you...18:09
dstanekit should just be for getting items which is a simpler loop18:09
*** saipandi has joined #openstack-keystone18:10
YorikSardstanek: Queue uses semaphore, semaphore uses condition, condition _requires_ loop around waiting on it.18:10
morganfainbergYorikSar, but we're not needing to re-implement stdlib stuff in our code.18:10
YorikSarmorganfainberg: Yes, I agree.18:10
dstanekright, but we're not adding our own loop - either solution uses that18:11
YorikSarmorganfainberg: I'm thinking about how we could use semaphore here.18:11
dstanekmy comment about removing the loop was our own busy loop18:11
morganfainbergYorikSar, which was the whole point of this conversation, do we get a benefit from re-implementing the loop?18:11
YorikSarmorganfainberg: Wait... We weren't talking about loop. We were talking about cleaning.18:12
morganfainbergYorikSar, they are closely related18:12
dstanekmorganfainberg: the reason you see that factory in my paste is that i started to play with a _trim method to removed idle connections18:12
morganfainbergYorikSar, without the need for cleaning do we need our own loop18:12
YorikSarmorganfainberg: Nah...18:12
YorikSarmorganfainberg: It seems we don't need that loop in any case.18:12
morganfainbergYorikSar since we don't need to reach into the list to clean it up, we don't.18:13
dstaneka Queue has a queue property that is a deque so it's possible to remove idle, but slightly complicated and needs extra locking18:13
YorikSarmorganfainberg: If we need cleaning or not is a separate question.18:13
morganfainbergYorikSar, queue is more opaque than a list, it largely looks like the loop is meant to support a list instead of queue object18:13
morganfainbergso we can clean it up without needing to pop everything off the queue, inspect and re add18:13
YorikSarmorganfainberg: Ok, I don't understand you... With queue we can't have cleanup logic.18:14
YorikSarmorganfainberg: To have cleanup logic we need a stack (-ish).18:14
morganfainbergYorikSar, right18:14
YorikSarmorganfainberg: So if we don't need cleanup, we can leave all locking to queue.18:15
morganfainbergYorikSar, the re-implementation of the loop appears to be related to using a list as a stack instead of a full queue.18:15
YorikSarmorganfainberg: But we won't need any explicit loops even with cleanup.18:16
YorikSarmorganfainberg: no....18:16
dstanekyou can cleanup with a queue18:16
morganfainbergdstanek, you indicated it was painful, so i'd say we shouldn't18:16
YorikSarmorganfainberg: Loop is there just because I didin't think about using semaphore here..18:16
YorikSardstanek: No, you can't.18:16
dstanekmorganfainberg: it's painful, but possible18:17
morganfainbergdstanek, yeah let's assume it's not possible for our implementation's sake :)18:17
dstanekYorikSar: why not? i had a poc that did it (unsafely because i didn't implement locking)18:17
YorikSardstanek: queue is FIFO. One thread with 6 memcached requests will use 6 different connections from the queue.18:17
YorikSardstanek: So all elements of the queue will get constantly renewed.18:18
*** packet has joined #openstack-keystone18:18
YorikSardstanek: You need to replace queue with stack (or better well)18:18
morganfainbergYorikSar, ok so for this implementation's sake (and because it's not worth it) lets assume we can't cleanup a queue.18:19
YorikSardstanek: So that connections are used from the top of it, and on the bottom we have old unused connections rotting.18:19
dstanekYorikSar: isn't that what we have now, and we just use the cleanup to find them?18:19
morganfainbergdstanek, we actually append recently used connections to the end, it's not really a "stack" it's a FIFO with some slicing cleanup logic.18:20
YorikSardstanek: Now we acquire connections from the top of the stack (.pop()) and release them to the top of the stack (.append) while cleaner works from the bottom of the stack.18:21
morganfainbergoh crap wait i misread that yeah we use it as a stack18:21
morganfainbergblah brain .. not working today18:21
dstanekYorikSar: not worth discussing now; but my poc uses a Queue subclass that overrides ._get to .pop and extends .put to cleanup18:22
dstanekYorikSar: i was tempted to use LifoQueue, but i like the efficiency of using a deque18:24
YorikSardstanek: I don't think deque is faster than working on the left end of the list.18:27
dstanekYorikSar: it should be much faster because it doesn't have to move the list or create a new one18:28
dstaneki haven't timed it though18:28
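The performance claim dstanek makes here is easy to check: `list.pop(0)` must shift every remaining element (O(n) per pop), while `deque.popleft()` is O(1). A rough comparison; absolute numbers vary by machine, only the ordering matters:

```python
import timeit

n = 20000
# Drain a list from the left: each pop(0) memmoves the whole remainder.
list_time = timeit.timeit(
    'while l: l.pop(0)',
    setup='l = list(range(%d))' % n, number=1)
# Drain a deque from the left: constant-time per popleft().
deque_time = timeit.timeit(
    'while d: d.popleft()',
    setup='from collections import deque; d = deque(range(%d))' % n,
    number=1)
print('list.pop(0): %.4fs  deque.popleft(): %.4fs' % (list_time, deque_time))
```

YorikSar's counterpoint also holds: for the common right-end operations (`append`/`pop`) a list is fine; the deque only wins for the rarer left-end cleanup path.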
lbragstadmorganfainberg: I can confirm
uvirtbotLaunchpad bug 1370492 in keystone "calling curl "HEAD" ops time out on /v3/auth/tokens" [Low,Confirmed]18:29
lbragstadmorganfainberg: want to include it in RC1?18:29
YorikSardstanek: After you fill the list, it'll stay fixed in memory as long as you don't add 2x more items to it.18:29
morganfainberglbragstad, hm.18:30
morganfainberglbragstad, we use HEAD for "check" on the token right?18:30
YorikSarmorganfainberg, dstanek: Looking at the 'implementation' (4 overloaded methods) of LifoQueue I think we can use that, add some cleanup logic and proper size accounting and have the same pool, but without explicit concurrency outside stdlib.18:31
*** marcoemorais has quit IRC18:31
morganfainberglbragstad, means we probably need to look at why that is occurring.18:31
morganfainberglbragstad, and hit it before RC18:31
*** marcoemorais has joined #openstack-keystone18:32
*** marcoemorais has quit IRC18:32
*** marcoemorais has joined #openstack-keystone18:32
YorikSarmorganfainberg, dstanek: Should I try that or do you think it's not worth it at all?18:32
morganfainbergYorikSar, it would probably make things a lot more readable.18:35
morganfainbergYorikSar, but i'm still not sure we *need* the cleanup logic18:35
lbragstadmorganfainberg: that *should* be doing
morganfainberglbragstad, hm. i wonder18:36
YorikSarmorganfainberg: Ok, I'll implement it and we'll see...18:36
*** amcrn has quit IRC18:38
*** amcrn has joined #openstack-keystone18:38
dstanekYorikSar: i'm curious now about the implementation of lists (for my own knowledge) - where can i find info on why what we are doing won't shift the elements?18:39
dstanekYorikSar: i thought the l[:1] = [] was effectively the same as l.pop(0)18:39
YorikSardstanek: Yes, that part is slow.18:40
lbragstadmorganfainberg: ... it's looping in routes/middleware.py18:40
morganfainberglbragstad, ah18:40
dstanekYorikSar: ah, ok; that's why i used the deque18:40
YorikSardstanek: But with deque we are constantly allocating and deallocating chunks when we put and fetch from it.18:40
YorikSardstanek: So with list common operations are fast (pop/append), rare operations are slow (popleft).18:42
YorikSardstanek: Ah, wait... Did you put items to the left end of deque?18:42
dstaneki changed get to pop so that they came from the right and the put already appended to the right18:43
dstanekmy cleanup did a popleft and checked for idle timeout - if it was idle i would trash the connection and popleft again18:43
YorikSardstanek: Oh, in that case deque will just add fast cleanup, yes...18:43
dstanekonce i reached a non-idle connection i put it back18:43
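dstanek's approach can be sketched as a `Queue` subclass: `_init` swaps in a deque, `_get` pops from the right (LIFO, so hot connections are reused first), and `_put` reaps from the left, where the longest-idle entries sit. This is a sketch built on the idea described above, not keystone's eventual `_memcache_pool`; it leans on `Queue` calling `_get`/`_put` under its own mutex, which is a private-API detail, and it stores `(connection, release_time)` pairs as an assumed bookkeeping scheme.

```python
import collections
import queue
import time

class ConnectionPool(queue.Queue):
    """LIFO pool that drops idle connections on each release."""

    def __init__(self, maxsize, unused_timeout=60):
        queue.Queue.__init__(self, maxsize)
        self._unused_timeout = unused_timeout

    def _init(self, maxsize):
        # Deque instead of the default list: O(1) popleft for cleanup.
        self.queue = collections.deque()

    def _get(self):
        # Pop from the right: the most recently released connection.
        conn, _released = self.queue.pop()
        return conn

    def _put(self, conn):
        now = time.time()
        self.queue.append((conn, now))
        # Leftmost entries have been idle longest; stop at the first
        # non-idle one, exactly the popleft-and-check loop described above.
        while self.queue:
            _old, released = self.queue[0]
            if now - released < self._unused_timeout:
                break
            self.queue.popleft()
```

Because `put()` and `get()` already hold the queue's mutex when they call `_put`/`_get`, the cleanup needs no extra locking, which is the point YorikSar makes just below.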
dstaneklet me see if i still have that - it was a trivial amount of code18:44
*** stevemar has quit IRC18:44
YorikSardstanek: mb you can put it to that CR?18:45
dstanekYorikSar: i don't have it anymore, but i can recreate and post it18:47
YorikSardstanek: btw, did you account for acquired connections?18:48
YorikSardstanek: I guess if you don't have the code I can write it then.18:49
dstanekYorikSar: yeah, i had to keep a count in the cleanup version, but it's not safe yet - i have to see about using the existing queue lock for it18:52
dstanekYorikSar: that's why i like the no-cleanup version - much simpler18:52
YorikSardstanek: If you do cleanup in _put, you're already protected by queue's mutex.18:53
morganfainbergdstanek, and the no-cleanup version would hit our needs for Juno and middleware18:53
*** Apsu has joined #openstack-keystone18:54
dstanekYorikSar: it's not the cleanup that bothers me it's adding more items when we need more18:54
dstanekmorganfainberg: i'll throw something together for you guys to look at18:54
YorikSardstanek: That should happen in _get - still guarded by mutex18:54
*** aix has quit IRC18:55
*** cjellick has quit IRC18:55
dstanekYorikSar: that's not the current design though; i could make the Pool a subclass of queue and maybe do it like that18:55
YorikSardstanek: Yes.18:56
YorikSardstanek: Ok, looks like now we both are doing the same thing :)18:56
YorikSardstanek: I guess I'll leave it to you then.18:56
*** packet has quit IRC18:56
dstanekYorikSar: did you already start working on it?18:56
YorikSardstanek: Just a little bit.18:57
YorikSardstanek: Although I'd be happy to leave it to you if you don't mind. It's getting late here :)18:57
dstanekYorikSar: i'm happy to let you continue this; there's still a few RC reviews to get through today; just let me know what you want to do18:57
*** jasonsb has joined #openstack-keystone18:58
YorikSardstanek: Oh, ok. I'll subclass queue and add cleanup logic to it.18:58
YorikSardstanek: I'll post my version in some time then.18:58
dstanekYorikSar: ok, if you need to bail because it's late just let me know i can help pick up where you left off18:59
YorikSardstanek: Ok, sure18:59
*** gyee has quit IRC19:02
*** packet has joined #openstack-keystone19:04
*** amcrn has quit IRC19:07
*** packet has quit IRC19:11
*** joesavak has quit IRC19:11
*** marcoemorais has quit IRC19:11
*** packet has joined #openstack-keystone19:12
*** marcoemorais has joined #openstack-keystone19:12
*** joesavak has joined #openstack-keystone19:12
*** marcoemorais has quit IRC19:12
*** radez is now known as radez_g0n319:12
*** marcoemorais has joined #openstack-keystone19:12
*** amcrn has joined #openstack-keystone19:16
*** marcoemorais has quit IRC19:16
*** marcoemorais has joined #openstack-keystone19:16
*** stevemar has joined #openstack-keystone19:17
*** vdreamarkitex has joined #openstack-keystone19:17
YorikSardstanek: damn eventlet...19:23
YorikSardstanek: it doesn't provide a way to override _qsize.19:23
dstanekYorikSar: in eventlet's queue implementation?19:24
YorikSardstanek: yep19:24
YorikSardstanek: I can add rather ugly hack-around though...19:25
*** packet has quit IRC19:25
YorikSardstanek: can=will19:25
dstanekyou can't override qsize()?19:25
YorikSardstanek: I can, but we shouldn't do that in stdlib's version of queue.19:26
YorikSardstanek: I'll add an if: to cover eventlet...19:27
dstanekare you using both?19:27
YorikSardstanek: I just ran tests created by morganfainberg. We seem to have eventlet around in tests.19:28
morganfainbergYorikSar, we likely do as we start full versions of keystone in the functional tests19:29
*** cjellick has joined #openstack-keystone19:30
YorikSarmorganfainberg: Yeah... That means we don't run unittests w/o eventlet.19:31
morganfainbergYorikSar, sounds about right.19:31
morganfainbergYorikSar you can set an ENV arg to use standard threads19:31
*** jaosorior has quit IRC19:32
YorikSarmorganfainberg: Just 'ENV= tox -e py27'?19:32
morganfainbergYorikSar, looking for it19:32
*** openstackgerrit_ has joined #openstack-keystone19:34
morganfainbergYorikSar, 'STANDARD_THREADS'19:34
YorikSarmorganfainberg: Oh, thanks19:34
morganfainbergYorikSar, so STANDARD_THREADS=True tox -epy2719:34
morganfainbergshould do it19:34
YorikSarmorganfainberg: Good. Will verify with usual thread later then.19:34
morganfainbergYorikSar, ++19:35
*** radez_g0n3 is now known as radez19:36
openstackgerritSteve Martinelli proposed a change to openstack/python-keystoneclient: Handle federated tokens
stevemarbknudson, ^ round 2! fight!19:47
*** jsavak has joined #openstack-keystone19:50
*** packet has joined #openstack-keystone19:51
henrynashdstanek, lbragstad: if either of you have a moment, looking for additional +2/A on
*** joesavak has quit IRC19:51
lbragstadhenrynash: checking19:51
henrynashlbragstad: thx19:52
*** aix has joined #openstack-keystone19:53
stevemarhenrynash, do you deserve a +2 on that!19:56
stevemar.... looking19:56
henrynashstevemar: far be it from me to be so bold as to suggest anything so outrageous…19:56
henrynashstevemar: but if a couple of pints of english “old thumper” ale will swing your vote let me know :-)19:58
morganfainbergstevemar and lbragstad,  henrynash's patch looks good to me, but i'm happy to let you continue reviewing it.19:58
stevemarhenrynash, well you're off to a good start, passed the bknudson test19:58
morganfainbergthe tests took me a while to get through19:58
stevemarmorganfainberg, go ahead and +A/+2, i'm still going to review it to get to know the code19:58
lbragstadhenrynash: morganfainberg stevemar I stepped through the tests again... looks all good19:58
lbragstadmy comments were addressed19:58
morganfainberglbragstad, go for the +2/+A then.19:58
dstaneklbragstad: ++19:59
* lbragstad picks up hammer19:59
* lbragstad good Mjölnir... good... 19:59
henrynashthank y’all kindly20:00
morganfainbergreally winding down the RC list20:00
lbragstadhenrynash: thank you for being so prompt!20:00
*** andreaf has joined #openstack-keystone20:02
dstanekto that end...does anyone know the author of ? they haven't been back in a while20:03
*** radez is now known as radez_g0n320:04
bknudsononly thor can lift Mjölnir.20:05
stevemardstanek, was thinking the same, maybe one of us should add a test20:06
stevemarbknudson, maybe lbragstad is Thor20:06
morganfainbergdstanek, i actually saw that recently20:06
bknudsonstevemar: I've never seen them both in the same place at the same time...20:06
morganfainbergit should be an easy test case20:07
*** jasonsb has quit IRC20:07
stevemarbknudson, I think you scared off the authors of
lbragstadlike Clark and his glasses20:07
bknudsonstevemar: I'm waiting for them to email my manager.20:07
openstackgerritYuriy Taraday proposed a change to openstack/keystone: Add a pool of memcached clients
YorikSardstanek, morganfainberg: ^20:08
morganfainbergYorikSar, looking, it's looking easier to read already.20:08
stevemarmorganfainberg, if you want to reapply your +2 here: I just moved some things around to make future federation fixtures easier20:09
*** topol has quit IRC20:09
YorikSardstanek, morganfainberg: It passed Morgan's tests and couple of ab runs with dying and resurrecting memcached backends20:09
*** amcrn has quit IRC20:09
morganfainbergstevemar, done20:09
YorikSardstanek, morganfainberg: I'm falling asleep, so I'm looking forward to seeing your comments/fixes in the morning.20:10
dstanekYorikSar: awesome, thanks! i'll take a look20:10
morganfainbergYorikSar, generally speaking, looking good! sleep well!20:10
*** amcrn has joined #openstack-keystone20:13
*** jasonsb has joined #openstack-keystone20:15
stevemarbknudson, morganfainberg dstanek i'm looking at tempest now, and according to their docs - they perform CLI tests, but apparently the tests can't 'change the cloud', so i guess no create tests?20:15
bknudsonstevemar: that's what I assumed... so no token_flush20:16
morganfainbergdstanek, going to post a fix with a couple minor changes to the recent patch by YorikSar unless you are currently reviewing20:16
stevemaris there a way to test the CLI so that it can change content in the cloud?20:16
morganfainbergdstanek, which case i'll post my 2 comments.20:16
bknudsonstevemar: ask on -qa20:16
dstanekmorganfainberg: nope, looking at another review now20:16
morganfainbergk will post with the 2 minor fixes.20:16
stevemarbknudson, yeah, figured i'd ask here first to see what we've done in the past20:17
bknudsonstevemar: if we'd done anything in the past we wouldn't have so many bugs20:17
stevemarthe amount of hate bknudson has for bugs is too damn high20:17
*** jasonsb has quit IRC20:19
*** aix has quit IRC20:23
openstackgerritMorgan Fainberg proposed a change to openstack/keystone: Add a pool of memcached clients
*** gyee has joined #openstack-keystone20:35
*** jsavak has quit IRC20:37
*** jasonsb has joined #openstack-keystone20:38
*** jasonsb has quit IRC20:43
*** _cjones_ has joined #openstack-keystone20:44
*** bjornar_ has quit IRC20:47
morganfainbergstevemar, lol20:48
*** radez_g0n3 is now known as radez20:54
*** cjellick has quit IRC20:58
*** marcoemorais has quit IRC21:02
*** marcoemorais has joined #openstack-keystone21:03
*** zzzeek has quit IRC21:04
*** cjellick has joined #openstack-keystone21:05
*** nkinder has quit IRC21:09
*** gordc has quit IRC21:10
*** stevemar has quit IRC21:12
*** stevemar has joined #openstack-keystone21:18
*** david-lyle has quit IRC21:19
*** r1chardj0n3s_afk is now known as r1chardj0n3s21:20
dstanekso what's the point of this bug?
uvirtbotLaunchpad bug 1362245 in keystone "Update  Endpoint Filter APIs" [Low,In progress]21:23
dstanekthe "why" we need this or "why" it's currently wrong is missing21:23
bknudsondstanek: I don't think that should be a release blocker.21:26
bknudsonit looks like a nice-to-have21:27
bknudsonespecially since the date for stability of the API has already passed21:27
bknudsonand the keystoneclient already has an implementation using the old api21:27
morganfainbergbknudson, agree. it should not be a release blocker, but it's also something that we could easily get in within the next day or so.21:27
morganfainbergbknudson, as long as the old impl still works i'm fine with this going in. if it slips to K1, i wont be upset though21:27
dstanekbknudson: yes, i definitely agree that it shouldn't be a blocker21:27
dstanekwhy switch the order of the URL though?21:28
morganfainbergdstanek, more consistent with our other APIs21:28
bknudsondoes anybody still use the URLs? just use JSON Home.21:28
dstanekmorganfainberg: the near-duplicate options here are confusing
morganfainbergdstanek, yeah it is because cache != kvs21:30
morganfainbergdstanek, i added some extra details into the help text and they were removed in a subsequent patch21:31
morganfainbergdstanek, i can add the info back in easily21:31
dstanekmorganfainberg: ah i'll go back and look for those; like what's the difference between cache.memcache_socket_timeout and memcache.socket_timeout21:32
morganfainbergdstanek, the [cache] options are for the caching layer21:32
morganfainbergdstanek, the [memcache] options affect the kvs (e.g. token persistence driver)21:32
morganfainbergdstanek, i had added some information for that into the [memcache] option help text.21:33
morganfainbergeasy to add it back in.21:33
morganfainbergif needed21:33
morganfainberg(probably should be added back in)21:33
morganfainbergor we could move it to the KVS config section21:34
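The split being untangled here looks roughly like this in keystone.conf (the option names `memcache_socket_timeout` and `socket_timeout` come from the conversation above; the values are illustrative):

```ini
[cache]
# Caching-layer options (dogpile-based): affect caching of driver
# call results, e.g. token validation caching.
memcache_socket_timeout = 3

[memcache]
# KVS options: affect the memcached-backed key-value store, e.g. the
# token persistence driver. Similar names, different subsystem.
socket_timeout = 3
```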
*** stevemar has quit IRC21:34
*** dims has quit IRC21:35
lbragstadmorganfainberg: six reviews from RC121:35
*** dims has joined #openstack-keystone21:35
morganfainberglbragstad i count it as 4 (2 are approved)21:36
lbragstads/six/six (including keystonemiddleware)/21:37
morganfainbergsomething looks like it isn't getting cleaned up on that patch.21:38
morganfainberghow did this pass check?21:39
*** dims has quit IRC21:40
dstanekmorganfainberg: i'm looking at the pool patch again and i don't see how we enforce max size - it looks to me like things could just deadlock in the put() when there are traffic spikes21:42
dstanekmorganfainberg: hmmm...wait...tracing this through the Queue and it looks like that may not be the case21:42
morganfainbergdstanek, isn't _put behind the lock?21:42
morganfainbergdstanek, where put() isn't21:43
dstanekmorganfainberg: i was more worried about the get not properly checking the size and then the puts would block because the queue would be full21:45
*** rkofman has quit IRC21:45
morganfainbergdstanek, i think he is solving that with _qsize21:46
morganfainbergwhich factors in .acquired21:46
dstanekmorganfainberg: why isn't _qsize defined as the parent's queue size plus acquired clients?21:46
*** rkofman has joined #openstack-keystone21:47
morganfainbergdstanek, i don't know why he isn't calling super() there. but he is returning .maxsize which is a property of the parent i *think*21:47
dstanekin the Queue the _qsize is the count of things in the queue - ours is the available slots?21:47
morganfainbergthere is also the hasattr line at 9321:47
morganfainbergdstanek, yes that sounds correct. and it looks like ._get isn't checking anything21:49
morganfainbergso we'll just run out of connections.21:49
morganfainbergerm, run over limit and block21:49
dstanekit's get() that does the checking21:49
morganfainberg*looks like*21:49
dstanekyeah, something is bogus there because pool.full() would be True even if there was nothing in the queue21:52
morganfainbergdstanek, while self._qsize() == self.maxsize:21:54
morganfainbergdstanek, we only block on the condition of that21:54
dstanekmorganfainberg: ok, i'm confused now. i don't see how anything can be added to the queue at all. the put checks to see if there is room by seeing if qsize is maxsize and isn't that the starting state?21:55
morganfainbergwhoops, wrong line. sec, looking at Queue.Queue21:56
morganfainbergah ok so .get in Queue.Queue blocks if .qsize is 021:56
morganfainbergif we have maxsize - acquired being .qsize we block as expected because we're "full" even though we're really not21:57
*** meker12 has joined #openstack-keystone21:57
morganfainbergand the only time the put fails is if ._qsize == maxsize, which, based on the .get, should never happen21:57
*** jasonsb has joined #openstack-keystone21:58
morganfainbergi think i can write some unit tests to cover those cases since we can do timeouts now.21:58
dstanekmorganfainberg: isn't qsize == maxsize the initial state?21:59
*** jamielennox|away is now known as jamielennox21:59
morganfainbergdstanek, hm, yeah. you're right21:59
*** cjellick has quit IRC21:59
*** bknudson has quit IRC22:00
*** cjellick has joined #openstack-keystone22:00
morganfainbergdstanek, but it isn't failing in a "real" test22:00
dstanekget should fail weirdly because the queue is "full", but nothing is in it and put should block adding to the queue because the queue is "full"22:00
dstanekgoing to pull this version down and start messing with it :-) just looking at the code has me stumped22:01
*** nkinder has joined #openstack-keystone22:01
*** david-lyle has joined #openstack-keystone22:03
morganfainbergi'm doing the same atm22:04
morganfainbergthis *shouldn't* work afaict22:04
dstanekthis can't work anyway - it's using super with old style classes22:05
morganfainbergdstanek, eventlet22:06
morganfainbergdstanek, eventlet changes the behavior of the queue; this would break under apache22:06
morganfainbergdstanek, {"error": {"message": "must be type, not classobj", "code": 400, "title": "Bad Request"}}[stack@localhost ~]$22:07
morganfainbergdstanek, eventlet mucks with things in a way that "fixes" the brokenness22:07
*** cjellick has quit IRC22:08
*** zzzeek has joined #openstack-keystone22:08
*** cjellick has joined #openstack-keystone22:08
dstanekeventlet monkey patches its own version from eventlet.queue me thinks22:08
morganfainbergdstanek, yep, just confirmed it22:08
morganfainbergdstanek, look at eventlet.queue.LightQueue22:08
*** jamielennox is now known as jamielennox|away22:09
*** marcoemorais has quit IRC22:10
*** marcoemorais has joined #openstack-keystone22:10
openstackgerritMorgan Fainberg proposed a change to openstack/keystone: Ensure identity sql driver supports domain-specific configuration.
dstanekmorganfainberg: this does appear to be working somehow22:12
*** richm1 has joined #openstack-keystone22:12
morganfainbergdstanek, under eventlet the whole _qsize logic is very very different22:13
morganfainbergdstanek, i think it works in eventlet but wont limit max size how it's supposed to22:13
morganfainbergand under non-eventlet it blatantly won't work because we're classobj vs new-style classes22:13
dstanekyeah i fixed the supers locally and it appears to enforce the max size22:14
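The fix dstanek mentions can be sketched as follows (illustrative class name; `queue` is the Python 3 spelling of the module): on Python 2, `Queue.Queue` is an old-style class (a classobj), so `super()` raises the "must be type, not classobj" TypeError pasted above, while an explicit base-class call works for old- and new-style classes alike.

```python
import queue  # `Queue` on Python 2


class ConnectionPool(queue.Queue):
    def __init__(self, maxsize):
        # not: super(ConnectionPool, self).__init__(maxsize)
        # -- that fails on Python 2 because Queue.Queue is old-style.
        queue.Queue.__init__(self, maxsize)
```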
morganfainberghenrynash, booted your SQL fix out of the gate, it was failing py27 each time i ran it locally and saw the failure in the gate22:15
morganfainberghenrynash, i'm not sure how it passed check. but commented on it can provide the traceback if needed22:15
henrynashhmmm…ouch….ok..let me investigate22:15
morganfainberghenrynash, that was for a *ton* of tests22:16
henrynashmorganfainberg: ok, I’m on it22:17
morganfainberghenrynash, :) i was shocked when i saw the error cause it passed check just fine!22:17
*** Tahmina has joined #openstack-keystone22:17
dstanekmorganfainberg: get doesn't check against maxsize at all, which is why the get works22:19
morganfainbergdstanek, ah22:19
*** thiagop has quit IRC22:19
morganfainbergdstanek, oh i see it in the non-eventlet one, it looks at _qsize not maxsize22:20
dstanekthe logic is kind of backward but it always says it's full until it actually is22:20
morganfainbergwhich would be an issue in the put case?22:21
morganfainbergunder non-eventlet22:21
morganfainbergunder eventlet i'm sure it'll behave reasonably well.22:21
dstanekno, because as they are actually created the acquired gets incremented - so it doesn't actually stay full like i thought22:23
morganfainbergdstanek, oh a side effect of get solves the put issue22:27
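Putting together the pieces traced above, the blocking behavior can be sketched as a minimal illustration (not the actual keystone patch; names are illustrative): `_qsize()` reports *free slots* rather than queued items, so the base class's `get()` blocks only once every client is checked out, and `_get()` creates clients lazily while capacity remains.

```python
import queue  # stdlib; eventlet monkey-patches in its own version


class MemcacheClientPool(queue.Queue):
    """Lazily create clients up to maxsize; get() blocks when all are out."""

    def __init__(self, create_client, maxsize):
        queue.Queue.__init__(self, maxsize)
        self._create_client = create_client
        self._acquired = 0  # clients handed out but not yet returned

    def _qsize(self):
        # Free capacity, not queue length: get() waits while this is
        # zero, i.e. only when maxsize clients are simultaneously held.
        return self.maxsize - self._acquired

    def _get(self):
        try:
            client = self.queue.popleft()
        except IndexError:
            # Nothing idle in the deque, but capacity remains: the
            # side effect of get() creating a client (and bumping
            # _acquired) is what keeps put() from blocking later.
            client = self._create_client()
        self._acquired += 1
        return client

    def _put(self, client):
        self.queue.append(client)
        self._acquired -= 1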
*** wanghong has quit IRC22:29
*** meker12 has quit IRC22:30
*** meker12 has joined #openstack-keystone22:31
dstanektook me a few minutes to piece it together22:32
morganfainbergdstanek, hmm. so looks like the big issue is super()22:32
*** packet has quit IRC22:33
*** achampion has quit IRC22:33
*** meker12 has quit IRC22:35
dstanekyeah. since we are recommending this new pooled version instead of dogpile's22:35
*** david-lyle has quit IRC22:40
*** david-ly_ has joined #openstack-keystone22:40
*** meker12 has joined #openstack-keystone22:41
*** meker12 has joined #openstack-keystone22:42
morganfainbergdstanek ok i'll remove the super calls22:43
morganfainbergdstanek, and repost unless you want to.22:43
dstanekmorganfainberg: sure, just got done dinnering22:47
morganfainbergdstanek, ok i'll get it posted once i run tests22:47
morganfainbergdstanek, almost done.22:47
morganfainbergit feels weird not using super() :P22:48
*** david-ly_ has quit IRC23:06
*** richm1 has quit IRC23:07
morganfainbergdstanek, should we put a maximum timeout on .get() ?23:14
morganfainbergdstanek, since i'm in here i can do that now.23:14
morganfainbergdstanek, and if so, what do we raise out / is it worth the string freeze exception for this?23:14
dstanekmorganfainberg: can we use an existing string and log the stacktrace?23:16
dstanek"An unexpected error prevented the server from fulfilling your request." -- seems vaguely correct23:17
dstanekat least more so than 'User not in domain: %s'23:17
morganfainbergi could just use UnexpectedError() with no string i guess23:18
morganfainbergdstanek, should this be a config option or just some static value we set (like 300s?)23:19
morganfainbergi'm thinking just some static value, maybe 120s?23:19
dstanekwe should LOG.exception though so there is an actual record of what happened23:19
dstanek120s means that an API request could block for that long without getting a response - i would think it would have to be configurable23:20
morganfainbergdstanek, log exception won't help, we need a new string; the exception will be 'Empty'23:22
morganfainbergok i'll make it a config option as well.23:22
dstanekwhere i used to work our timeouts were mostly under 10s - because users won't wait that long for a web browser23:22
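The direction being settled on can be sketched like so (hedged: the helper name `acquire_client` and option name `pool_get_timeout` are illustrative, not the merged names, and `UnexpectedError` here is a stand-in for keystone.exception.UnexpectedError): bound the otherwise-unbounded `pool.get()` with a configurable timeout, `LOG.exception` the bare `queue.Empty` so there is an actual record, and surface a generic 500-style error.

```python
import logging
import queue

LOG = logging.getLogger(__name__)


class UnexpectedError(Exception):
    """Stand-in for keystone.exception.UnexpectedError (HTTP 500)."""


def acquire_client(pool, pool_get_timeout=10):
    try:
        return pool.get(timeout=pool_get_timeout)
    except queue.Empty:
        # queue.Empty carries no message, so log the stack trace here
        # before the generic 500 text goes back to the caller.
        LOG.exception('Unable to acquire a pooled memcache client '
                      'within %s seconds', pool_get_timeout)
        raise UnexpectedError()
```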
henrynashmorganfainberg: think I’m too tired to debug this tonight…I’ll get some kip and re-attack early in the morning23:24
morganfainberghenrynash, sounds good. yeah it was kind of icky23:24
morganfainberghenrynash, no worries :)23:24
*** henrynash has quit IRC23:25
*** achampion has joined #openstack-keystone23:26
*** sigmavirus24 is now known as sigmavirus24_awa23:32
openstackgerritA change was merged to openstack/keystone: Allow users to clean up role assignments
*** meker12 has quit IRC23:36
*** meker12 has joined #openstack-keystone23:37
*** alex_xu has joined #openstack-keystone23:41
*** achampio1 has joined #openstack-keystone23:41
*** achampion has quit IRC23:44
*** saipandi has quit IRC23:52

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt/irclog2html/!