Wednesday, 2016-08-24

*** spzala has quit IRC00:00
*** david-lyle has quit IRC00:01
*** tonytan4ever has joined #openstack-keystone00:04
*** tonytan4ever has quit IRC00:06
*** tonytan4ever has joined #openstack-keystone00:07
*** david-lyle has joined #openstack-keystone00:07
*** esp has quit IRC00:07
*** ddieterly has quit IRC00:09
*** atod has quit IRC00:10
*** atod has joined #openstack-keystone00:16
*** ddieterly has joined #openstack-keystone00:17
*** atod has quit IRC00:19
*** atod has joined #openstack-keystone00:21
*** david-lyle has quit IRC00:26
*** sdake has quit IRC00:36
*** sdake has joined #openstack-keystone00:37
*** cheran has quit IRC00:49
*** code-R has joined #openstack-keystone00:56
*** su_zhang has quit IRC00:57
*** code-R_ has joined #openstack-keystone00:57
*** code-R has quit IRC01:01
*** tonytan_brb has joined #openstack-keystone01:11
*** tonytan4ever has quit IRC01:14
*** tqtran has quit IRC01:19
*** chlong has joined #openstack-keystone01:20
*** itisha has quit IRC01:20
*** gyee has quit IRC01:22
*** code-R_ has quit IRC01:26
openstackgerritJamie Lennox proposed openstack/keystoneauth: Implement caching for the generic plugins.
openstackgerritJamie Lennox proposed openstack/keystoneauth: Implement caching for the generic plugins.
openstackgerritOpenStack Proposal Bot proposed openstack/keystone: Updated from global requirements
openstackgerritOpenStack Proposal Bot proposed openstack/keystonemiddleware: Updated from global requirements
*** davechen has joined #openstack-keystone01:37
*** EinstCrazy has joined #openstack-keystone01:41
*** tonytan_brb has quit IRC01:41
*** haplo37__ has quit IRC01:41
openstackgerritzhufl proposed openstack/keystone: Remove unnecessary __init__
*** wangqun has joined #openstack-keystone01:45
*** Gorian has quit IRC01:53
*** willise has joined #openstack-keystone01:54
*** willise has left #openstack-keystone01:54
*** Gorian has joined #openstack-keystone01:54
*** dkehn_ has quit IRC01:57
*** EinstCra_ has joined #openstack-keystone02:03
*** tonytan4ever has joined #openstack-keystone02:04
*** EinstCrazy has quit IRC02:06
*** ravelar has quit IRC02:06
*** dkehn_ has joined #openstack-keystone02:10
*** ddieterly has quit IRC02:15
*** spzala has joined #openstack-keystone02:25
*** ravelar has joined #openstack-keystone02:26
*** atod has quit IRC02:36
*** ravelar has quit IRC02:42
*** sdake has quit IRC02:46
*** sdake has joined #openstack-keystone02:49
*** BjoernT has joined #openstack-keystone02:56
*** aswadr_ has joined #openstack-keystone03:17
*** code-R has joined #openstack-keystone03:18
*** BjoernT is now known as Bjoern_zZzZzZzZ03:20
*** Bjoern_zZzZzZzZ is now known as BjoernT03:21
*** BjoernT is now known as Bjoern_zZzZzZzZ03:22
*** Bjoern_zZzZzZzZ is now known as BjoernT03:24
*** EinstCrazy has joined #openstack-keystone03:25
*** BjoernT has quit IRC03:27
*** EinstCra_ has quit IRC03:28
*** iurygregory_ has quit IRC03:46
openstackgerritHarini proposed openstack/keystone: EndpointPolicy driver doesn't inherit interface
*** spzala has quit IRC03:50
*** tonytan4ever has quit IRC03:50
*** bigdogstl has joined #openstack-keystone04:01
*** bigdogstl has quit IRC04:05
openstackgerritJamie Lennox proposed openstack/keystoneauth: Allow specifying client and service info to user_agent
*** roxanagh_ has joined #openstack-keystone04:13
*** jraju has joined #openstack-keystone04:14
*** jraju has quit IRC04:15
openstackgerritJamie Lennox proposed openstack/keystoneauth: Allow specifying client and service info to user_agent
*** roxanagh_ has quit IRC04:19
stevemar: rodrigods: i am now  04:36
stevemar: mordred: notmorgan we didn't -2 using versionedobjects, we just thought using triggers was an easier approach  04:38
*** asettle has joined #openstack-keystone04:51
*** tonytan4ever has joined #openstack-keystone04:51
*** tonytan4ever has quit IRC04:56
*** asettle has quit IRC04:56
*** jaosorior has joined #openstack-keystone05:07
*** jaosorior has quit IRC05:09
*** jaosorior has joined #openstack-keystone05:10
*** roxanagh_ has joined #openstack-keystone05:39
*** richm has quit IRC05:39
*** roxanagh_ has quit IRC05:39
*** code-R_ has joined #openstack-keystone05:51
*** adriant has quit IRC05:52
*** tonytan4ever has joined #openstack-keystone05:52
*** EinstCrazy has quit IRC05:52
*** code-R has quit IRC05:54
*** EinstCrazy has joined #openstack-keystone05:54
*** tonytan4ever has quit IRC05:57
*** EinstCrazy has quit IRC05:57
*** EinstCrazy has joined #openstack-keystone05:58
*** atod has joined #openstack-keystone05:58
*** EinstCra_ has joined #openstack-keystone06:02
*** jaugustine has quit IRC06:02
*** tqtran has joined #openstack-keystone06:03
*** EinstCrazy has quit IRC06:05
henrynash: stevemar: imho while we approved the spec for the standard approach, we DID (effectively) -2 the implementation - since in the IRC meeting dolphm and others would not support putting in the code since they did not trust oslo.versionedobjects would ever be finished  06:15
henrynash: stevemar: I had the whole keystone-manage implementation for Newton up for review for several weeks - and people would not support it  06:15
*** rcernin has joined #openstack-keystone06:23
henrynash: stevemar: even though we don't actually need versionedobjects for keystone in Newton (since we only have additive db changes anyway)...but we would not accept even the principle of the command structure in keystone-manage that would be used when we did support versionedobjects  06:35
*** code-R_ has quit IRC06:43
bigjools: Folks, is there a convenient way to discover all the URLs for which we route API requests?  06:53
*** tesseract- has joined #openstack-keystone06:56
openstackgerritHa Van Tu proposed openstack/keystone: [api-ref]: Outdated link reference
jamielennox: bigjools: from memory if you hook the right place you can print() the mapper object and it will show you everything  07:07
jamielennox: you just have to do it on a request or something so that everything is loaded up  07:07
bigjools: jamielennox: ok, thanks, I just need to work out where the mapper lives inside a request  07:08
jamielennox: bigjools: yea, i think it comes through as part of the environ in a request or something? i can't remember exactly but it's not too hard to find  07:08
bigjools: ok cheers, good info  07:09
bigjools: I'm looking at doing some instrumentation - I might not need all the endpoints but it's useful to see anyway  07:09
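The mapper-dumping trick jamielennox describes can be pictured with a toy sketch. The Mapper class below is a hypothetical stand-in (keystone actually wires up a routes.Mapper internally); only the idea of printing the populated routing table to list every registered URL is taken from the discussion above.

```python
# Hypothetical stand-in for the routes.Mapper keystone builds at load
# time; only the idea of dumping every registered URL is real here.
class Mapper:
    def __init__(self):
        self.matchlist = []  # (method, path, controller) tuples

    def connect(self, path, controller, method="GET"):
        self.matchlist.append((method, path, controller))

    def __str__(self):
        return "\n".join(f"{m:6} {p} -> {c}" for m, p, c in self.matchlist)

mapper = Mapper()
mapper.connect("/v3/projects", "projects")
mapper.connect("/v3/users/{user_id}", "users")
print(mapper)  # one line per routed URL
```

As noted above, the real mapper is only fully populated once the WSGI app has handled a request, so the print would have to be hooked in at that point.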
openstackgerritThomas Bechtold proposed openstack/keystone: Fix tempest.conf generation
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: Use fixtures from keystoneauth
*** code-R has joined #openstack-keystone07:29
*** code-R_ has joined #openstack-keystone07:31
*** code-R has quit IRC07:34
*** ccard has quit IRC07:36
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: Use AUTH_INTERFACE object from keystoneauth
*** code-R_ has quit IRC07:48
*** code-R has joined #openstack-keystone07:48
*** tonytan4ever has joined #openstack-keystone07:54
*** tonytan4ever has quit IRC07:58
*** zzzeek has quit IRC08:00
*** zzzeek has joined #openstack-keystone08:00
openstackgerritDavanum Srinivas (dims) proposed openstack/keystone: [WIP] Testing latest u-c
*** aloga has quit IRC08:10
*** aloga has joined #openstack-keystone08:10
openstackgerritDivya K Konoor proposed openstack/keystonemiddleware: Globalize authentication failure error
*** woodster_ has quit IRC08:19
*** asettle has joined #openstack-keystone08:28
*** pcaruana has joined #openstack-keystone08:29
*** jed56 has joined #openstack-keystone08:33
*** tqtran has quit IRC08:34
*** dikonoor has joined #openstack-keystone08:45
*** atod has quit IRC08:49
*** code-R_ has joined #openstack-keystone08:50
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: Use fixtures from keystoneauth
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: Use exceptions from Keystoneauth
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: Remove generic client
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: [WIP] Remove old method of creating a client
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: [WIP] Migrate to keystoneauth
*** code-R has quit IRC08:53
*** tonytan4ever has joined #openstack-keystone08:55
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: [WIP] Migrate to keystoneauth
openstackgerritJamie Lennox proposed openstack/python-keystoneclient: Remove unauthenticated functions
*** tonytan4ever has quit IRC08:59
*** asettle has quit IRC09:01
*** asettle has joined #openstack-keystone09:06
*** asettle has quit IRC09:08
*** davechen has quit IRC09:09
openstackgerritMerged openstack/keystone: Fix credential update to ec2 type
*** Gorian has quit IRC09:19
*** davechen has joined #openstack-keystone09:19
*** jaosorior is now known as jaosorior_lunch09:21
*** davechen has quit IRC09:27
*** davechen has joined #openstack-keystone09:33
*** davechen has left #openstack-keystone09:33
*** atod has joined #openstack-keystone09:47
*** atod has quit IRC09:51
openstackgerritDivya K Konoor proposed openstack/keystonemiddleware: Globalize authentication failure error
*** EinstCrazy has joined #openstack-keystone09:55
*** tonytan4ever has joined #openstack-keystone09:56
*** EinstCra_ has quit IRC09:56
*** tonytan4ever has quit IRC10:00
openstackgerritCao Xuan Hoang proposed openstack/keystone: TrivialFix: Remove logging import unused
*** richm has joined #openstack-keystone10:07
openstackgerritDavanum Srinivas (dims) proposed openstack/keystone: [WIP] Testing latest u-c
*** EinstCrazy has quit IRC10:16
*** asettle has joined #openstack-keystone10:18
openstackgerritDivya K Konoor proposed openstack/keystonemiddleware: Globalize authentication failure error
*** sdake has quit IRC10:28
*** jaosorior_lunch is now known as jaosorior10:30
*** wangqun has quit IRC10:34
*** rodrigods has quit IRC10:34
*** rodrigods has joined #openstack-keystone10:34
*** amakarov_away is now known as amakarov10:41
*** dikonoor has quit IRC10:44
amakarov: bknudson, hi! Can you please review ? I'd really love to have your opinion, as you had concerns and I can satisfy those only partially.  10:49
*** d0ugal has quit IRC10:59
*** d0ugal has joined #openstack-keystone10:59
openstackgerritMikhail Nikolaenko proposed openstack/keystone: [WIP] Move fernet utils to backend
samueldmq: morning keystone  11:07
openstackgerritDave Chen proposed openstack/keystone: Handle the exception from creating access token properly
openstackgerritMerged openstack/keystone: Add key repository uniqueness check to doctor
*** asettle has quit IRC11:45
*** sigmavirus|away is now known as sigmavirus11:51
*** jaosorior has quit IRC11:51
*** jaosorior has joined #openstack-keystone11:52
*** tonytan4ever has joined #openstack-keystone11:57
*** jpena is now known as jpena|lunch12:01
*** tonytan4ever has quit IRC12:01
*** markvoelker has joined #openstack-keystone12:25
*** julim has joined #openstack-keystone12:28
openstackgerritMikhail Nikolaenko proposed openstack/keystone: [WIP] Move fernet utils to backend
*** pauloewerton has joined #openstack-keystone12:35
*** asettle has joined #openstack-keystone12:40
*** bjolo has joined #openstack-keystone12:43
*** edmondsw has joined #openstack-keystone12:50
lbragstad: henrynash do you have links to the versionedobjects implementations?  12:51
dstanek: morning samueldmq  12:52
*** markvoelker has quit IRC  12:53
lbragstad: henrynash was this it?
*** tonytan4ever has joined #openstack-keystone  12:58
lbragstad: dstanek I see three different patches for cache region fixes...  13:00
lbragstad: dstanek is there a particular one you need reviewed?  13:00
*** tonytan4ever has quit IRC  13:02
*** ddieterly has joined #openstack-keystone  13:03
dstanek: lbragstad: fixed the glitch  13:04
lbragstad: dstanek which glitch?  13:05
*** jpena|lunch is now known as jpena  13:06
dstanek: too many reviews. I abandoned a few  13:06
lbragstad: dstanek ah - looks like is the only one left  13:06
*** ddieterly has quit IRC13:07
*** asettle has quit IRC13:09
*** asettle has joined #openstack-keystone13:10
samueldmq: dstanek is that your patch to invalidate across regions ?  13:11
samueldmq: across processes ?  13:11
samueldmq: I am not getting how that works  13:13
*** woodster_ has joined #openstack-keystone13:23
*** BjoernT has joined #openstack-keystone13:23
breton: samueldmq: across processes  13:25
*** openstackgerrit has quit IRC  13:26
*** openstackgerrit has joined #openstack-keystone  13:26
breton: samueldmq: a cache entry is a pair of (key, value)  13:27
dstanek: breton: ++  13:28
breton: samueldmq: key gets mangled and pair {mangled_key: value} is set to backend  13:28
breton: samueldmq: with David's patch a cache entry is a pair of {key+region_id, value}  13:28
dstanek: samueldmq: lol. answered you in the wrong room  13:28
openstackgerrit: OpenStack Proposal Bot proposed openstack/keystone: Updated from global requirements
samueldmq: dstanek: I did the same yesterday after meeting  13:30
breton: samueldmq: region_id is ephemeral. It is generated and is stored in the memcached directly, without mangling  13:30
samueldmq: breton: dstanek : so we're talking about process cache ? or a common cache server  13:30
breton: samueldmq: when invalidation happens, new region_id is generated  13:30
samueldmq: that is shared for all processes / keystone servers ?  13:30
dstanek: samueldmq: memcached  13:31
lbragstad: it should work across processes  13:31
samueldmq: dstanek: so let's say a separate memcache server for it  13:31
breton: samueldmq: so all old {key+old_region_id: value} pairs are forgotten  13:31
samueldmq: and old caches were still 'valid' because the key matches ..  13:31
breton: samueldmq: because there is no more old_region_id  13:31
dstanek: samueldmq: it's all on the same server  13:32
*** tqtran has joined #openstack-keystone  13:32
samueldmq: dstanek: but it's a separate backend, used by keystone, right ?  13:33
samueldmq: common to all processes  13:33
samueldmq: in that server  13:33
dstanek: no, I'm just changing the keys in the existing backend  13:34
*** sdake has joined #openstack-keystone13:34
samueldmq: dstanek: ok, separate case: we have a proxy, multiple servers behind it  13:35
samueldmq: one server gets a delete call, invalidates the cache locally  13:35
*** tonytan4ever has joined #openstack-keystone  13:35
dstanek: instead of key/value pairs I made it key: id/value  13:35
samueldmq: other servers behind the proxy are inconsistent  13:35
lbragstad: that shouldn't be the case if the memcache deployment is sharded, right?  13:35
dstanek: yes, that's what was happening  13:36
samueldmq: ok, so in that case I just said, memcache should be shared across keystone servers behind the proxy  13:36
*** tqtran has quit IRC  13:37
samueldmq: so memcache could be put in a separate server, and used by all of them  13:37
*** sdake_ has joined #openstack-keystone  13:37
dstanek: it had to be - our cache is already broken.  13:38
lbragstad: samueldmq I believe if you list your memcache servers in the same order in each keystone config, keystone will automatically shard the memcache contents across each  13:38
dstanek: memcached is designed as a shared cache  13:38
samueldmq: hmm like a galera db ?  13:39
samueldmq: or just in a common place and shared to all servers ?  13:39
* samueldmq goes to rather than keep asking dumb questions  13:40
*** sdake has quit IRC  13:40
dstanek: samueldmq: :) memcached should be set up as a separate cluster  13:41
bknudson: we're not going to be able to have a memcache cluster span separate dcs.  13:41
*** ddieterly has joined #openstack-keystone  13:41
samueldmq: dstanek: so yes, I remember setting that up once for horizon  13:41
samueldmq: using pki  13:41
samueldmq: dstanek: I had to set up a separate server for memcache  13:41
dstanek: bknudson: then your cache expiration times need to be shorter since invalidation won't work  13:43
dstanek: or no cache at all if you can manage it  13:44
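lbragstad's note about listing the memcache servers in the same order in every keystone config can be illustrated with a toy hash-based server picker. Plain modulo hashing is shown here for brevity; real memcache clients typically use consistent hashing, and the addresses are made up.

```python
import hashlib

def pick_server(key, servers):
    # every keystone node computes the same index only if "servers" is
    # identical (same values AND same order) in every config file
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

servers_a = ["10.0.0.1:11211", "10.0.0.2:11211"]
servers_b = list(reversed(servers_a))  # same servers, different order

# with two servers, a reversed list always sends the key to the other one
print(pick_server("token-abc", servers_a))
print(pick_server("token-abc", servers_b))
```

A node with a reordered list reads and invalidates entries on the wrong shard, which is exactly the cross-server inconsistency being discussed.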
*** ddieterly has quit IRC13:46
*** ddieterly has joined #openstack-keystone13:46
*** su_zhang has joined #openstack-keystone13:53
*** raildo has joined #openstack-keystone13:54
*** asettle has quit IRC14:05
samueldmq: dstanek: so you do {key+region_id: value}  14:06
samueldmq: and you change the region_id, right?  14:06
samueldmq: then region_id needs to be shared by everyone  14:07
samueldmq: dstanek: you use memcached to store the region_id ?  14:09
dstanek: yes, that way all processes use the same value  14:10
dstanek: samueldmq: I'm a little short on the explanation because I'm on my phone at breakfast :)  14:10
samueldmq: dstanek: that's okay, take your time  14:10
breton: samueldmq: yes, region_id is stored in the cache backend  14:17
*** nkinder has joined #openstack-keystone14:18
*** spedione|AWAY is now known as spedione14:19
breton: samueldmq: and fetched from there when get(key) is called  14:19
*** Jehane has left #openstack-keystone  14:20
*** ravelar has joined #openstack-keystone  14:22
*** michauds has joined #openstack-keystone  14:24
samueldmq: breton: what does expiration_time=-1 mean?  14:28
samueldmq: that's used for region id  14:28
*** michauds_ has joined #openstack-keystone  14:28
*** michauds has quit IRC  14:28
breton: samueldmq: my guess is that memcache keys expire after some time. region_id should not expire unless invalidated.  14:29
*** ayoung has joined #openstack-keystone  14:31
*** ChanServ sets mode: +v ayoung  14:31
*** michauds_ is now known as michauds__  14:31
*** michauds__ is now known as michauds  14:32
breton: samueldmq: so -1 disables expiry  14:32
* breton should finish his thoughts  14:32
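Putting breton's explanation together, a minimal sketch of the scheme, with a plain dict standing in for memcached and illustrative key names:

```python
import uuid

# A dict stands in for memcached; key names are illustrative.
backend = {}
REGION_KEY = "region:default"

def _region_id():
    # stored unmangled, with no expiry (the expiration_time=-1 above)
    if REGION_KEY not in backend:
        backend[REGION_KEY] = uuid.uuid4().hex
    return backend[REGION_KEY]

def cache_set(key, value):
    backend[key + _region_id()] = value

def cache_get(key):
    return backend.get(key + _region_id())

def invalidate_region():
    # a fresh id orphans every {key+old_region_id: value} pair at once
    backend[REGION_KEY] = uuid.uuid4().hex

cache_set("user:123", {"name": "alice"})
assert cache_get("user:123") == {"name": "alice"}
invalidate_region()
assert cache_get("user:123") is None
```

Because the region id itself lives in the shared backend, every process sees the invalidation, which is the cross-process property being asked about.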
dstanek: samueldmq: does that make sense?  14:52
*** edtubill has joined #openstack-keystone  14:55
lbragstad: dstanek how come we return a function in key_mangler_factory()?  14:55
dstanek: lbragstad: the key mangler needs to be a function  14:56
dstanek: i use the factory to create it using a closure so it has access to the things passed into the factory  14:56
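A rough illustration of that closure-based factory; the sha1 mangling and the region_name parameter here are assumptions for the sketch, not keystone's exact signature:

```python
import hashlib

def key_mangler_factory(region_name):
    # the returned mangler is a plain function; the closure gives it
    # access to region_name without any extra arguments at call time
    def key_mangler(key):
        return hashlib.sha1(f"{region_name}:{key}".encode()).hexdigest()
    return key_mangler

mangle = key_mangler_factory("catalog")
assert mangle("endpoints") == mangle("endpoints")  # deterministic
assert mangle("endpoints") != key_mangler_factory("role")("endpoints")
```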
breton: dstanek: maybe you should put all that as comments  14:58
dstanek: for some reason that patch is failing  15:00
*** jistr is now known as jistr|mtg  15:00
lbragstad: dstanek looks like a couple of the jobs failed to stand up  15:01
dstanek: lbragstad: yeah, maybe directory problems?
lbragstad: [Errno 32] Broken pipe  15:02
*** haplo37__ has joined #openstack-keystone15:03
openstackgerritDolph Mathews proposed openstack/keystone: Reduce log level of Fernet key count message
lbragstad: notmorgan henrynash dolphm dstanek will you be available to discuss the trigger conversation from yesterday?  15:04
dstanek: i'm here all day except 2:30-3:30 EST when i pick up my kids  15:06
*** code-R_ has quit IRC  15:07
lbragstad: dstanek cool - we will probably need to air that out in order to continue with the encrypted credential implementation  15:08
aloga: ( stevemar would you be available in about 10 minutes for a short discussion )  15:10
aloga: ( I'm currently in the middle of a meeting :-( )  15:10
henrynash: lbragstad: i am here for the next 30 mins, then afk until later. Back on around 4pm EST  15:11
*** hockeynut has joined #openstack-keystone15:16
*** david-lyle has joined #openstack-keystone15:17
stevemar: aloga: i can be  15:20
*** tonytan4ever has quit IRC15:21
*** asettle has joined #openstack-keystone15:21
*** tonytan4ever has joined #openstack-keystone15:24
alogastevemar: :)15:25
*** david-lyle_ has joined #openstack-keystone15:25
*** sdake_ has quit IRC15:25
rodrigods: stevemar, hey... available to rapidly discuss something that may affect OSC?  15:25
*** david-lyle_ has quit IRC15:26
*** hockeynu_ has joined #openstack-keystone15:27
*** hockeynut has quit IRC15:29
aloga: stevemar: I have to leave, so nevermind, thanks anyway  15:29
*** Ephur has joined #openstack-keystone15:29
*** marekd2 has joined #openstack-keystone15:29
*** slberger has joined #openstack-keystone15:32
*** hockeynu_ has quit IRC15:36
openstackgerritAlexander Makarov proposed openstack/keystone: Unified delegation model
*** sdake has joined #openstack-keystone15:36
*** BjoernT has quit IRC15:37
stevemar: aloga: same, we can catch up later  15:38
*** hockeynut has joined #openstack-keystone15:40
openstackgerritAlexander Makarov proposed openstack/keystone: Unified delegation assignment driver
*** Ephur has quit IRC15:43
*** code-R has joined #openstack-keystone15:44
*** tonytan_brb has joined #openstack-keystone15:44
openstackgerritAlexander Makarov proposed openstack/keystone: Unified delegation trust driver
*** tonytan4ever has quit IRC15:48
*** awayne has joined #openstack-keystone15:49
*** su_zhang has quit IRC15:53
*** su_zhang has joined #openstack-keystone15:53
*** adrian_otto has joined #openstack-keystone15:55
*** su_zhang has quit IRC15:57
*** dikonoor has joined #openstack-keystone16:01
*** gyee has joined #openstack-keystone16:03
*** jistr|mtg is now known as jistr16:08
*** chrisshattuck has joined #openstack-keystone16:12
lbragstad: dstanek our sqlite database is initialized for every run right?  16:15
dstanek: lbragstad: i believe that's true, yes  16:15
lbragstad: er - every time we run tests - our tests should stand up a new sqlite database  16:15
dstanek: in setUp  16:15
bknudson: it's the same for the mysql and postgresql live tests. you get a new db every test.  16:16
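bknudson's point, that each test sees a brand-new database because setUp rebuilds it, can be sketched with an in-memory SQLite database (the schema and test names are illustrative, not keystone's):

```python
import sqlite3
import unittest

class FreshDbTest(unittest.TestCase):
    def setUp(self):
        # a brand-new in-memory database for every single test
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE user (id INTEGER)")

    def test_insert_stays_local(self):
        self.conn.execute("INSERT INTO user VALUES (1)")
        (count,) = self.conn.execute("SELECT COUNT(*) FROM user").fetchone()
        self.assertEqual(count, 1)

    def test_other_test_sees_empty_table(self):
        # whichever order the tests run in, this one starts from scratch
        (count,) = self.conn.execute("SELECT COUNT(*) FROM user").fetchone()
        self.assertEqual(count, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FreshDbTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Nothing written by one test is visible to another, which is why a "trigger already exists" error in setUp is surprising: it implies the same schema work ran twice against one database.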
lbragstad: in keystone.tests.unit.test_sql_upgrade.SqlContractSchemaUpgradeTests the setUp is failing because it's saying a trigger is already created  16:16
lbragstad: which is weird because that would mean we are running the expand repo upgrade in our setup twice somewhere  16:17
dolphm: lbragstad: in your patch?  16:17
lbragstad: dolphm not sure if this is related directly to my patch - or one of the previous sql upgrade testing patches  16:17
dstanek: lbragstad: the upgrade tests may be a little special since we want to use the same DB to go from one version to another. is it possible it's being executed twice in the same test run?  16:17
lbragstad: sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) trigger credential_read_only already exists [SQL: "CREATE TRIGGER credential_read_only BEFORE UPDATE ON credential FOR EACH ROW BEGIN SELECT RAISE (ABORT, 'Credential migration in progress. Cannot perform writes to credential table.'); END;"]  16:18
lbragstad: the flow is that it will upgrade the migrate_repo (which is the legacy repository) to version 109  16:18
lbragstad: then it will go ahead and move on to the expand_repo  16:18
lbragstad: but when it goes to do that - it fails saying that a specific trigger already exists  16:19
lbragstad: which is strange because it shouldn't have been created before the migrate_repo was initialized  16:19
lbragstad: unless we are initializing the legacy repo and running upgrade on the expand repo twice somewhere?  16:20
lbragstad: this is the setUp
dstanek: lbragstad: how does upgrade() know what repo to upgrade?  16:23
dolphm: lbragstad: can you update the commit message on vs -- i keep opening the wrong one because they have the same 1 line summary  16:23
lbragstad: dstanek you have to initialize it
dolphm: dstanek: it depends on which repo was last initialized :-/  16:23
*** chrisshattuck has quit IRC16:23
dolphm: dstanek: ideally, we'd have 4 different upgrade methods  16:23
dolphm: dstanek: agree 110%  16:23
dolphm: dstanek: i believe that piece is still in review, though  16:24
lbragstad: self.upgrade() will behave differently depending on what repo was initialized last  16:24
dolphm: so: luck  16:24
dstanek: looking at the code you can pass in the repo, it's just not required  16:24
lbragstad: dolphm what is this luck you speak of?  16:25
openstackgerrit: Lance Bragstad proposed openstack/keystone: WIP: Implement encryption of credentials at rest
dstanek: although you have to pass in both 'repository' and 'current_schema'  16:25
dolphm: lbragstad: sorry, i guess you don't have any  16:25
dstanek: lbragstad: can you verify that the correct repo is being upgraded?  16:26
lbragstad: dstanek yep  16:26
dstanek: lbragstad: what patch is failing?  16:26
lbragstad: dstanek let me grab a paste  16:26
dolphm: if we use the migration helpers to manage things, we could implement sanity checks there that things are being upgraded in order, and we'd probably be able to debug this immediately  16:26
lbragstad: dstanek yields -  16:28
lbragstad: so there we can see that the legacy repository is being updated first  16:28
lbragstad: and the initial db version for that repository is 66, which makes sense  16:28
lbragstad: then it gets updated to 109  16:28
lbragstad: then we move on to initializing the expand repository - the versions make sense, then it blows up  16:28
*** rcernin has quit IRC16:29
dstanek: lbragstad: actually it doesn't look like test_sql_upgrade is using the database fixture  16:30
lbragstad: so that would mean that we aren't using a fresh db every time?!  16:30
dstanek: lbragstad: or that i can't immediately say we are  16:31
dstanek: if we are not at all then i would have expected this to already be broken  16:31
*** marekd2 has quit IRC16:31
*** hockeynu_ has joined #openstack-keystone16:31
*** marekd2 has joined #openstack-keystone16:32
*** haplo37__ has quit IRC16:33
lbragstad: dstanek yeah - how would that not fail with just running migrations normally?  16:33
*** esp has joined #openstack-keystone16:34
*** tqtran has joined #openstack-keystone16:35
*** hockeynut has quit IRC16:35
*** tesseract- has quit IRC16:35
*** marekd2 has quit IRC16:36
*** tqtran has quit IRC16:39
*** hockeynut has joined #openstack-keystone16:45
*** hockeynu_ has quit IRC16:45
*** roxanagh_ has joined #openstack-keystone16:47
*** jaosorior has quit IRC16:47
lbragstad: I think if we are going to have multiple repositories - we should make repository required for _migrate()  16:47
*** asettle has quit IRC16:50
*** asettle has joined #openstack-keystone16:50
dstanek: lbragstad: i'm getting a DatabaseAlreadyControlled error on your test. it looks like maybe due to the multiple repositories?  16:53
lbragstad: dstanek actually - just figured it out  16:53
dstanek: what was it?  16:53
lbragstad: dstanek it was because I broke the sqlite triggers into two separate triggers  16:53
lbragstad: one for update and one for insert  16:53
lbragstad: they were both named the same  16:53
lbragstad: which was causing the issue  16:53
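The collision is easy to reproduce with stdlib sqlite3. The table and trigger names below mirror the paste earlier in the log; the abort messages and the insert-trigger name are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE credential (id TEXT, blob TEXT)")
conn.execute("""
    CREATE TRIGGER credential_read_only BEFORE UPDATE ON credential
    FOR EACH ROW
    BEGIN SELECT RAISE(ABORT, 'migration in progress'); END
""")
try:
    # reusing the UPDATE trigger's name for the INSERT trigger fails
    conn.execute("""
        CREATE TRIGGER credential_read_only BEFORE INSERT ON credential
        FOR EACH ROW
        BEGIN SELECT RAISE(ABORT, 'migration in progress'); END
    """)
except sqlite3.OperationalError as exc:
    print(exc)  # trigger credential_read_only already exists
# a distinct name per trigger works
conn.execute("""
    CREATE TRIGGER credential_insert_read_only BEFORE INSERT ON credential
    FOR EACH ROW
    BEGIN SELECT RAISE(ABORT, 'migration in progress'); END
""")
```

Trigger names share one namespace per database in SQLite, so splitting one trigger into an UPDATE and an INSERT variant requires renaming at least one of them.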
lbragstad: dstanek but i can confirm that I get a database controlled error, too  16:54
lbragstad: not sure what that is all about yet  16:54
dstanek: i think the repos are all using the same repository id  16:54
*** asettle has quit IRC  16:55
samueldmq: dstanek: I am almost understanding it 100%  16:55
samueldmq: just left a couple of questions in the review  16:55
*** mordred has quit IRC16:56
*** tonytan_brb has quit IRC16:57
*** tonytan4ever has joined #openstack-keystone16:57
*** code-R has quit IRC16:58
*** code-R has joined #openstack-keystone16:59
*** mordred has joined #openstack-keystone16:59
*** lamt has quit IRC16:59
dstanek: samueldmq: i just replied  17:00
*** su_zhang has joined #openstack-keystone17:01
*** hockeynut has quit IRC17:10
rderose: notmorgan stevemar: versioned objects aren't the only alternative for rolling upgrades, we also have a read-only approach  17:11
*** hockeynut has joined #openstack-keystone  17:12
*** code-R_ has joined #openstack-keystone  17:12
rderose: notmorgan stevemar: I'd like to prove out the trigger strategy first though.  But if we can't agree, the read-only option gives us a quick win.  17:14
rderose: * quick and easy :)  17:14
stevemar: rderose: true, forgot about that  17:14
*** code-R has quit IRC  17:15
rderose: stevemar: it gets us mostly there for rolling upgrades and is a quick fix in the meantime  17:15
rderose: stevemar: anyway, something to keep in mind  17:15
*** thumpba has joined #openstack-keystone17:17
*** thumpba has quit IRC17:22
*** thumpba has joined #openstack-keystone17:23
*** rcernin has joined #openstack-keystone17:23
*** tqtran has joined #openstack-keystone17:28
*** dikonoor has quit IRC17:28
*** code-R_ has quit IRC17:34
*** lamt has joined #openstack-keystone17:43
samueldmq: dstanek: kk, looking  17:46
*** gyee has quit IRC17:47
*** jpena is now known as jpena|away17:50
*** aswadr_ has quit IRC17:51
*** ddieterly is now known as ddieterly[away]17:55
notmorgan: rderose: as said in the convo yesterday, triggers are generally a bad idea wrt how openstack works and leveraging an ORM.  17:56
notmorgan: rderose: also it is creating yet another method of doing something that is a unique snowflake compared to the other mechanisms (as much as I am not a fan of them) already in use and accepted by operators  17:58
notmorgan: writing DDL-specific code is very likely to result in subtle bugs/difficult to maintain code even if it's just for "transitional" states.  17:59
dolphm: notmorgan: "generally a bad idea wrt how openstack works and leveraging an ORM" -- can you elaborate on why they're a bad idea? yes, they can't be managed by the ORM, but they also don't interfere  17:59
rderose: notmorgan: the trigger approach seemed to be a common way of dealing with rolling upgrades  18:00
dolphm: notmorgan: the operators we've spoken to (some VERY large) were enthused, but it was a small sample  18:00
notmorgan: dolphm: because we are using an ORM that doesn't support it. We're adding in code that is highly dependent on advanced features of the RDBMS, and we have historically been extremely lax in specifying which versions are supported  18:00
dolphm: notmorgan: and the same fragility argument can be made for maintaining data layer integrity in the application layer  18:00
samueldmq: dstanek: another question there  18:01
dolphm: notmorgan: i'm not aware of any ORM that supports trigger definitions - are you?  18:01
notmorgan: dolphm: since we do not test with a variety of versions (we don't) and we don't specify, we have little guarantee that features used in the triggers will work.  18:01
notmorgan: dolphm: hit and miss depending on how they are built  18:01
dolphm: notmorgan: not sure what you mean by "extremely lax in specifying which versions are supported" though  18:01
notmorgan: what version of mysql is required for openstack to run?  18:02
notmorgan: same question for PGSQL  18:02
dolphm: notmorgan: oh, sure. have triggers broken backwards compatibility in the past or what?  18:02
notmorgan: dolphm: they haven't been used in openstack, but you need to make sure whatever definition you're using conforms to the mysql capabilities  18:03
notmorgan: same for pgsql  18:03
dolphm: notmorgan: i imagine no one is going to be upgrading their database version while triggers are in place though, as they're only in place temporarily  18:03
notmorgan: and the code is going to be DDL specific  18:03
dolphm: notmorgan: ++ we're testing triggers against sqlite, postgres, and mysql in the gate  18:03
notmorgan: so code for PGSQL, MySQL, etc  18:03
dolphm: but i guess what versions of those we test are up to infra  18:03
dolphm: notmorgan: ++ you should check out lance's review  18:03
dolphm: he's defining 6 triggers for one migration  18:04
notmorgan: and we also do not test with galera  18:04
dolphm: focusing on mysql first, but you can see the pattern  18:04
notmorgan: while galera doesn't say it breaks with triggers, they are not well tested afaict compared to the base featuresets  18:04
notmorgan: same w/ PGSQL clustering  18:04
dolphm: notmorgan: do you expect galera or percona to behave any differently with regard to triggers? that hasn't been a concern that's come up thus far  18:05
notmorgan: i always expect galera/percona to behave subtly differently than a non-clustered environment  18:05
notmorgan: until proven otherwise.  18:05
notmorgan: usually it doesn't break anything but most places would be testing on a base that mirrors that  18:05
dolphm: well, worst case, you could do an offline upgrade and the triggers will never fire  18:05
notmorgan: and could adjust things if an error state/weirdness happens  18:05
notmorgan: basically, if we only supported MySQL and had a range of versions we specified as supported  18:07
notmorgan: (more on that last point)  18:07
notmorgan: I would feel a lot better about the DDL-specific code and triggers  18:07
dolphm: notmorgan: well, mysql is obviously the first class citizen. if we want to gate against a second version of mysql or something, we could totally do that. we'll also be gaining a CI job against OSA (using galera) sometime after release  18:08
stevemar: jamielennox: SO MUCH DELETE
notmorgan: and as much as i trust lance and henrynash to write good code; when i hear folks like mordred say things to the effect of triggers being the less correct approach (especially since he knows how the stuff internal to mysql works), I tend to err to that side to start  18:09
rderose: jamielennox nice!  18:09
stevemar: notmorgan: dolphm oh nice we're talking about this  18:09
*** Gorian|work has joined #openstack-keystone18:09
dolphmnotmorgan: i'd love to hear mordred's thoughts (what makes them "less correct" than an application-layer solution?)18:09
notmorgandolphm: from my discussions it's simply down to less defined versions of the RDBMS backend(s) and that the application layer solution is common regardless of the DDL18:10
stevemarnotmorgan: mordred using versionedobjects seems like overkill for keystone18:11
rderosestevemar: agree18:11
dolphmnotmorgan: i imagine that's a concern that would be resolved with sufficient testing though, no?18:11
notmorganmix those two things together, it really is openstack's architecture, use of an ORM to abstract the RDBMS differences, etc, leads to application layer being the most maintainable/supportable/straightforward18:11
rderosestevemar notmorgan dolphm: has any of the core openstack projects implemented rolling upgrades?18:11
stevemarrderose: neutron ?18:12
dolphmnotmorgan: exercise each trigger on the database variants we care about18:12
dolphmstevemar: they're trying18:12
stevemaranyone else?18:12
dolphmnova has theoretical support, so far as i can ascertain18:12
*** hockeynut has quit IRC18:13
dolphmironic has punted to a future release because versioned objects are a lot of work18:13
notmorgandolphm: possible. in all honesty I would prefer DDL-specific code that could make use of mysql optimisations etc vs the ORM18:13
dolphmi don't think glance has started, but i think triggers would apply there easily18:13
dolphmand none of this is applicable to swift18:13
notmorgandolphm: and in that case I would have no issues (with version specifics for mysql, and a cluster setup test) of saying "triggers are viable"18:13
*** Gorian|work has quit IRC18:14
*** roxanag__ has joined #openstack-keystone18:14
dolphmnot sure how much horizon cares (it doesn't use a db other than for "caching," right?)18:14
notmorganso it comes down to: I dislike versioned objects, however, they fit openstack architecture a bit better than triggers because of ORM, limited version specification, different DDLs, etc18:14
dolphmthey perhaps fit other services better, not necessarily ours18:15
notmorganand i say this as someone who would love to simply drop support for pgsql18:15
rderosenotmorgan: ++18:15
notmorganbecause it's statistically an outlier for use that nets more maintenance/headaches/etc18:16
dstaneknotmorgan: is there a simple example of versionedobject abstracting away DB differences. i've only heard about it being used at the RPC level18:16
notmorganin fact, i'd love if we *only* supported MySQL18:16
dolphmnotmorgan: i kinda figure this might push some deployers towards postgres in the near future :-/
notmorgandstanek: since nova uses a conductor, in almost all cases it is RPC.18:16
rderosestevemar: lets only support mysql!18:16
notmorgandolphm: doubtful unless oracle makes mysql even worse.18:16
rderosedolphm: oh great18:17
dstaneknotmorgan: i don't see how it would work at all for ORM other then writing application code to implement similar patterns18:17
notmorgandolphm: and/or percona changes.18:17
*** roxanagh_ has quit IRC18:17
mordredstevemar: I disagree. it's not about keystone. it's about openstack18:17
*** su_zhang has quit IRC18:17
mordredstevemar: if keystone was in a vacuum, sure.18:17
mordredstevemar: but keystone is useless without a corresponding openstack18:17
notmorganmordred: ++18:17
mordredstevemar: so whatever the openstack-wide solution is is what keystone should do18:17
*** su_zhang has joined #openstack-keystone18:17
dolphmdstanek: witness neutron's massive effort
*** dims has quit IRC18:18
rderosemordred: but openstack doesn't have a solution for rolling upgrades18:18
*** dims has joined #openstack-keystone18:18
stevemarmordred: you're giving us a toolbox when all we want is a screw driver :P18:19
dstanekmordred: should there be some freedom to innovate when the accepted solution isn't ideal?18:19
dolphmit'd be nice if we had a proven solution for real zero downtime upgrades *before* we pushed for every project to implement a solution18:19
samueldmqdstanek: ok, reviewed it (took me ages!)18:19
stevemardolphm: i was just gonna say that!18:19
mordreddstanek: sure. in the context of the existing not-quite-perfect thing18:19
samueldmqdstanek: I suggested to put some comments to help understanding the code18:19
mordredbecause whatever problems you have with it are probably also applicable to nova18:19
rderoseand what's the accepted solution that no one has successfully implemented18:19
dstaneksamueldmq: great, thanks18:19
stevemarmaybe we wait and see how neutron and nova work out, and what the ops feedback is, before we claim rolling upgrade support in keystone18:20
mordredrderose: whatever it is that nova is doing, that has been developed in consultation with the ops community18:20
mordredlike, that's where the ops efforts have been focused18:20
mordredbecause nova migrations are much more complex18:20
dstanekmordred: or that neutron mess dolph linked to? just seems like the wrong tools for abstracting away database changes18:20
rderosemordred: exactly18:20
mordreddstanek: I'd follow nova, not neutron18:20
mordredI believe neutron is also trying to follow nova18:21
mordredbut is likely not as far along18:21
rderosemordred: so nova needs a really complex (overkill for keystone) solution18:21
mordredso a bad example18:21
stevemarhooooly shiet
mordredrderose: but the solution isn't about being a solution for keystone18:21
mordredit's about being a solution for _operators_ who run _openstack_18:21
mordredso if you make them learn a new tool for keystone when they have a tool for nova already18:21
dolphmmordred: have any operators actually moved to exercising nova's rolling upgrades, or is it all still theory?18:21
mordredthey're going to flip their shit18:21
mordreddolphm: tons of them, all the time18:21
dolphmas far as i've seen, gate testing is non-existent18:21
*** Gorian|work has joined #openstack-keystone18:22
mordredbut whatever is there has gone through multiple ops summit sessions and feedback loops18:22
dstanekmordred: afaict they are not using it for DB stuff18:22
notmorganrderose: if every project implemented their own solution (*cough* wsgi layer *cough*) it makes it hard(er) to run openstack; which is only good if you're selling "we run openstack for you because it's too hard to run" (not a good thing for openstack)18:22
dolphmmordred: the handful of operators i've talked to were on board (mostly at the openstack-ansible midcycle)18:22
mordredso at the very least starting from that point18:22
dstanekmordred: do you now where they are?18:22
dstaneknotmorgan: if all solutions used the same commands then the implementation would matter less18:22
rderosenotmorgan: it's a good point, but there has to be some middle ground here, short of full-blown versioned objects18:23
mordredI do not - I'm just saying please don't develop a new thing in a vacuum. the problem is exactly the same18:23
dstanekit's really a problem if deployers have to do things a million different ways18:23
dolphmdstanek: ++ that's the most painful part to me at this stage18:23
dstanekmordred: afaict nobody does what we want to do and we are trying to prove out the concept. maybe a ML post is in order here?18:23
dolphmmordred: well, it's not quite the same problem, but i appreciate the notion18:23
dolphmstevemar: have you already started to draft one?18:24
notmorgandolphm: it kind of is the same problem, just a separate interface to access the data.18:24
stevemardolphm: no, shall i open an etherpad?18:24
*** jlk has joined #openstack-keystone18:25
jlkI hear there's talk of online migrations happening in here!18:25
dolphmnotmorgan: right, it's not "exactly the same"18:25
dolphmjlk: ++18:25
mordredlook it's a jlk!18:25
*** d34dh0r53 is now known as b3rnard0-b0n-h4r18:25
jlkhi. I figured I'd try to lend an operators viewpoint on the discussion, just to make things interesting.18:25
dolphmjlk: you might be interested in seeing our zero downtime upgrades docs if you want to catch up
stevemarjlk: please do!18:26
* jlk reads18:26
jlkWell, step 9 is wrong, in a pedantic way18:28
dolphmjlk: how so?18:28
jlkwait, no18:28
jlkI misread18:28
rderoseRon looks up the word pedantic18:28
jlkso the difference here between Nova and Keystone18:30
dstanekrderose: In a sentence: "Sometimes my reviews are overly pedantic" :-)18:30
jlknova continues to read/write to the old location, and a background process is used to fully migrate the data18:30
rderosedstanek: ++ :)18:31
jlkwhereas keystone is using triggers to move data written to the old location into the new location18:31
dstanekjlk: and from new to old18:31
jlkmirroring them18:31
dstanekthat way we can have 2 vesions of the app running at the same time with no downtime18:31
jlkis the --contract the action that will remove the triggers?18:31
dolphmjlk: correct- nova writes to the old location from the application layer (so the new release has to know about, and detect, the old schema)18:31
dolphmjlk: yes, --contract drops triggers18:32
dstanekdolphm: ++ exactly... but also has to write to the old location so the old version continues to work18:32
dolphmdstanek: ++ triggers maintain sync bidirectionally18:32
jlkoperationally, this doesn't seem bad.18:32
dolphmso the new release writes to the new schema, triggers write to the old schema, old release reads old schema18:32
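The bidirectional sync dolphm and dstanek describe can be demonstrated end to end with SQLite's trigger support. A runnable sketch with hypothetical table and column names (keystone's real triggers are per-backend DDL, not SQLite):

```python
import sqlite3

# Two releases share one database during a rolling upgrade: the new
# release writes the new schema, the old release the old schema, and
# triggers mirror rows both ways. WHEN guards stop the two triggers
# from ping-ponging.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_old (id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE user_new (id TEXT PRIMARY KEY, name TEXT, extra TEXT);

-- new -> old: keep the old schema readable for the old release
CREATE TRIGGER sync_new_to_old AFTER INSERT ON user_new
WHEN NOT EXISTS (SELECT 1 FROM user_old WHERE id = NEW.id)
BEGIN
    INSERT INTO user_old (id, name) VALUES (NEW.id, NEW.name);
END;

-- old -> new: writes from the old release stay visible to the new one
CREATE TRIGGER sync_old_to_new AFTER INSERT ON user_old
WHEN NOT EXISTS (SELECT 1 FROM user_new WHERE id = NEW.id)
BEGIN
    INSERT INTO user_new (id, name) VALUES (NEW.id, NEW.name);
END;
""")
conn.execute("INSERT INTO user_new VALUES ('u1', 'alice', 'x')")
conn.execute("INSERT INTO user_old VALUES ('u2', 'bob')")
```

After both inserts, each table holds both users, which is exactly the "2 versions of the app running at the same time" property dstanek points at. The --contract phase would then DROP both triggers.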
jlkin large environments with very active keystone, how much additional overhead do the triggers create, particularly with databases that are synced across WANs?18:33
dolphmjlk: operationally, the worst part might be that triggers will incur some performance cost, but they're also temporary (they only exist for the duration of the rolling upgrade process)18:33
notmorganjlk: if using galera, triggers are *supposed* to fire on all nodes.18:33
notmorganjlk: same as an atomic "write"18:33
notmorganreplication (standard) does not account for triggers directly18:33
dstanekjlk: does nova read from and write to the old columns in addition to the new columns during an upgrade?18:34
notmorganand leans on replication to cover data changes via binlog post trigger fire18:34
samueldmqdstanek: finally, some comments in the tests, thanks!18:34
dolphmnotmorgan: interesting18:34
jlkdstanek: that answer lies somewhere in the nova conductor code18:34
dolphmnotmorgan: i would have thought they'd fire on one node, and the result of the entire transaction would be replicated18:35
jlkI don't know the internals very well18:35
notmorgandolphm: since the trigger is low level, it is below the galera level iirc18:35
jlkbut from what I understand, nova can look to the old location first, and if it isn't found there, look to a new location18:35
notmorgandolphm: so it handles the write, which should fire the trigger, hence the fire on all nodes.18:35
jlkso yeah, from an operational and automation flow, this seems workable.18:36
notmorgandolphm: since a galera write is "write to X on each node and wait for ack"18:36
*** su_zhang has quit IRC18:36
jlkI'm no db expert, so I can't speak for the impact of divergence from Nova, but it does sound like something Nova could actually make use of in the future to simplify their migration story18:36
dstanekjlk: i'm curious to know what they actually do18:37
jlkthey have other concerns due to multiple sub-components that may be at different versions18:37
jlkand passing messages back and forth18:37
dolphmjlk: that bit where nova has to try one thing first, then has to retry something totally different, sounds like an opportunity for a nasty race condition :-/18:37
jlkwhereas keystone is all about the db acess18:37
jlkdolphm: I'm likely paraphrasing badly18:37
dolphmjlk: well, that's my understanding too18:37
notmorganjlk: the only real concern *I* have is that support for declarations in triggers will differ based on DDL (PGSQL and MySQL, etc) and between versions of the given RDBMS18:37
dstanekjlk: you make a good case for not using the 'nova does it this way' hammer :-)18:38
notmorganjlk: so the triggers must be written per-DDL since the ORM cannot abstract this.18:38
jlkI think it only writes to old location, and reads from old, until $SOMETHING_HAPPENS so that it starts reading from new.18:38
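The application-layer alternative jlk is paraphrasing can be sketched as a read-with-fallback plus a background migration task. All names here are invented for illustration; this is not nova's actual code, and dolphm's race-condition concern applies to exactly the window between the two reads:

```python
# Hypothetical dual-location pattern: during the upgrade window a value
# may live in either column, so reads check the old location first and
# fall back to the new one, while a background job moves rows over.
def read_value(row):
    if row.get("old_col") is not None:
        return row["old_col"]
    return row.get("new_col")

def migrate_row(row):
    # background migration: copy to the new location, clear the old one
    if row.get("old_col") is not None:
        row["new_col"] = row.pop("old_col")
    return row

row = {"old_col": "secret"}
migrate_row(row)
```

Note that this pushes all the schema awareness into the application (the new release must know about, and detect, the old schema), which is the trade-off against the trigger approach.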
dolphmnotmorgan: that's a developer problem :P18:38
jlknotmorgan: I think most of those words were English...18:39
notmorgandolphm: it ends up being an operator problem too.18:39
notmorgandolphm: just from knowing what to run and how to debug18:39
jlkit's an operator problem if the developer screwed it up for the DB the operator is using18:39
notmorganjlk: ++18:39
dolphmjlk: ++18:39
jlkso test coverage is important18:39
dolphmjlk: agree; i don't think any service can claim rolling upgrade support (much less zero downtime) until we've got a whole lot more testing in place18:40
dstanekalso testing major DB versions of our supported RDBMS?18:40
dstanekalso this still allows for the 100% downtime upgrades18:40
notmorgandolphm: so to address my concerns: test and specify a minimum version for the RDBMS18:40
* dstanek thinks that just sounds bad even though it isn't18:40
dolphmnotmorgan: we do that today, no?18:40
notmorganwe test one version18:40
dolphmnotmorgan: (the minimum version bit)18:40
notmorgandoesn't mean we specify the minimum really18:40
notmorganwe say it should be X18:41
*** gyee has joined #openstack-keystone18:41
lbragstaddstanek ++ i had to read it twice just to make sure18:41
notmorganbut nothing we're leaning on really is broken on older versions18:41
jlkhonestly I like what I hear, provided y'all can get the code right. The workflow feels right, and is nice and simple.18:41
notmorgani mean... 3.x mysql it would be broken18:41
jlkyou know somebody is going to make you make it work on db218:41
notmorganbut make sure that whatever version of (mysql|pgsql) you're developing against is the minimum specified version18:42
dolphmjlk: =P18:42
jlkI wish I were joking18:42
notmorganand/or specify a hard minimum18:42
* notmorgan would still love to just drop the ORM completely18:42
dstanekjlk: db2 users can just upgrade the old way18:43
dstaneknotmorgan: from all the code?18:43
jlkthat's a fair point18:43
jlkso, the only sad thing I see18:43
*** sc68cal has joined #openstack-keystone18:43
notmorgandstanek: minimums should be documented18:43
jlkwe'll have to be _already_ on Newton in order to make use of this for Newton+18:43
* sc68cal also heard something about online migrations and rolling upgrades18:43
jlkso I'll have at least one more "hard" migration18:44
notmorgandstanek: we can just ignore people who are breaking documented minimums18:44
dstaneknotmorgan: no, i mean about removing ORMs18:44
notmorgandstanek: and we need to make sure we test the documented minimum version of mysql.18:44
jlkso, if I lay down newton code and config (but don't restart the service), I get the new keystone-manage18:44
notmorgandstanek: oh i would LOVE to just drop it and use DDL specific/optimised sql (and only support mysql18:44
jlkwhich means, I should be able to do this?18:44
jlklike Mitaka -> Newton via the live method?18:44
notmorgandstanek: but that is not some argument i could win.18:45
notmorgandstanek: as a way forward for keystone :P18:45
lbragstadjlk yeah - that should technically work since Newton code would be used to create the triggers18:45
jlkwell I know what I'll be testing18:45
dstaneknotmorgan: we'd just write our own, see that it's inferior and then move back to sqla18:45
dolphmjlk: i haven't personally reviewed our "legacy" migrations thus far (mitaka -> master), but if they're all additive, then you'll be able to do zero downtime mitaka -> newton18:46
jlkoh right, because you haven't drawn a line in the sand yet on what kind of migrations you allow?18:46
notmorgandstanek: possibly not. but since we support > 1 RDBMS, an ORM is the only sane option18:46
notmorgandstanek: and that isn't to say we wouldn't use SQL-A, just not necessarily the ORM layer18:47
dstaneknotmorgan: fair enough. although even if we use sqla-core we'd implement our own ORM on top to serialize data into our own objects, but that isn't a bad thing18:48
*** adrian_otto has quit IRC18:48
dolphmjlk: we have a patch in review to block different types of operations in each phase of the upgrade, if that's what you mean18:49
* dolphm has to step away for a bit18:49
*** amakarov is now known as amakarov_away18:50
notmorgandstanek: fair, i mean we would be building highly mysql-specific and optimised operations in that case vs a compromise that works on all (inc. sqlite) RDBMSs18:51
dolphmnotmorgan: have you reviewed lbragstad's migration?18:52
lbragstadWIP review here -
*** gordc has joined #openstack-keystone19:01
*** ddieterly[away] is now known as ddieterly19:04
*** haplo37__ has joined #openstack-keystone19:05
*** su_zhang has joined #openstack-keystone19:07
*** su_zhang has quit IRC19:11
*** b3rnard0-b0n-h4r is now known as d34dh0r5319:14
*** ddieterly is now known as ddieterly[away]19:15
lbragstadthoughts? comments? questions?19:16
*** ddieterly[away] is now known as ddieterly19:18
*** sdake has quit IRC19:20
*** thumpba_ has joined #openstack-keystone19:39
*** thumpba_ has quit IRC19:39
*** thumpba has quit IRC19:41
openstackgerritRichard Avelar proposed openstack/keystone: POC sql query revoked tokens
lbragstadi think the test_walk_versions test has a bug in it - too... I have a feeling that it's not upgrading the repositories in order when doing the upgrade of the contract repository19:42
lbragstadit fails to remove the triggers because they don't exist19:42
henrynashlbragstad: back on19:42
lbragstadhenrynash o/19:42
lbragstadhenrynash figured you'd be on soon19:42
*** ravelar has quit IRC19:43
henrynashlbragstad: where do we stand....?19:43
*** ravelar has joined #openstack-keystone19:44
lbragstadhenrynash looks like stevemar is going to email the -dev list to get some feedback19:45
lbragstadhenrynash but dolphm dstanek myself, notmorgan mordred rderose and jlk all visited about it19:46
henrynashstevemar, lbragstad: feel free to use as a working, simple example of the trigger approach19:46
lbragstadjlk was able to provide some useful info from an operator perspective19:46
lbragstadhenrynash I do have some questions19:47
rderoseyeah, jlk seemed to like it in the end19:47
henrynashlbragstad: shoot19:47
henrynashrderose, lbragstad: I spent today re-looking at the original (nova-based idea), and actually talked my way back to the trigger idea as it being easier19:48
*** hockeynut has joined #openstack-keystone19:48
henrynashrderose, lbragstad: well, to be accurate, it contains a small amount of hard stuff which is very localized...and the rest is much easier19:49
rderosehenrynash: true19:49
lbragstadI cannot get test_walk_versions to pass to save my life19:49
rderosehenrynash: no matter what, there will be some pain points19:49
lbragstadthere is always going to be pain19:49
henrynashlbragstad: it's failing for postgresql only?19:50
*** ravelar has quit IRC19:50
lbragstadhenrynash no it fails with sqlite19:50
lbragstadhenrynash that trace is what happens when I run it locally19:50
henrynashlbragstad: looking it it now19:50
lbragstadhenrynash I don't think you're hitting that because you're using 'IF EXISTS' in the statement19:51
lbragstadwhich won't make it fail hard19:51
lbragstadso - it would appear that test_walk_versions isn't running the expand repository fully before running the contract repository (?)19:51
*** asettle has joined #openstack-keystone19:53
henrynashlbragstad: I did have suspicions about that, actually...since I think I saw a warning sign sometime to say it didn't exist....I'll experiment with my patch tonight to see if I can debug it19:53
henrynashlbragstad: btw, in your patch, I assume you are using the triggers to block write access because it's too hard to embed an encrypt step as part of the stored procedure!19:55
lbragstadhenrynash yes -19:55
lbragstadhenrynash i have a feeling that would be impossible19:55
henrynashlbragstad: you are probably correct!19:55
lbragstadsince it would have to encrypt credentials (and decrypt them) using the same exact crypto implementation and keys used by the cryptography library19:56
lbragstadin order to effectively copy the data back and forth from the old/new schemas19:56
henrynashlbragstad: that's technically what we would want to do in the trigger, but your fallback to RO for a period is also a good example of what you can do if you cannot provide the patch update required in the trigger19:56
*** gordc has quit IRC19:57
lbragstadhenrynash right - it's a strange edge case19:57
lbragstadfor the trigger route19:57
*** BjoernT has joined #openstack-keystone19:58
openstackgerritLance Bragstad proposed openstack/keystone: Implement encryption of credentials at rest
*** su_zhang has joined #openstack-keystone20:01
*** hockeynut has quit IRC20:01
*** ravelar has joined #openstack-keystone20:02
*** marekd2 has joined #openstack-keystone20:02
*** hockeynut has joined #openstack-keystone20:03
*** ravelar has quit IRC20:03
*** asettle has quit IRC20:03
*** ravelar has joined #openstack-keystone20:03
*** BjoernT has quit IRC20:04
*** su_zhang has quit IRC20:06
*** marekd2 has quit IRC20:07
*** su_zhang has joined #openstack-keystone20:07
*** su_zhang has quit IRC20:11
*** lamt has quit IRC20:19
*** tqtran has quit IRC20:22
*** tonytan4ever has quit IRC20:36
*** nkinder has quit IRC20:39
*** tonytan4ever has joined #openstack-keystone20:40
*** edtubill has quit IRC20:41
*** su_zhang has joined #openstack-keystone20:42
bknudsonI didn't think I'd be able to recreate in a dev environment, but I'm seeing in my dev system (not devstack)20:50
openstackLaunchpad bug 1600394 in OpenStack Identity (keystone) "memcache raising "too many values to unpack"" [Critical,Confirmed] - Assigned to David Stanek (dstanek)20:50
*** roxanag__ has quit IRC20:51
*** tqtran has joined #openstack-keystone21:01
openstackgerritRon De Rose proposed openstack/keystone: Relax the requirement for mappings to result in group memberships
lbragstadhenrynash do you happen to get a migrate.exceptions.DatabaseAlreadyControlledError at all?21:02
lbragstadhenrynash so far - i've got my patch down to two failures.21:03
*** roxanagh_ has joined #openstack-keystone21:03
openstackgerritLance Bragstad proposed openstack/keystone: Modify sql banned operations for each of the new repos
openstackgerritLance Bragstad proposed openstack/keystone: Implement encryption of credentials at rest
lbragstadhenrynash ^21:05
lbragstadthat just seems to fail because of the repo upgrade order of test_walk_versions and that weird database already controlled error21:06
lbragstadhenrynash checking out your patch locally to see if it does the same thing mine does21:08
*** edtubill has joined #openstack-keystone21:10
henrynashlbragstad: it does....and it fails in test_sql_upgrade the same way...only for sqlite21:16
lbragstadhenrynash yup21:16
lbragstadhenrynash i'm going to paste what I did in the review so it's persisted somewhere other than chat21:16
lbragstadhenrynash here are the failures I saw -
henrynashlbragstad: agreed...tis a bit weird21:20
bknudsonhere's what the line is like that memcache gives back that the lib isn't expecting:
bknudsonit goes on like that for a while.21:21
*** roxanagh_ has quit IRC21:21
bknudsonhappens with a single client.21:22
henrynashlbragstad: I did also try a create trigger, drop trigger, create trigger sequence in the expand phase and the trigger really is getting created....(and other tests would fail if it wasn't) just seems that for some reason in the contract phase it's no longer there...21:23
lbragstadhenrynash I wonder if that is because the contract phase is only attempting to upgrade the contract repo?21:23
henrynashlbragstad: or as you said...the contract tests are not properly running the expand phase first21:23
lbragstadhenrynash that would be my wild theory21:23
lbragstadbut that's only based on what it smells like21:24
*** shaleh has joined #openstack-keystone21:24
henrynashlbragstad: but I *tried* to run the other upgrades first....maybe that's not working for some reason21:24
shalehayoung: you around still?21:24
lbragstadhenrynash where do you attempt to do that?21:24
lbragstadfor test_walk_versions?21:24
henrynashlbragstad: in the setup on the ContractSchema test class21:25
lbragstadhenrynash that's not in is it?21:26
lbragstaddoesn't look like it21:26
lbragstadi'm trying to find where that order is enforced for test_walk_versions21:26
lbragstadhenrynash ah - i see you do it here
shalehstevemar: how about you?21:28
henrynashlbragstad: yep...21:33
lbragstadhenrynash it doesn't look like we attempt that kind of process in
lbragstadand if test_walk_versions run against repositories individually - it would make sense that we are seeing this21:34
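The ordering constraint lbragstad and henrynash are circling can be written down directly: walking the contract repo is only valid after the expand and data-migration repos have been fully upgraded. A hedged sketch; the repo names mirror keystone's expand/migrate/contract split, but the `upgrade` callable is hypothetical, not the real sqlalchemy-migrate API:

```python
# Enforce that repositories are always upgraded in dependency order, so
# a contract walk never runs against a schema whose triggers were never
# created by the expand phase.
REPO_ORDER = ["expand", "data_migration", "contract"]

def upgrade_through(target, upgrade):
    """Run `upgrade` on every repository up to and including `target`."""
    if target not in REPO_ORDER:
        raise ValueError("unknown repository: %s" % target)
    for repo in REPO_ORDER:
        upgrade(repo)
        if repo == target:
            break
```

If test_walk_versions walks each repo in isolation instead of going through something like this, the contract migrations would try to DROP triggers that were never created, which matches the failure described above.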
henrynashlbragstad: we do in my version....have you rebased on that?21:34
*** nkinder has joined #openstack-keystone21:35
lbragstadhenrynash i have based on
lbragstadhenrynash is there another patch I should be aware of?21:35
*** pauloewerton has quit IRC21:35
henrynashlbragstad: no, that's enough...if you look at the contract setup of you'll see I do something similar21:36
lbragstadhenrynash line 303?21:37
henrynashlbragstad: yes21:38
henrynashlbragstad: indeed21:38
henrynashlbragstad: I'm a bit suspicious as to whether my swapping between repos, upgrading the previous ones etc. in both sql_upgrade and sql_banned is really working right all the time...I did struggle a bit to make this work at all21:41
lbragstadhenrynash would running this concurrently make a difference?21:41
henrynashlbragstad: how do you mean?21:42
lbragstadall tests running with a single thread versus many21:42
henrynashlbragstad: ah, you mean the tests? hmm21:42
henrynashlbragstad: interesting idea21:42
henrynashlbragstad: would be worth experimenting with that...21:43
*** haplo37__ has quit IRC21:43
lbragstadhenrynash what if one thread updates the expand repo before another thread attempts to do something with the contract repo for example?21:43
lbragstadhenrynash well - actually21:43
lbragstadthat wouldn't make sense21:43
lbragstadbecause i can run test_walk_versions as a single test and it still fails21:43
henrynashlbragstad: hmm,21:43
lbragstadtox -e py27 -- keystone.tests.unit.test_sql_banned_operations.TestKeystoneContractSchemaMigrationsSQLite.test_walk_versions fails for me in isolation21:44
henrynashlbragstad: off to mull on it....grab a shower and some late food, back on again in a while21:44
lbragstadhenrynash sounds good21:44
*** sdake has joined #openstack-keystone21:48
*** roxanagh_ has joined #openstack-keystone21:49
*** sdake_ has joined #openstack-keystone21:50
*** nkinder has quit IRC21:53
notmorganbknudson: i've been poking at that one21:54
notmorganbknudson: it's a weird issue honestly21:54
*** sdake has quit IRC21:54
bknudsonnotmorgan: I can recreate the validate token issue consistently.21:55
bknudsonnever saw it on devstack so might be interesting to see what's different.21:58
*** su_zhang has quit IRC21:58
*** ddieterly is now known as ddieterly[away]21:59
*** su_zhang has joined #openstack-keystone21:59
*** adriant has joined #openstack-keystone21:59
*** Ephur has joined #openstack-keystone22:00
bknudson2016-08-24 21:59:29.518 21277 ERROR keystone.common.wsgi Exception: key '1921523d6734d44e88ed58dfc76ef681a36b8e9b' for 'keystone.revoke.core:_list_events|None' gave error parsing line, line is VALUE 1921523d6734d44e88ed58dfc76ef681a3eg22:00
lbragstadcc dstanek ^22:00
bknudson2016-08-24 21:59:30.117 21277 ERROR keystone.common.wsgi [req-a6a7154f-de74-4090-8fa8-452d5ca33f99 bb24ac3236014d58ac34b9446b3f0b61 2163b9d114d54da1b682bbcac0608cdf - default default] key '1921523d6734d44e88ed58dfc76ef681a36b8e9b' for 'keystone.revo22:01
bknudsonke.core:_list_events|None' gave No serialization handler registered for type 'type'22:01
bknudsonsame key different exception22:01
*** sdake_ has quit IRC22:01
notmorganbknudson: hmmm22:02
notmorganbknudson, dstanek: responded to comments on
notmorganhave a question before I correct the 2 real blocking issues.22:03
*** sdake has joined #openstack-keystone22:03
bknudson2016-08-24 22:07:27.820 32201 ERROR keystone.common.wsgi Exception: key '1921523d6734d44e88ed58dfc76ef681a36b8e9b' for 'keystone.revoke.core:_list_events|None' gave 'unicode' object has no attribute 'payload'22:08
bknudsondifferent error again.22:08
bknudsonI guess it's always going to be the same string for the same function call22:09
notmorgansince it's a SHA1 hash22:09
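notmorgan's point is that the repeated key is deterministic: dogpile.cache's sha1 key mangler hashes the generated "namespace:function|args" string, so the same call always maps to the same memcache key. A small illustration; the input string is taken from the log's error message, but I have not verified it reproduces that exact digest:

```python
import hashlib

# dogpile.cache-style key mangling: sha1 of the generated cache key.
# The same function + arguments always yield the same 40-char hex key.
raw_key = "keystone.revoke.core:_list_events|None"
mangled = hashlib.sha1(raw_key.encode("utf-8")).hexdigest()
```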
bknudsonI don't understand why it doesn't fail all the time because keystone must be using this on every call I'm doing.22:10
bknudsonthe value that memcache returns changes22:11
notmorganthis is going to be related to the request_local cache (in theory)22:11
notmorganas well22:11
notmorganso 3 sources of data to validate: thread.local RequestCache, Memcache, DB22:12
notmorganI'm guessing someone screwed up the data in the RequestLocal cache subtly when refactoring code.22:12
*** roxanagh_ has quit IRC22:14
bknudsonif what's stored in memcache is pickled this does look like pickle string22:14
notmorgangerrit won't save my changes via the web interface22:15
*** roxanagh_ has joined #openstack-keystone22:15
openstackgerritMorgan Fainberg proposed openstack/keystone: Filter data when deserializing RevokeEvents
*** sileht has quit IRC22:16
*** spedione is now known as spedione|AWAY22:16
notmorganbknudson: hm.22:16
notmorganbknudson: i mean if somehow it's getting the data from memcache and then trying to de-msgpack it22:16
bknudsonI should be able to compare a good value vs a bad one.22:16
bknudsonnot sure if that will be interesting22:17
*** ddieterly[away] is now known as ddieterly22:17
*** sileht has joined #openstack-keystone22:18
bknudsonnotmorgan: this one's different:
*** marekd2 has joined #openstack-keystone22:19
bknudsonnotmorgan: compare with -- for some reason get <key> returns a different value.22:20
notmorganthe second one is a sane response/non erroring?22:21
notmorganbecause it looks like we are getting a response that isn't in a CachedValue wrapper object22:21
bknudsonnotmorgan: I know the first one is an exception, the 2nd one I don't know if that was ever used.22:22
notmorgan(hence the no attribute .payload)22:22
bknudsonI was surprised because I thought they'd be the same.22:22
notmorganit should be the same net result tbh22:22
bknudsonI could probably turn on memcache logging if that would be interesting22:23
notmorganthis is the one with the unpack values from the memcache library, right?22:23
bknudsonnotmorgan: is me connecting to memcached using netcat and doing get <key>22:23
*** marekd2 has quit IRC22:24
bknudson is from the keystone log where I added more info to the exception (the value and the key)22:24
* notmorgan nods.22:24
notmorganand you're debugging the too many values to unpack (or not enough) bug?22:24
bknudsonI'll try the memcache log and see if I can track the value for a key.22:24
notmorganor is this a different bug?22:24
*** ddieterly has quit IRC22:24
bknudsonyes, I was seeing ValueError: too many values to unpack22:25
bknudsonI caught that exception and raised it as "error parsing line, line is %s"22:25
notmorganwhere that error is coming from seems like... it's something wrong with the response from memcache server itself/the memcache library22:26
bknudsonand then I added a catch to dogpile.cache.region so that it prints out the key and orig_key22:26
bknudsonI would guess the string here is just not a valid one.22:26
bknudsonresp, rkey, flags, len = line.split()22:27
bknudsonVALUE 302516de06a66748926f0b482c739f437a4b9015 1 71022:27
bknudsonVALUE 302516de06a66748926f0b482c739f437a4b9015 f80292d893903849622:27
bknudsonfirst VALUE works, second doesn't22:27
bknudsonI assume the len is 710 which is ok.22:28
bknudsoninteresting that the invalid one still looks like pickle data22:28
notmorganand that actually should be fine.22:28
bknudsonbut the VALUE line is invalid since it doesn't have the length.22:29
bknudsonand I assume flags is supposed to be 122:29
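[Editorial note: a minimal sketch of the four-token unpack quoted above (`resp, rkey, flags, len = line.split()`), not the actual python-memcache source. A well-formed `VALUE <key> <flags> <bytes>` line splits into exactly four tokens; a corrupted line with the wrong token count raises ValueError, either "too many" or "not enough" values to unpack, which is the error being chased here.]

```python
# Sketch (assumed names, following the snippet quoted in the log) of
# the response-line parse that python-memcache performs. The "good"
# and "bad" lines are the two VALUE lines pasted above.
def parse_value_line(line):
    # A valid retrieval response line has exactly four fields.
    resp, rkey, flags, length = line.split()
    return resp, rkey, flags, int(length)

good = "VALUE 302516de06a66748926f0b482c739f437a4b9015 1 710"
# The corrupted line is missing the length field (payload bytes bled
# into the line), so the 4-way unpack fails with ValueError.
bad = "VALUE 302516de06a66748926f0b482c739f437a4b9015 f80292d8939038496"

print(parse_value_line(good))
try:
    parse_value_line(bad)
except ValueError as e:
    print("parse failed:", e)
```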
notmorganit's missing something22:29
*** markvoelker has joined #openstack-keystone22:29
notmorganthis really looks like a mistake in python-memcache and / or memcache server returning something weird based on that22:29
bknudsonso the invalid one has p13 ..., whereas the valid value has p1 ...22:30
bknudsonso it's like it only has the end22:30
*** michauds has quit IRC22:30
bknudsonlooks like p1 , p2, etc., is just how pickle serializes a list22:31
bknudsonprint(pickle.dumps(['abc', 'def', 'ghi', 'jkl']))22:31
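[Editorial note: the `p1`, `p2`, ... pattern bknudson spotted is the ASCII memo markers that pickle protocol 0 (the Python 2 default) writes after each stored object. On Python 3 you must request protocol 0 explicitly to reproduce it:]

```python
import pickle

# Protocol 0 is the text protocol; each item in the list is followed
# by a "pN" memo marker, matching the raw memcached value in the log.
data = pickle.dumps(['abc', 'def', 'ghi', 'jkl'], protocol=0)
print(data)
```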
*** jrist has quit IRC22:33
bknudsonlooks like the one printed out in the error is just the valid one with the start stuff missing.22:33
*** hockeynut has quit IRC22:35
bknudsonso maybe the value in memcache is ok and it's the read that's getting messed up somehow22:35
notmorgani'm not sure how we would get a different result from a read one time but not the next?22:36
notmorganunless there is something really wonky happening.22:36
bknudsonI'm not going to rule out something really wonky happening22:37
notmorganbknudson: cosmic rays!22:38
bknudsonwe need to bury the server much deeper.22:38
bknudsonTime to try the memcached log.22:40
*** jrist has joined #openstack-keystone22:45
bknudsonI wonder if the "keystone.revoke.core:_list_events|None" isn't hit more often because it's a large list and also it's being used all the time (in this test scenario)22:49
notmorganoh wait22:49
notmorganhm. oh no the issue i was thinking of would be on write22:49
notmorgannot read22:49
*** rcernin has quit IRC22:53
*** bigdogstl has joined #openstack-keystone22:54
*** zigo has quit IRC22:56
bknudsonnotmorgan: here's a full stack trace if that helps:
*** sdake has quit IRC22:57
*** zigo has joined #openstack-keystone22:57
*** shaleh has quit IRC22:58
notmorganwhat was the line you added for debugging/catch?22:59
*** bigdogstl has quit IRC22:59
notmorganyou added it in right?22:59
notmorganbknudson: uh. weird... from what i can tell  VALUE 302516de06a66748926f0b482c739f437a4b9015 1 71023:01
notmorganshould be the one erroring23:01
notmorgansince we're getting more values to unpack than expected23:01
notmorganor.. ugh. this is weird23:01
bknudsonthere's the changes:
notmorganthis error doesn't make sense.23:03
*** markvoelker has quit IRC23:04
notmorganbknudson: can you confirm how much data is in that key?23:07
*** sdake has joined #openstack-keystone23:07
notmorganbknudson: i wonder if it's an issue with socket and not receiving all the data/abnormal early truncation23:07
notmorganor something similar (then again... WHY is it "too many values")23:07
notmorganthis is bothering me.23:08
bknudson35371 Aug 24 22:52 test.txt23:08
bknudsonVALUE 1921523d6734d44e88ed58dfc76ef681a36b8e9b 1 3530823:08
notmorganhm, ok yeah that should be a sane value size23:08

bknudsonI wrote the value to a file since it was too big to look at in the window23:08
notmorganwe've done things with 65K+ so that shouldn't be an issue23:08
bknudsonthat appears to be the entire revocation list or event list or whatever23:08
bknudsonalthough there's only 146 events23:09
bknudsonthat's about 241 bytes per event23:09
notmorganok so stupid question: can you get the key directly with python-memcache (not through dogpile)23:09
bknudsonI'll give it a shot.23:10
*** tqtran has quit IRC23:10
*** atod has joined #openstack-keystone23:11
notmorganbknudson: because dogpile doesn't do anything interesting besides .get/.set23:12
bknudson>>> mc.get('1921523d6734d44e88ed58dfc76ef681a36b8e9b')23:18
bknudson([<keystone.models.revoke_model.RevokeEvent object at 0x7ff6aad18650>, <keystone.models.revoke_model.RevokeEvent object at 0x7ff6a566c810>, <keystone.models.revoke_model.RevokeEvent object at 0x7ff6a566c850>, <keystone.models.revoke_model.RevokeEvent object at 0x7ff6a566c890>, <keystone.models.revoke_model.RevokeEvent object at 0x7ff6a566c8d0>, <keystone.models.revoke_model.RevokeEvent object at 0x7ff6a566c910>, <keyston23:18
bknudsonlooks like it automatically deserializes.23:18
bknudsonso I assume I'd get an exception if it failed to get the value23:18
notmorganas would be correct for pickle23:18
notmorganalso at the end of the datastructure you should have a timestamp23:19
bknudson>>> len(mc.get('1921523d6734d44e88ed58dfc76ef681a36b8e9b')[0])23:19
notmorgan([values], ts) basically23:19
bknudson{'ct': 1472080588.99701, 'v': 1})23:19
notmorganthat is 100% valid23:19
bknudsonit's kind of interesting it deserializes because it has to have access to the keystone.models23:20
notmorganand how it should look. now i just don't know why it's getting too many values to un...23:20
bknudsonI didn't import that23:20
notmorganbut you have keystone in your python path23:20
notmorganpickle does magic23:20
notmorganit's why it's so bad23:20
bknudsony, scary.23:20
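[Editorial note: the `([...], {'ct': 1472080588.99701, 'v': 1})` tuple bknudson dumped matches what dogpile.cache stores per key: a (payload, metadata) pair, where 'ct' is the creation timestamp and 'v' is the value-format version. A plain-tuple sketch, not dogpile's actual CachedValue class:]

```python
import time

# Assumed helper mirroring the stored shape seen in the log:
# payload first, then metadata with creation time and version.
def make_cached_value(payload):
    return (payload, {"ct": time.time(), "v": 1})

value = make_cached_value(["event-1", "event-2"])
payload, meta = value
```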
*** edmondsw has quit IRC23:20
notmorganso... i want to try something23:21
notmorgansee if the issue goes away.23:21
notmorganbut it requires a keystone restart >.<23:21
bknudsonthis is a dev box so I can restart keystone any time.23:21
notmorganbut i worry about restarts and losing duplication of the issue23:21
bknudsonI've restarted several times and had no problem recreating23:21
notmorganbut in short, stop using the memcachepool backend23:21
notmorganuse the normal dogpile memcache backend23:22
bknudsonthe key for keystone.revoke.core:_list_events|None isn't going to change.23:22
bknudsony, that's an easy change.23:22
notmorganbecause the pool does hackery in the memcache library23:22
bknudsonI've got a change up to devstack and our installer to switch to that already23:22
bknudson^ devstack change23:23
jamielennoxbknudson, notmorgan: does that mean in general we want to use dogpile.memcached instead of oslo.memcached_pool?23:24
notmorganjamielennox: possibly23:24
notmorganif you're not using eventlet almost assuredly23:24
jamielennoxor recommend oslo only when eventlet is running23:24
bknudsonjamielennox: if you're using eventlet with 1000 green threads you would want to use the pool23:25
notmorganwell since eventlet is deleted in mitaka23:25
notmorgandon't run keystone with eventlet mitaka and later :P23:25
jamielennoxyea, but i'm thinking a well that memcache_pool came out of auth_token so its always running there23:25
jamielennoxand i need to fix that review that transitions to oslo.cache23:26
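[Editorial note: the backend switch being tested is a keystone.conf change; a sketch of the [cache] section, with option names as used in this era (oslo_cache.memcache_pool vs. dogpile.cache.memcached) but server address assumed:]

```ini
[cache]
enabled = true
# was: backend = oslo_cache.memcache_pool
backend = dogpile.cache.memcached
memcache_servers = localhost:11211
```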
*** hockeynut has joined #openstack-keystone23:30
bknudsonok, switched keystone to the dogpile memcache driver.23:31
notmorgani hope this eliminates the issue23:32
notmorganbecause then the fix is "stop doing weird hack-y things"23:32
bknudsonnope, hit it again.23:33
notmorgansame exact exception?23:33
notmorgantoo many values to unpack?23:33
bknudsonyep, same error23:33
bknudson2016-08-24 23:31:32.913 9976 ERROR keystone.common.wsgi Exception: key '1921523d6734d44e88ed58dfc76ef681a36b8e9b' for 'keystone.revoke.core:_list_events|None' gave error parsing line, line is VALUE 1921523d6734d44e88ed58dfcg1523:33
*** slberger has left #openstack-keystone23:34
bknudson>>> len(mc.get('1921523d6734d44e88ed58dfc76ef681a36b8e9b')[0])23:34
*** lamt has joined #openstack-keystone23:34
bknudsonso can still get the value from the server23:34
notmorgani just wanted to confirm it's in fact toomany values to unpack error23:34
notmorganvs. something else23:34
notmorganweird that mc.get() works but you error when calling via dogpile.23:34
bknudsonI could try not having threads for keystone23:35
notmorganthis running under uwsgi?23:35
notmorganor apache?23:35
bknudsonyes uwsgi23:35
bknudsonemperor mode23:36
bknudsonthanks to jamielennox23:36
notmorganmaybe we really aren't threadsafe23:36
jamielennoxoh, emperor mode seemed a good way to run it23:36
bknudsonthreads = 6 !23:36
jamielennoxi guessed at most of the values :P23:36
bknudsonthere was a mailing list thread where apparently people are running other services in uwsgi / mod_wsgi23:37
bknudsonI was surprised by that.23:37
*** iurygregory_ has joined #openstack-keystone23:37
bknudsonthought they all expected eventlet.23:37
notmorganthey mostly do23:38
notmorgansome folks have done the legwork to make it happen23:38
notmorganbut it requires disabling some stuff23:38
bknudsonfailed with threads=1 !23:39
bknudsongoing to try with enable-threads = False in uwsgi config23:41
bknudsonnotmorgan: promising... have been running for a while and haven't seen the 500 error (seeing 502 errors instead for some reason)23:48
*** Gorian|work has quit IRC23:49
bknudsonsetting enable-threads = False seems to be pretty stable.23:52
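[Editorial note: the uwsgi tweak that stabilized things, sketched as an ini fragment; `enable-threads` and `threads` are real uWSGI options, the rest of the vassal config is assumed:]

```ini
[uwsgi]
; was: threads = 6
enable-threads = false
```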
bknudsonnot sure why I'm getting Bad Gateway errors... must be coming from haproxy. Maybe timing out requests23:55
* notmorgan nods.23:56
*** roxanagh_ has quit IRC23:57
bknudson[Wed Aug 24 23:55:46.420068 2016] [core:notice] [pid 760:tid 140129222494080] AH00051: child pid 8812 exit signal Segmentation fault (11), possible coredump in /etc/apache223:57
bknudsonNot sure what that's about23:57
bknudson[Wed Aug 24 23:26:53.546161 2016] [proxy:error] [pid 15328:tid 140128877999872] (111)Connection refused: AH00957: uwsgi: attempt to connect to ( failed23:58
openstackgerritMerged openstack/keystone: Update `href` for keystone extensions

Generated by 2.14.0 by Marius Gedminas - find it at!