18:04:56 <morganfainberg> #startmeeting keystone
18:04:57 <openstack> Meeting started Tue Jul 28 18:04:56 2015 UTC and is due to finish in 60 minutes.  The chair is morganfainberg. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:04:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:05:00 <openstack> The meeting name has been set to 'keystone'
18:05:03 <morganfainberg> Hi and stuff
18:05:09 <bknudson> hi
18:05:22 <henrynash> yep, I'm here
18:05:27 <morganfainberg> #topic Reviews needed for List Role Assignments Performance patch
18:05:30 <morganfainberg> henrynash: lead on
18:05:33 <henrynash> ok, So this is something that samueldmq worked on during Kilo, but we bumped it
18:05:41 <henrynash> https://review.openstack.org/#/c/137202/
18:05:49 <samueldmq> henrynash: it has been almost a year now :)
18:06:15 <henrynash> although it is great for performance…we really want to do this as well so we can base all code that needs to work out group and inherited roles on this
18:06:34 <henrynash> so just a plea for some review time so we can get it into L2 hopefully
18:06:47 <morganfainberg> won't land for L2 unless it's gating today
18:06:50 <morganfainberg> FYI
18:07:05 <henrynash> ok, so if not L2, then L3 !
18:07:07 <morganfainberg> so assume it's an L3
18:07:33 <henrynash> there are a whole string of patches that follow this that start using this backend filtering
18:07:42 <ayoung> oyez
18:07:50 * morganfainberg waves ayoung to the front of the room.
18:08:00 <henrynash> no need to take up more time here, unless anyone has specific questions
18:08:16 <ayoung> henrynash, we have hard numbers on the perf issues?
18:08:31 <dolphm> I can probably help validate performance
18:08:40 <morganfainberg> henrynash: if it's a "perf" reason to do the work, I'd like to see numbers validating the gain/benefit
18:08:44 <dolphm> Been doing a lot of that lately
18:08:45 <morganfainberg> dolphm: ++ thanks.
18:08:49 <samueldmq> dolphm: ++ it'd be great to have some benchmarking on that :)
18:08:57 <dstanek> i have not looked at this review since last week, but i'll get through it after this meeting
18:09:15 <henrynash> ayoung: no…but with the current code, any list of role assignments means we retrieve ALL role assignments in the whole system, and then filter them in the controller
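The change under review can be sketched abstractly: today the controller filters after the driver returns everything, while the patch pushes the filter down into the backend (a SQL WHERE clause in practice). A toy illustration with made-up data and function names, not keystone's real interfaces:

```python
# Toy contrast: filtering role assignments in the controller (current
# behaviour) vs. in the driver/backend (what the patch proposes).
ASSIGNMENTS = [  # stand-in for the assignment table
    {'user_id': 'u1', 'project_id': 'p1', 'role_id': 'r1'},
    {'user_id': 'u2', 'project_id': 'p1', 'role_id': 'r2'},
    {'user_id': 'u1', 'project_id': 'p2', 'role_id': 'r1'},
]

def list_all_assignments():
    # Current driver call: the whole table crosses the API boundary.
    return list(ASSIGNMENTS)

def controller_filter(user_id):
    # Current controller: filter after fetching everything.
    return [a for a in list_all_assignments() if a['user_id'] == user_id]

def driver_filter(user_id):
    # Proposed: filter where the data lives (a WHERE clause in SQL).
    return [a for a in ASSIGNMENTS if a['user_id'] == user_id]

# Both return the same result; only the amount of data moved differs.
assert controller_filter('u1') == driver_filter('u1')
```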
18:09:20 <samueldmq> dolphm: let me know if you need a hand on that front
18:09:21 <samueldmq> dstanek: thanks
18:09:36 <ayoung> we probably should make sure to include LDAP in there.  I had a data set for a large user list...we should probably build up a "to scale" data set
18:09:44 <bknudson> target https://blueprints.launchpad.net/keystone/+spec/list-role-assignments-performance to l3?
18:09:55 <samueldmq> ldap assignment is dying a terrible death (cc morganfainberg )
18:10:07 <samueldmq> ayoung: ^
18:10:10 <morganfainberg> samueldmq: this is identity not assignment
18:10:11 <ayoung> yeah,  LDAP only for Identity
18:10:19 <henrynash> ayoung: the current code works for both ldap and sql (even though assignment ldap is a lower priority)
18:10:35 <morganfainberg> bknudson: target for l3 when it lands. lets not try and predict it landing.
18:10:39 <ayoung> users and groups in LDAP,  role assignments in SQL is my primary use case
18:10:40 <samueldmq> this is assignment, list_role_assignments
18:11:14 <morganfainberg> bknudson: or if you feel strongly, go ahead and target it - i'll work w/ relmngmnt to untarget if needed down the line
18:11:35 <ayoung> morganfainberg, bknudson ++
18:11:40 <samueldmq> I don't even touch the identity code
18:11:50 <ayoung> dstanek, I still owe you an LDAP functional test, too
18:11:52 <bknudson> I don't feel strongly either way... wasn't sure how we're tracking.
18:12:06 <morganfainberg> bknudson: i'm at the point we should target when it lands if we can
18:12:28 <morganfainberg> rather than saying "this will land at X" and then untargeting later
18:12:36 <dstanek> ++ makes it easier to see what's going on at a glance
18:12:41 <samueldmq> morganfainberg: and that could be done automatically by our tools
18:13:29 <morganfainberg> ok so, from my perspective - if we have clear numbers [since this claimed to be perf] i'm fine with this landing in l3
18:13:48 <bknudson> apparently it's not just perf since a bunch of other stuff is depending on it
18:14:00 <ayoung> dolphm, you've been testing speed, but not memory usage, right?
18:14:04 <samueldmq> bknudson: yes, it moves the expansion logic to the manager level
18:14:07 <bknudson> and I assume the stuff that depends on it isn't needing it just for perf
18:14:12 <dolphm> Correct
18:14:16 <henrynash> bknudson: ++
18:14:53 <dolphm> Our memory usage is generally not alarming
18:14:54 <ayoung> this might be more of a memory perf issue, if all those lists are duplicated.  Might be worth considering for future perf issues
18:15:01 <ayoung> we might hit memory management thresholds
18:15:09 <morganfainberg> memory usage is mostly fine in keystone atm
18:15:17 <ayoung> good to know
18:15:25 <ayoung> list users had some issues that way, IIRC
18:15:32 <morganfainberg> it's worth evaluating - but not in this context
18:15:32 <ayoung> before limiting results
18:15:38 <morganfainberg> as a separate specific evaluation
18:15:44 <samueldmq> if we have lots of group/inherited assignments, that would be toooo bad
18:15:46 <morganfainberg> since we haven't really done this for keystone in the past
18:16:00 <samueldmq> ++
18:16:01 <morganfainberg> lets not make that a requirement here [since we've mostly been fine]
18:16:32 <ayoung> morganfainberg, "for future ..."
18:16:39 <morganfainberg> ayoung: yes.
18:16:56 <henrynash> morganfainberg: Ok, that’s probably enough on that one
18:17:04 <morganfainberg> i see no issue with this landing in l3, please review it
18:17:15 <bknudson> I starred it.
18:17:17 <morganfainberg> unless someone has a concern
18:17:44 <morganfainberg> #topic Request from nova meetup: Document what other projects need to know about keystone
18:17:46 <morganfainberg> bknudson: o/
18:17:58 <bknudson> the nova midcycle was in rochester last week
18:18:14 <bknudson> so I went to it since it's just on the other side of the facility
18:18:30 <bknudson> they had a few questions, so I suggested I'd write up a doc so they'd be able to refer to it in future
18:18:34 <bknudson> mostly about what's v3.
18:18:53 <bknudson> rather than try to do everything in a review I started an etherpad: https://etherpad.openstack.org/p/keystone-info
18:19:07 <bknudson> which I'll copy into a review for our docs next week.
18:19:16 <ayoung> I think that is split into two pieces;  auth token (which is all Nova should care about) and stuff like what Heat and more workflow related projects need
18:19:19 <bknudson> so if you have anything to add just do it.
18:19:39 <ayoung> V3 auth token needs to be well supported.
18:20:07 <jamielennox> for auth_token i almost just need to start again on the docs
18:20:09 <topol> nice writeup bknudson
18:20:22 <jamielennox> there's so much that's been deprecated or otherwise moved that it gets confusing
18:20:58 <ayoung> nice.
18:21:07 <ayoung> dolphm, this kind of gets at why I wrote up that V2.0 bug
18:21:16 <jamielennox> but i agree, there are very few services that should know anything more than what auth_token provides
18:21:21 <ayoung> link in a sec
18:22:16 <morganfainberg> jamielennox: ++ yep, pretty much auth_token and auth [keystoneauth] is the real limit of what services need to know unless they are heat
18:22:21 <morganfainberg> or similar orchestration
18:22:34 <morganfainberg> auth if they need to do independent $things
18:22:38 <morganfainberg> and most don't
18:22:44 <ayoung> but is there ever a case where a V2 token needs to be properly converted to a V3 token, to include domain values?
18:22:59 <samueldmq> or they need to know, let's say, a given project's hierarchy to do quota enforcement
18:23:01 <samueldmq> :)
18:23:03 <jamielennox> and that would be what i would add to that etherpad, yes we explain domains but be explicit that nova should never care about it
18:23:24 <jamielennox> we have lots of requests saying how should i handle domains and the answer is to ignore them
18:23:29 <morganfainberg> ok so lets get that info added to the etherpad
18:23:34 <ayoung> https://bugs.launchpad.net/keystone/+bug/1477373
18:23:35 <openstack> Launchpad bug 1477373 in Keystone "No way to convert V2 tokens to V3 if domain id changes" [Undecided,New]
18:23:46 <morganfainberg> ayoung: wait what?
18:23:47 <bknudson> bots are taking over
18:23:52 <morganfainberg> ayoung: that doesn't make sense at all
18:24:03 <morganfainberg> ayoung: if domain id changes?!
18:24:03 <ayoung> morganfainberg, we discussed at the midcycle.
18:24:13 <ayoung> morganfainberg, from "default" to some UUID
18:24:21 <ayoung> if the default domain changes...
18:24:28 <morganfainberg> if they are changing the default domain...
18:24:37 <morganfainberg> things are going to get weird
18:24:38 <ayoung> ok...lets say we have a setup where there is a lot of V2 tooling
18:24:50 <ayoung> but they are switching over to LDAP for end users
18:24:59 <ayoung> so the LDAP goes into a domain specific backend
18:25:01 <samueldmq> and we allow that https://github.com/openstack/keystone/blob/master/etc/keystone.conf.sample#L791
18:25:33 <bknudson> get rid of your v2 tooling
18:25:37 <ayoung> are we saying that the default domain always *MUST* have a domain ID of _default_?
18:26:07 <morganfainberg> ayoung: i'd say default domain needs to have an id that isn't changed post deployment
18:26:09 <samueldmq> ayoung: we can't as we provide a config option for it
18:26:19 <ayoung> bknudson, people have been building to V2 for a while.  Getting rid of it will take a while
18:26:24 <morganfainberg> unless you're really willing to do all sorts of insane things
18:26:33 <ayoung> morganfainberg, "isn't changed post deployment" doesn't quite cut it
18:26:41 <ayoung> we don't communicate the domain id in any manageable way
18:26:58 <morganfainberg> ayoung: honestly i want to push people towards v3 more and more
18:27:07 <ayoung> the two options we discussed at the mid cycle were "query the config options via a web api" and "put hints in the v2 tokens"
18:27:18 <ayoung> but...maybe the "hints" part is no longer an issue...
18:27:20 * morganfainberg doesn't remember this convo
18:27:23 <dolphm> Because v2 clients are not domain aware, and the default domain is not special in v3
18:27:36 <ayoung> I was thinking PKI, but with Fernet, it is not really a problem...we can always tell them to get v3 tokens...
18:28:09 <browne> some v2 tooling is still necessary from my experience.  i've tried to use OSC but it does not have all commands available in the older CLIs
18:28:29 <morganfainberg> browne: ok so v2 keystone should have no impact on the other CLIs
18:28:30 <dolphm> Browne, like what?
18:28:39 <ayoung> browne, I saw that with neutron, but neutron CLI should work with V3 auth
18:28:41 <bknudson> so you're going to switch the domain_id for a couple of operations and then switch it back?
18:28:57 <ayoung> bknudson, neutron misses routers...maybe even subnets
18:28:58 <browne> like creating provider networks in OSC.  you can do it in neutron CLI but not OSC
18:29:09 <morganfainberg> browne: this sounds like a bug in OSC
18:29:37 <shaleh> OSC has not been updated to support all commands from the project CLIs
18:29:47 <ayoung> ok...so are we finally at the point where we can say "deprecate V2.0" and we will have an answer for all use cases?
18:29:47 <haneef> neutron CLI supports v3
18:29:48 <browne> morganfainberg: agree, but becomes an inhibitor to going to keystone v3.  i was trying to utilize the multi-domain backend feature
18:29:50 <shaleh> file bugs, people will close them
18:29:50 <morganfainberg> so lets fix the bug in OSC
18:30:09 <morganfainberg> browne: this is not a reason we should be holding onto v2 keystone --
18:30:16 <ayoung> if so, and we can honestly kill v2.0 tokens, I'll be very happy
18:30:18 * morganfainberg is really getting tired of auth being tied to the crud interface
18:30:29 <bknudson> if the CLIs aren't deprecated then they need to support v3 auth
18:30:35 <lifeless> morganfainberg: does it feel dirty?
18:30:57 <morganfainberg> lifeless: i am going to avoid a reaaaaaaly long rant on that here ;)
18:31:42 <bknudson> what doesn't work with users in ldap domain?
18:31:42 <ayoung> OK, so we can tell all of the other services "forget that v2.0 token API exists."  right?
18:31:52 <morganfainberg> ayoung: that is what we should do.
18:31:52 <dstanek> are the other CLIs not deprecated? i haven't been paying attention
18:32:05 <ayoung> bknudson, ah,  so if you do an install, the default domain gets all the service users
18:32:14 <morganfainberg> dstanek: i think we're the only one officially saying "no really stop using it and go to osc"
18:32:23 <dstanek> morganfainberg: bummer
18:32:24 <bknudson> dstanek: I haven't seen any other project deprecate their CLI
18:32:27 <ayoung> and then you create a new domain, put ldap in there, and make that the default domain for all the V2 tooling out there
18:32:47 <bknudson> essentially you've got service users in your default domain
18:32:48 <ayoung> it's more than just the osc thing.  It's the keystone.rc files
18:33:04 <morganfainberg> so lets work on making v2 auth die.
18:33:09 <haneef> browne: multi-domain backend will have problems with few apis.  if you are using  list_users ( all users), there is no such operation in  multi_domain_backend.  It is always list_user for a domain
18:33:13 <samueldmq> maybe we should move on .. otherwise we won't cover the other topics in the meeting :(
18:33:29 <morganfainberg> haneef: that isn't really a reason to hang onto v2
18:33:32 <samueldmq> at least the ones listed in the meeting page :)
18:33:42 <morganfainberg> haneef: in fact, i'd say that has no real bearing on v2 vs v3
18:33:45 <jamielennox> the only people that need service users in the default domain now are the projects that mix their keystone_authtoken parameters and if you switch it over to v3 auth then the service doesn't work
18:33:57 <jamielennox> there aren't many of those left (none off the top of my head)
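For context, the mixed configuration jamielennox refers to lives in each service's [keystone_authtoken] section; a v3-plugin-based version looked roughly like this at the time (values illustrative; the auth_plugin option was later renamed auth_type):

```ini
[keystone_authtoken]
auth_plugin = password
auth_url = http://keystone.example.com:35357
username = nova
password = secret
user_domain_id = default
project_name = service
project_domain_id = default
```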
18:33:58 <ayoung> bknudson, but if you have a third-party app that only knows about V2 auth, and your LDAP users will want to use that app, you make the default domain other than the one set up by the installer with the domain id of _default_
18:33:58 <dstanek> samueldmq's waiting to be grilled :-)
18:34:20 <morganfainberg> jamielennox: and i just sent another email to the ML asking people to avoid doing that again
18:34:21 <samueldmq> dstanek: eheh
18:34:25 <jamielennox> morganfainberg: i saw
18:34:25 <morganfainberg> i saw some more cases it was being implemented
18:34:34 <ayoung> that was the context behind that bug.  I'll try to make it clearer
18:34:40 <morganfainberg> ok
18:34:43 <bknudson> ayoung: I think you're going to have problems anyways... what if I have 2 LDAP domains?
18:34:57 <ayoung> bknudson, I'm just trying to solve common case
18:35:00 <morganfainberg> ok i think we're kind of off the topic here and into the weeds
18:35:07 <ayoung> obviously won't work for everyone
18:35:21 <bknudson> update https://etherpad.openstack.org/p/keystone-info with more info
18:35:28 <bknudson> and it'll eventually go in keystone developer docs
18:35:33 <morganfainberg> bknudson: ++
18:35:33 <bknudson> then you can update it there
18:35:39 <morganfainberg> please update the etherpad
18:35:42 <morganfainberg> ok lets move on
18:35:47 <morganfainberg> #topic keystoneauth release? K2K in Horizon is waiting for the K2KAuthPlugin, should we add it in python-keystoneclient so we can speed up things?
18:36:14 <morganfainberg> jamielennox: this is a question for you -- how soon can we get keystoneauth ready to go?
18:36:18 <ayoung> are there library deps?  thought that needed saml2
18:36:23 <bknudson> is this 1.0 release ?
18:36:24 <morganfainberg> we need to dump oslo.config and there were a couple minor things left
18:36:26 <bknudson> or 0.1 or something?
18:36:38 <marekd> ayoung: nope
18:36:41 <morganfainberg> bknudson: this depends on keystoneauth 1.0 *or* needs to go into keystoneclient
18:36:43 <bknudson> how do the docs look?
18:36:45 <jamielennox> i haven't been active on it for a week or two, but i posted a review the other day
18:36:45 <jamielennox> umm
18:36:46 <ayoung> cool
18:36:56 <jamielennox> https://review.openstack.org/#/c/205753/
18:36:56 <morganfainberg> so the biggest blocker is oslo.config
18:37:01 <morganfainberg> probably #2 is docs
18:37:17 <jamielennox> that review just removes everything to do with plugin loading and so removes the stevedore and oslo.config dep
18:37:25 <morganfainberg> ah
18:37:26 <morganfainberg> cool
18:37:36 <bknudson> neutron is being a real stick in the mud with keystoneauth reviews recently
18:37:37 <jamielennox> this would mean that to do all the automated plugin loading for now you would still need to use keystoneclient
18:38:06 <morganfainberg> jamielennox: we need to find out how to do automated plugin loading without keystoneclient and without oslo.config
18:38:07 <jamielennox> but it would mean we could unblock the k2k stuff and the federation stuff that doesn't have a great story around automatic loading anyway
18:38:20 <morganfainberg> jamielennox: lets not walk backwards to requiring ksc for things
18:38:28 <jamielennox> and figure out what we want to do about loading as a separate step
18:38:33 <bknudson> make it a separate project
18:38:50 <bknudson> keystoneauth-config
18:38:51 <morganfainberg> i don't want to force people back to keystoneclient which is what that would do
18:38:52 <jamielennox> morganfainberg: right - i'm not saying we keep it in ksc permanently, just get ksa moving along
18:39:10 <morganfainberg> jamielennox: we can't release a 1.0 if everyone keeps using keystoneclient and has no path forward
18:39:21 <morganfainberg> we need a path forward for 1.x
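The loading question being split out amounts to: given a set of config options, pick and construct an auth plugin class. In keystoneclient this is done with stevedore entry points and oslo.config; a dependency-free toy sketch of the idea (all names hypothetical, not the real keystoneauth API):

```python
# Toy registry mapping auth method names to plugin classes, standing in
# for the stevedore/entry-point loading being split out of keystoneauth.
class Password:
    def __init__(self, auth_url, username, password):
        self.auth_url = auth_url
        self.username = username
        self.password = password

class Token:
    def __init__(self, auth_url, token):
        self.auth_url = auth_url
        self.token = token

PLUGINS = {'password': Password, 'token': Token}

def load_auth_plugin(conf):
    """Build an auth plugin from a flat config dict (hypothetical shape)."""
    conf = dict(conf)
    name = conf.pop('auth_plugin')
    return PLUGINS[name](**conf)

plugin = load_auth_plugin({'auth_plugin': 'password',
                           'auth_url': 'http://keystone:5000/v3',
                           'username': 'demo', 'password': 'secret'})
```

Splitting this registry into its own project is what lets keystoneauth 1.0 ship without a stevedore or oslo.config dependency.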
18:39:33 <jamielennox> bknudson: the failure isn't actually neutrons fault, there is some issue with git cloning the project i haven't figured out yet
18:39:52 <dstanek> jamielennox: is that the job failure i keep seeing?
18:39:53 <bknudson> jamielennox: thanks for looking into it
18:39:56 <rodrigods> if we still need to use ksc, what's your opinion regarding the k2k plugin?
18:40:11 <morganfainberg> rodrigods: we aren't going to require keystoneclient
18:40:12 <jamielennox> morganfainberg: particularly if the auth loading stuff is going into another project then it would be fine to release 1.0 without it
18:40:23 <morganfainberg> jamielennox: then i want that other project ready as well
18:40:43 <morganfainberg> jamielennox: i am not willing to do a "we might or might not do this but we don't know and we aren't fixing anything but depending on this other thing"
18:40:53 <morganfainberg> sorry i'd rather scrub keystoneauth completely
18:40:57 <marekd> morganfainberg: how about rodrigods just depend on existing ksa with some red flag in mind?
18:41:08 <marekd> rodrigods: k2k plugin will not change ....
18:41:11 <dstanek> i asked about http://logs.openstack.org/88/201088/5/check/gate-tempest-dsvm-neutron-src-keystoneauth/4439cd5/logs/devstacklog.txt.gz#_2015-07-28_13_55_04_277 in infra earlier today, but haven't gotten a chance to follow up on their hint
18:41:19 <marekd> so i don't expect your patches will change one day
18:41:28 <marekd> rodrigods: and we will just wait for ksa release
18:41:39 <marekd> morganfainberg: stevemar same could be with moving osc to ksa.
18:41:58 <marekd> morganfainberg: stevemar cause right now federated plugins are quite messy and it's just better to start using ksa.
18:42:02 <marekd> or ksa-saml2 repo even
18:42:11 <morganfainberg> if all we're doing is moving logic out but everyone has to keep doing it the old way and we have no real knowledge of where loading is going
18:42:18 <morganfainberg> we're doing it wrong.
18:42:25 <ayoung> anything else the rest of us need to do?
18:42:42 <morganfainberg> so lets solve where loading is
18:42:50 <morganfainberg> and break the dep on oslo.config *and* not force keystoneclient
18:42:57 <bknudson> I kind of like the separate project for config / loading since then it would shield users from having to know which keystoneauth version they need.
18:43:06 <morganfainberg> bknudson: i'd support it
18:43:08 <jamielennox> ok, i'll look at getting another project spun up for the loading as i've had multiple people suggest it
18:43:13 <morganfainberg> jamielennox: ++
18:43:15 <morganfainberg> thanks
18:43:19 <jamielennox> OSC is very keen on a separate project
18:43:23 <jamielennox> well dtroyer anyway
18:43:27 <morganfainberg> #topic Reseller
18:43:32 <morganfainberg> htruta: o/
18:43:36 <htruta> morganfainberg: o/
18:43:39 <rodrigods> please review
18:43:41 <rodrigods> ^
18:43:42 <htruta> that should be quick
18:43:46 <htruta> yes, please
18:43:50 <raildo> ++
18:43:59 <htruta> should we set a target to L3 ?
18:44:01 <rodrigods> https://review.openstack.org/#/c/157427/
18:44:03 <htruta> we don't have it yet
18:44:03 <rodrigods> start from here
18:44:04 <morganfainberg> again
18:44:09 <morganfainberg> don't target until landing
18:44:22 <morganfainberg> don't try and predict landing, target when it has landed
18:44:52 <htruta> morganfainberg: k
18:44:54 <morganfainberg> anything else on reseller?
18:45:03 <htruta> that's all
18:45:10 <ayoung> samueldmq, your up
18:45:13 <morganfainberg> #topic Dynamic Policies
18:45:17 <ayoung> or you're up
18:45:19 <samueldmq> hi
18:45:20 <ayoung> either way
18:45:27 <morganfainberg> where is the spec?
18:45:32 <morganfainberg> again?
18:45:42 <david8hu> We need to cut the policy overlay piece and go straight to centralizing complete policy files in Keystone.  Most deployers are not changing their policy files, but if they do, they want to test them out locally with the local policy files.  Deployers might as well submit the entire tested policy file to keystone.
18:45:43 <samueldmq> #link https://review.openstack.org/#/c/197980/
18:45:45 <samueldmq> and
18:45:51 <samueldmq> #link https://review.openstack.org/#/c/134655/
18:46:11 <samueldmq> those 2 are the remaining ones, one for ksmiddleware and another for keystone controlling the cache mechanism
18:46:29 <ayoung> I think 80 should be non-controversial;  using HTTP properly
18:46:47 <samueldmq> we've got reviews, I updated the spec, everything looked good but dstanek has some concerns on it
18:46:49 <ayoung> it is the 55 one that is the client side that needs the SFE
18:47:29 <marekd> dstanek: ?
18:47:40 <ayoung> henrynash, so "endpoint_ids" are just the first hack.  eventually, we will resolve the urls to endpoint_ids, but we can do it in steps
18:47:49 <ayoung> endpoint_ids are already implemented by the server.
18:48:20 <samueldmq> basically dstanek's concerns are that, in our current endpoint model, we allow an effective service endpoint to have multiple policies (since multiple interfaces mean multiple ids)
18:48:22 <dstanek> samueldmq and i were discussing today how we could possibly use a list of endpoint ids and pick one for the service
18:48:32 <samueldmq> dstanek: please go ahead
18:48:54 <ayoung> dstanek, "harder than you think": we are aware of how hard it can be, which is why we are going to do endpoint_id first
18:49:32 <dstanek> ayoung: if you allow the service to specify a list of endpoints, which policy do they get if each has a policy defined?
18:49:42 <ayoung> dstanek, are any of your concerns "stop ship" type concerns
18:50:21 <ayoung> dstanek, I would limit it to 1 endpoint id to start in the authtoken config section
18:50:23 <dstanek> ayoung: i think the model is fundamentally wrong :-(  i guess it's possible that we can document an exact setup that would work
18:50:53 <ayoung> a service could, in theory, have multiple config files, one per endpoint on the same server.  In practice, they will share a config file
18:51:03 <david8hu> BTW, did anyone check with nova and others to see if the proposal meets their needs?
18:51:30 <samueldmq> david8hu: this work of policy distribution is independent of what they have/want
18:51:44 <samueldmq> david8hu: we allow policy definition/association, just putting the distribution together now
18:52:06 <david8hu> samueldmq, but they are ultimately the consumers.
18:52:12 <ayoung> david8hu, we've had a distinct lack of response from the other projects on Dynamic policy, with the exception of Horizon, that really needs it
18:52:15 <dstanek> i would rather know if it handles deployer needs/concerns
18:52:19 <samueldmq> david8hu: our consumers here are the deployers
18:52:40 <ayoung> dstanek, horizon is currently caching the policy files.  It would be most useful to UIs
18:52:55 <ayoung> to know that the policy file they had matched the one the endpoint is using
18:53:01 <samueldmq> dstanek: so I've an email draft, but ayoung had some concerns about sending this out at this point
18:53:03 <samueldmq> #link https://etherpad.openstack.org/p/centralized-policy-delivery-operators
18:53:27 <david8hu> samueldmq, all other services still need changes to pick this up don't they?
18:53:40 <ayoung> david8hu,  no, that is the point
18:53:42 <samueldmq> david8hu: no that will be a change in ksmiddleware
18:53:48 <samueldmq> ayoung: ++
18:53:51 <ayoung> they do not need to change, they inherit the config option from keystone
18:53:51 <dstanek> the right thing to do would be to allow the endpoint id (only 1 and not a list) to be specified for each instance of middleware
18:54:01 <david8hu> not even minor changes?
18:54:03 <ayoung> dstanek, ++
18:54:20 <samueldmq> dstanek: ayoung so we don't do url -> discovery ?
18:54:23 <henrynash> ayoung: endpoint_ids are my concern as well…..reading the spec doesn’t explain to me how endpoint_ids work in this context, what the limitations/complexities might be for a deployer etc.  So I have a hard time supporting this as spec’d
18:54:29 <samueldmq> and let the deployer choose the id ?
18:54:31 <ayoung> david8hu, they need to change to use the oslo.policy library anyway.  Beyond that, it is all middleware
18:55:29 <dstanek> samueldmq: i've always said that you have to have a single ID for the middleware to use. i'd rather it just be a policy id, but any id would fix the ambiguity we keep creating
18:55:30 <ayoung> henrynash, I am assuming you are thinking that a single server is backing multiple endpoints with multiple policy files, so how do we get the right one?   Short answer, we don't support that to start
18:55:43 <ayoung> we support "you get one for this server.  Pick one"
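Under that "pick one" rule, the deployer-facing piece would be a single endpoint id pinned in each service's auth_token config, along these lines (option name and value are hypothetical; the real interface depends on the spec under review):

```ini
[keystone_authtoken]
# Hypothetical option sketching the "one endpoint id per server" rule
# discussed here, so the policy fetch from keystone is unambiguous.
policy_endpoint_id = 0199d7e2a2d54b2a9f90d0de1ce0c50f
```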
18:56:06 <morganfainberg> dstanek: the only concern with the policy id is if it's a uuid meaning you need to upload the data to know what the id is... bad bad bad for deployers
18:56:14 <ayoung> dstanek, if we go with the policy ID, we can't update the policy for an endpoint without restarting the endpoint
18:56:16 <henrynash> ayoung: we just don’t explain this in the spec…..other than saying “the deployer can set the endpoint_ids"
18:56:35 <dstanek> ayoung: if the middleware just knows about an id, a service can deploy with multiple pipelines including that middleware with different IDs, right?
18:56:36 <samueldmq> morganfainberg: another point is that we aren't holding the endpoint/policy association in keystone server that way
18:56:46 <samueldmq> morganfainberg: and changing the policy id would require a restart
18:56:52 <dstanek> morganfainberg: endpoint id has the same issue
18:57:06 <ayoung> henrynash, because the goal was to not have to do this... morganfainberg was pushing for a completely automated discovery...we want to support that. but please stop asking for us to boil the ocean...we are shooting for "actually implementable" here
18:57:10 <dstanek> samueldmq: that's a good point
18:58:02 <ayoung> dstanek, right, the goal here is to migrate the policy management from static files/ansible/config file to being centralized in the keystone server
18:58:25 <henrynash> ayoung: I’m just asking for a clear spec….if the answer is one endpoint, fine…just say so….what’s tough to support is a spec that isn’t clear about what is and isn’t supported
18:58:29 <ayoung> if that goal is not acceptable to the team...please speak up now.  This has been my concern, that we are arguing about details, but have not agreed on the big picture
18:58:43 <samueldmq> morganfainberg: should we continue in #keystone ? I think we're out of time .. infra folks should start their meeting
18:58:52 <morganfainberg> we have 1-2min
18:58:56 <samueldmq> ok
18:59:11 <dstanek> ayoung: i could see having a service_id or endpoint_id in the config (but it would be nice to support having multiple instances of middleware) - no lists because that can't work
18:59:28 <ayoung> dstanek, I agree
18:59:33 <ayoung> we'll make that modification
18:59:43 <henrynash> ayoung: I support the principle of a Keystone Policy CMS (that’s what this is)….so it IS the detail for me
18:59:48 <dstanek> ayoung: i'm just wondering if any deployer cares
18:59:50 <morganfainberg> #endmeeting