18:00:39 <stevemar> #startmeeting keystone
18:00:40 <openstack> Meeting started Tue Mar 29 18:00:39 2016 UTC and is due to finish in 60 minutes.  The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:42 <ayoung> \m/  (-v-) \m/
18:00:44 <openstack> The meeting name has been set to 'keystone'
18:00:52 <stevemar> #link https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting
18:00:54 <stevemar> agenda ^
18:01:12 <stevemar> #topic Keystone RC status
18:01:32 <stevemar> we actually have 2 release blockers, reported yesterday :(
18:01:33 <stevemar> #link https://bugs.launchpad.net/keystone/+bugs?field.tag=mitaka-rc-potential
18:01:42 <ayoung> morgan, do you have the PG stuff under control?
18:01:49 <stevemar> migration fails for long running deployments, and one postgres bug
18:01:51 <ayoung> postgresql
18:01:55 <morgan> ayoung: huh?
18:02:13 <morgan> ayoung: the migration 88 one needs unit tests
18:02:29 <morgan> the migration 91 one i haven't had time to explore/figure out what is going on
18:02:29 <stevemar> and the migration 091 one needs eyes
18:02:35 <ayoung> I'll take it
18:02:47 <stevemar> so, if anyone else wants to take a look, please do !
18:02:47 <shaleh> stevemar: do we not have anything like this automated?
18:02:57 <crinkle> fwiw i can't reproduce the 091 issue for the life of me
18:03:15 <stevemar> crinkle: you are using postgres for your db?
18:03:17 <morgan> 91 might be PGSQL only
18:03:18 <crinkle> stevemar: yep
18:03:22 <morgan> hm.
18:03:37 <crinkle> i am sure ayoung will have better luck
18:03:40 <morgan> this might be another "long running environment issue" =/
18:03:48 <ayoung> stevemar, link to where it is failing?
18:03:55 <ayoung> might be something in the log
18:03:55 <rodrigods> o/
18:04:00 <ayoung> and it might be a DB version
18:04:11 <stevemar> shaleh: not really, this is because the deployment was using havana and kept upgrading... his db didn't have some constraints apparently
18:04:17 <ayoung> morgan, We would see that error if the migration had already run
18:04:30 <rderose> i wonder if the issue is with this: use_labels=True
18:04:38 <rderose> in the select
18:04:41 <henrynash> (missed the start - this is the role name issue, I assume?)
18:04:54 <ayoung> the way the migration works is it makes a join, and the cols have names like user_password and local_user_password, but at the end, it drops the password column
18:04:56 <shaleh> stevemar: sounds like something we should have floating though. Install the release, put the vms to sleep, wake them up near the end for a round of testing.
18:04:56 <morgan> henrynash: no the password one
18:05:02 <henrynash> oohhhh
18:05:06 <samueldmq> hi, here now
18:05:08 <morgan> henrynash: the role_name one is mostly under control [just needs unit tests]
18:05:16 <stevemar> henrynash: there are 2 bugs: https://bugs.launchpad.net/keystone/+bug/1562934 and https://bugs.launchpad.net/keystone/+bug/1562965
18:05:16 <openstack> Launchpad bug 1562934 in OpenStack Identity (keystone) newton "liberty -> mitaka db migrate fails when constraint names are inconsistent" [Critical,In progress] - Assigned to Morgan Fainberg (mdrnstm)
18:05:16 <shaleh> \o
18:05:16 <ayoung> stevemar, I would have preferred we had done the drop in a separate migration, in the future
18:05:17 <openstack> Launchpad bug 1562965 in OpenStack Identity (keystone) " liberty -> mitaka db migrate fails on postgresql 091 migration" [Undecided,New] - Assigned to Adam Young (ayoung)
18:05:19 <henrynash> morgan: ok
18:05:41 <stevemar> one is 086, the other is 091
18:05:48 <morgan> stevemar: 088
18:05:51 <stevemar> oops, 088*
18:06:05 <stevemar> one has a fix, the other not so much
18:06:41 <ayoung> stevemar, let me get a link to the failure
18:06:44 <stevemar> crinkle: the originator is prometheanfire, he's been hanging out in the -keystone channel and has been very helpful in testing out patches
18:07:20 <ayoung> stevemar,  I think that is spurious
18:07:30 <ayoung> stevemar, are we seeing the failure in a gatejob?
18:08:03 <morgan> ayoung: neither of these show in the gate
18:08:08 <ayoung> morgan, it's not an error
18:08:14 <ayoung> morgan, he was running these by hand
18:08:19 <stevemar> ayoung: nope, not in the gates
18:08:23 <ayoung> the migration is not designed to run multiple times
18:08:33 <ayoung> I discussed with him last night
18:08:33 <morgan> ayoung: oh 91 was a multiple run issue?
18:08:36 <ayoung> yep
18:08:37 <morgan> i know 88 wasn't.
18:08:43 <ayoung> morgan, that may be
18:08:45 <morgan> ok lets close 91 as "invalid"
18:08:49 <ayoung> will do
18:08:49 <morgan> if that is the case.
18:08:55 <ayoung> I'll paste in the chat fragment
18:09:08 <morgan> yeah, migrations are not always idempotent
18:09:18 <stevemar> easiest way to close bugs -- mark them as invalid
18:09:42 <samueldmq> I thought we had a job running with other dbs e.g postgress
18:10:01 <stevemar> samueldmq: we do, but the originator has had the same db since havana
18:10:06 <samueldmq> should be good to have with the dbs we are supposed to support
18:10:08 <morgan> samueldmq: we do. the issue with 88 isn't related to pgsql, it just happens that the first person to upgrade was on pgsql
18:10:34 <stevemar> he has all sorts of constraints that we may not look for now, since we squashed migrations
18:10:42 <morgan> the lesson learned here is "do not assume the constraint/index names are known, always programmatically find them"
18:10:49 <stevemar> yep
18:10:53 <samueldmq> morgan: ++
18:11:03 <henrynash> morgan: so that is a real change…we should add this to development.rst
18:11:07 <samueldmq> is he trying to upgrade havana -> liberty?
18:11:17 <shaleh> henrynash: I was thinking the same thing
18:11:26 <stevemar> samueldmq: no, every release he does the db migration
18:11:32 <morgan> henrynash: i think we should prob. add to developing.rst - it's not really a "change" it's something we kindof ignored in the past
18:11:37 <stevemar> he's been running the same deployment since havana :)
18:11:46 <henrynash> morgan: agreed
18:11:58 <ayoung> DO we have an upgrade path from Havana?
18:12:01 <morgan> and if he hits it with a havana deploy... this bug is likely to hit a significant number of deploys
18:12:08 <ayoung> Anyway, he had 8 users in his database...I think he's going to be OK
18:12:10 <samueldmq> stevemar: oh, that's good he's been running since there
18:12:14 <henrynash> ayoung: not directly
18:12:27 <morgan> ayoung: it's not direct - it's "if you deployed havana and updated through current"
18:12:38 <morgan> havana -> mitaka directly is not supported
18:12:38 <stevemar> ayoung: he upgrades his db every 6 months... but his initial db is from havana
18:12:43 <ayoung> ah
18:13:01 <morgan> so anyone who hasn't done a wipe/redeploy from scratch since havana would be hit by this bug
18:13:06 <ayoung> ok...anyway, I think he got this error trying to debug the other, and doing so by hand
18:13:09 <morgan> regardless of the db (except HP cause mongo)
18:13:12 <samueldmq> stevemar: so some constraints have the old names ?
18:13:24 <stevemar> samueldmq: likely
18:13:36 <samueldmq> stevemar: I didn't quite catch the issue ... maybe we made a mistake in our migrations ?
18:13:38 <stevemar> samueldmq: maybe the constraints were renamed in the squash, who knows
18:13:49 <samueldmq> stevemar: like we intended to change the constraint but didn't ..
18:13:57 <ayoung> we good?
18:13:59 <morgan> stevemar: the constraints were named differently depending on if they were automatically named or explicitly named
18:14:05 <samueldmq> stevemar: yes, squash is something very very hard to do
18:14:07 <bknudson> it's probably sqlalchemy changed how it assigns names.
18:14:13 <samueldmq> stevemar: do we really need to do it ?
18:14:18 <morgan> stevemar: and that automatic name is different based on sql-alchemy, mysql versions, pgsql versions, etc
18:14:30 <samueldmq> bknudson: maybe, good point
18:14:33 <henrynash> …and the squash removes the old code where the constraint was created by hand
18:14:33 <stevemar> samueldmq: ^
18:14:51 <morgan> samueldmq: the squash is to make upgrading / maintaining migrations with oslo.db updates easier.
18:15:15 <morgan> basically we had a lot of cruft that did things the "old" way and likely this bug would have still occurred as we fixed migrations to be more modern
18:15:30 <morgan> since the code would have had to be significantly refactored anyway
18:15:47 <bknudson> the reason nova started squashing was because their unit tests were taking forever.
18:15:57 <morgan> bknudson: that too.
18:16:05 <samueldmq> maybe we should put explicit names on everything
18:16:13 <samueldmq> not even trust sqlalchemy naming
18:16:13 <morgan> samueldmq: we do now.
18:16:31 <stevemar> yep
18:16:36 <morgan> don't expect a name to be known still
18:16:41 <morgan> programmatically find it
18:17:17 <morgan> it prevents typos in names from impacting your migrations/wedging things/etc
18:17:18 <samueldmq> morgan: makes sense
18:17:35 <stevemar> so, please review: https://review.openstack.org/#/c/298402/ and if someone wants to write unit tests, go ahead :)
18:17:57 <stevemar> i'm slammed in meetings all afternoon, but if no one picks it up, i may
18:18:04 <morgan> if no one writes unit tests, i'll plan to do it tomorrow.
18:18:06 <morgan> or tonight
18:18:11 <stevemar> i think morgan is busy too
18:18:18 <morgan> i just can't commit to doing it until then
18:18:25 <stevemar> yeah, understandable
18:18:32 <samueldmq> I can do it
18:18:38 <stevemar> okie dokie, i think we beat this horse enough
18:18:52 <stevemar> #topic cleaning up identity.core
18:18:55 <stevemar> rderose: ^
18:18:57 <stevemar> you're up!
18:19:02 <rderose> cool
18:19:04 <rderose> In working on shadow users, one thing that I noticed was that the backends for identity were referencing code in the core.
18:19:12 <rderose> For example, sql and ldap backends were calling the filter_user() method in the core.py
18:19:20 <rderose> To me, this is problematic for a couple of reasons.
18:19:33 <rderose> Separation of concerns.  The core needs to know about the backend interface in order to dynamically load the plugins, but the backends shouldn't know anything about the core or other higher level modules.
18:19:49 <rderose> The core (manager) is concerned with business logic, while the backends (drivers) are concerned with saving/retrieving data from the backend.
18:20:11 <bknudson> other programming languages don't try to put a bunch of classes in one file.
18:20:12 <rderose> The second issue is circular dependency. Backend code could reference code in the core, as well as other higher level modules inside identity, and those methods could call methods in the backend; thus creating a circular dependency.
18:20:39 <henrynash> rderose: is it just that the manager is being used as a useful place to have shared code?
18:20:39 <rderose> My recommendation is to move the interface code out of core.py and into a separate module under backend, e.g.
18:20:40 <ayoung> that is a Newton thing
18:20:46 <samueldmq> henrynash: ++
18:20:47 <ayoung> right?
18:20:48 <rderose> yeah
18:20:49 <shaleh> bknudson: if by "other" you mean Java. Sure :-) but most allow it.
18:21:02 <rderose> - backend/interface.py or backend/base.py
18:21:17 <ayoung> so...core is supposed to be an interface.
18:21:18 <rderose> That way both the core and backends only depend on the abstraction.  The design would be more maintainable, less tightly coupled.
18:21:36 <ayoung> the circular deps are an artefact that we put the interface into the core file
18:21:49 <morgan> redrobot: so the manager should be the only one that should ever call -> driver (core)
18:21:54 <rderose> henrynash: it creates a dependency from the backend to the core
18:21:56 <morgan> not redrobot, rderose
18:22:11 <ayoung> a driver should be able to call another driver, and cycles should not be an issue.  We could split the driver interface out of core
18:22:13 <henrynash> rderose: sure, understand the issue
18:22:19 <shaleh> seems reasonable. What is the negative?
18:22:23 <morgan> ayoung: no. a driver should never call another driver
18:22:25 <rderose> ayoung ++
18:22:25 <morgan> ayoung: ever
18:22:32 <stevemar> i always found it a bit weird that the manager and the interface were in the same spot
18:22:40 <morgan> ayoung: a driver should be allowed to call a manager method
18:22:50 <rderose> morgan: agreed
18:22:51 <bknudson> how can a driver call another driver?
18:23:00 <ayoung> morgan, so we have places where, when we delete a user, we clean up role assignments.
18:23:05 <ayoung> That stuff should be in the manager
18:23:16 <ayoung> not sure why we still have vestiges
18:23:17 <morgan> ayoung: manager -> manager is typically where we have had it
18:23:18 <rderose> morgan: but a driver shouldn't call a manager method (circular dependency)
18:23:23 <knikolla> why would a driver need to call another driver?
18:23:25 <henrynash> rderose: if you agree that a driver can call a manager method….isn’t that creating the dependency you are concerned about?
18:23:32 <morgan> we've done a lot of cleanup on that front
18:23:34 <amakarov> morgan, ++ What if we want one backend be SQL and the other - KVS?
18:23:46 <rderose> henrynash: didn't agree with that part
18:23:53 <henrynash> rderose: ok, thought not!
18:23:57 <morgan> actually... where are we having a driver -> manager call?
18:24:11 <rderose> amakarov: both sql and kvs will depend on the abstraction; not the core
18:24:14 <morgan> because iirc i did a ton of work to make sure all cross-manager calls were in the manager layer
18:24:24 <amakarov> morgan, I've tried to find an example of it myself without success
18:24:26 <ayoung> rderose, I'll give you a code review on that one... let's leave it at that
18:24:28 <samueldmq> drivers should not be able to call managers
18:24:33 <samueldmq> they don't perform business logic
18:24:34 <rderose> sql and ldap both call filter_user() in the core
18:24:39 <morgan> samueldmq: yeah sorry, you are right
18:24:44 <morgan> rderose: that should be moved then
18:24:47 <bknudson> the backends shouldn't have been calling filter_user anyways
18:24:49 <morgan> filter_user shouldn't be on the manager
18:24:53 <bknudson> it should be the manager that handles filtering user
18:24:54 <morgan> if the driver needs to call it
18:24:59 <ayoung> I don't want identity/backends/common
18:25:22 <rderose> ayoung: why not, it's only common to the backend
18:25:27 <morgan> if it is business logic, it should be in the manager layer
18:25:30 <bknudson> the drivers don't have a way to call the manager
18:25:31 <morgan> bknudson: ++
18:25:31 <ayoung> rderose, there should be nothing common to the drivers
18:25:32 <stevemar> yeah, not big on the common file
18:25:39 <ayoung> all the common code should be in manager
18:25:49 <ayoung> if there is common code between LDAP and SQL we are doing it wrong
18:25:58 <bknudson> you'd have to pass the manager into the driver in order to be able to call methods on it
18:26:01 <morgan> ayoung: or they are functions that exist elsewhere
18:26:11 <henrynash> bknudson: ++
18:26:12 <morgan> ayoung: because they are truly more generic.
18:26:14 <rderose> ayoung: drivers shouldn't call manager methods.  we don't need a dependency between driver and manager
18:26:26 <morgan> rderose: right, so make the manager do the filter
18:26:27 <ayoung> OK, this one is on my radar. rderose we can work through this.  Want to move on?
18:26:33 <rderose> driver only needs to depend on the abstraction
18:26:34 <morgan> rderose: i think is the simple solution
18:26:55 <morgan> or, .filter_user is a base method on the driver parent class
18:27:05 <morgan> and the drivers are allowed to call self.filter_user
18:27:28 <ayoung> this is too big for a meeting.
18:27:31 <knikolla> morgan, that would be my suggestion. filter_user should be a base method
18:27:35 <morgan> rderose: you are 100% right, drivers don't call managers
18:27:38 <morgan> managers call drivers
18:27:40 <rderose> morgan: okay, but still would like to remove the backends interface out of core
18:27:43 <stevemar> rderose: you feel like you've god enough to move forward?
18:27:51 <stevemar> got*
18:27:53 <rderose> stevemar: yeah
18:28:01 <stevemar> rderose: cool :)
18:28:06 <morgan> rderose: shuffling location of driver interface is 100% fine (just leave a stub with deprecation warning for a cycle)
18:28:13 <rderose> have a patch out there, please send me your feedback
18:28:16 <rderose> thanks
18:28:17 <stevemar> anyone interested, please comment on  https://review.openstack.org/#/c/296140
18:28:19 <bknudson> moving the backends interface out of core makes sense just to clean up the dependencies.
18:28:32 <rderose> bknudson: sounds good
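The layering agreed on above (managers call drivers, drivers never call managers, and the driver interface lives in its own module so both sides depend only on the abstraction) can be sketched like this. All class names here are illustrative, not keystone's actual classes:

```python
import abc

class IdentityDriverBase(metaclass=abc.ABCMeta):
    """Abstract interface (e.g. in backends/base.py) that SQL and LDAP
    drivers would implement; it knows nothing about the manager."""

    @abc.abstractmethod
    def get_user(self, user_id):
        """Return the raw user dict from the backend store."""

class FakeDriver(IdentityDriverBase):
    def get_user(self, user_id):
        return {"id": user_id, "name": "alice", "password": "s3cret"}

class IdentityManager:
    """The manager calls down into the driver; the driver never calls up."""

    def __init__(self, driver):
        self.driver = driver

    def get_user(self, user_id):
        # Business logic such as filter_user belongs at this layer,
        # not in the driver, avoiding the circular dependency.
        user = dict(self.driver.get_user(user_id))
        user.pop("password", None)
        return user

user = IdentityManager(FakeDriver()).get_user("u1")
```

The key property is that the import graph is one-way: manager imports the base module and the driver, the driver imports only the base module.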
18:28:45 <stevemar> next topic
18:28:47 <morgan> rderose: suggest: <subsystem>.base or .driver_base
18:28:52 <stevemar> #topic summit topics
18:28:53 <ayoung> rderose, don't go crazy on this
18:29:00 <ayoung> samueldmq, can you publish your POC for Policy from last summer?
18:29:00 <rderose> ayoung: okay :)
18:29:11 <stevemar> i see i forgot to add the link
18:29:12 <stevemar> #link https://etherpad.openstack.org/p/keystone-newton-summit-brainstorm
18:29:17 <ayoung> I didn't see it in a review
18:29:17 <samueldmq> ayoung: well, I am not sure I still have things
18:29:23 <ayoung> samueldmq, yes you do...
18:29:30 <samueldmq> ayoung: most patches are abandoned, but I can try to retrieve
18:29:39 <breton> i have an alternative suggestion for dynamic policy
18:29:42 <samueldmq> ayoung: what do you want specifically ? the code ?
18:29:43 <ayoung> samueldmq, I didn't see the abandoned ones
18:29:46 <ayoung> samueldmq, yes
18:29:46 <breton> lets use Apache Fortress
18:29:52 <ayoung> the POC for the middleware
18:30:04 <morgan> breton: worth discussing at the summit for sure
18:30:15 <ayoung> breton, you don't realize the amount of inertia in OpenStack
18:30:18 <breton> i know that we already had a chat about it, but there was a chat with Fortress author and a company that backs Fortress and they might be willing to implement things required for keystone
18:30:38 <breton> like group-based authorization (for federated use case)
18:30:39 <samueldmq> so we kill keystone things other than authn ? and use apache fortress?
18:30:45 <ayoung> breton, I'm not going to hold my breath
18:30:53 <ayoung> nope
18:31:06 <ayoung> Fortress is a completely different topic.
18:31:09 <breton> i won't be at the summit, but there will be a person whom i will delegate the task of explaining how things work, with a POC
18:31:15 <jamielennox> the fortress author was in atlanta pushing this, but things have moved significantly since then
18:31:21 <ayoung> Right now, we need to figure out how to sync policy files
18:31:43 <ayoung> Fortress will just lead to nothing being done.  Nothing against the technology
18:31:48 <ayoung> the problems are on the OS side
18:32:19 <ayoung> It's a complete mismatch... let's not go that way
18:32:36 <amakarov> ayoung, why not?
18:32:38 <ayoung> believe me, if we could build a system around LDAP, I would have done so
18:32:47 <stevemar> i think we're better off making authorization pluggable, so people could use fortress if they want
18:32:52 <amakarov> breton, we need a spec :)
18:32:52 <stevemar> but we don't have to enforce it
18:32:53 <ayoung> stevemar, ok, stop
18:33:01 <ayoung> we have a very real topic to discuss
18:33:01 <breton> there is no problem of syncing policy files in fortress ;)
18:33:04 <ayoung> and it is not Fortress
18:33:08 <ayoung> or any other rewrite
18:33:25 <stevemar> don't leave us waiting in suspense?
18:33:36 <ayoung> stevemar, look at the etherpad from the agenda
18:33:43 <ayoung> we have two approaches we can take right now
18:33:49 <ayoung> 1 use Puppet etc to manage policy
18:33:54 <ayoung> 2 distribute via Keystone
18:34:02 <ayoung> I kinda like 2,
18:34:09 <ayoung> and we did already approve the spec
18:34:17 <breton> there is also #3
18:34:19 * breton ducks
18:34:19 <jamielennox> I kinda like 1
18:34:42 <morgan> ayoung: i am in the CMS manages policy files camp
18:34:45 <morgan> jamielennox: ++
18:35:06 <ayoung> so...I can go with either...
18:35:19 <ayoung> both have pros and cons which I tried to lay out
18:35:21 <jamielennox> and i know i got overridden on this previously, but the old dynamic policy spec assumed there would be a lot of continuous updates, which doesn't seem to be the case anymore
18:35:25 <ayoung> so,  if we go puppet, here is the deal
18:35:30 <morgan> jamielennox: ++
18:35:34 <samueldmq> ayoung: if 1, what do we do ? make our policy API usable and advise people to use it ,
18:35:36 <samueldmq> ?
18:35:43 <jamielennox> so if you're updating something infrequently i'd  rather not have hundreds of api nodes polling into keystone
18:35:45 <ayoung> if we make a change to a policy file, we have a little bit of a sync workflow to clear up
18:35:50 <ayoung> samueldmq, that is one choice, yes
18:35:58 <shaleh> samueldmq: +++++
18:36:03 <samueldmq> :)
18:36:08 <ayoung> the reason to stick with the policy API even if we update via Puppet is for queryability
18:36:09 <jamielennox> people have the tooling for this; updating the file and bouncing the appropriate service is really easy
18:36:27 <jamielennox> bonus points if you improve oslo policy to reload the file if it notices the change
18:36:29 <ayoung> jamielennox, that was my feeling, which is why I backed off dynamic
18:36:34 <morgan> jamielennox: yep.
18:36:35 <shaleh> ayoung: true. It would be nice to know what a user/role/etc could do currently
18:36:36 <stevemar> jamielennox: it already does
18:36:38 <ayoung> so, what would the workflow look like
18:36:50 <jamielennox> stevemar: awesome
18:36:52 <ayoung> right now, do we expect people to have an external policy repo?
18:36:55 <morgan> jamielennox: oslo.policy doing a mtime check and reload isn't bad.
18:36:58 <ayoung> somewhere that Puppet syncs out of?
18:37:08 * ayoung using puppet as shorthand.  means all the deployment tools
18:37:17 <morgan> ayoung: i think most people template in the CMS
18:37:18 <jamielennox> morgan: was thinking inotify/fnotify/whatever the current thing is
18:37:22 <morgan> jamielennox: same thing
18:37:29 <morgan> jamielennox: implementation detail
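The mtime-check-and-reload idea floated above can be sketched as below. This is an illustrative toy, not oslo.policy's actual implementation (oslo.policy's Enforcer has its own file-handling logic): reread the policy file only when its modification time has changed since the last load.

```python
import json
import os
import tempfile

class PolicyFile:
    """Toy policy loader that rereads its file only when the mtime changes."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self.rules = {}
        self.reload_if_changed()

    def reload_if_changed(self):
        """Reload rules if the file's mtime differs; return True on reload."""
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:
            with open(self.path) as f:
                self.rules = json.load(f)
            self._mtime = mtime
            return True
        return False

# Demo: write an initial policy.json, then overwrite it and force a
# distinct mtime so the check is deterministic on coarse filesystems.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"identity:get_user": "role:admin"}, f)
    path = f.name

policy = PolicyFile(path)
with open(path, "w") as f:
    json.dump({"identity:get_user": "role:reader"}, f)
os.utime(path, (0, 0))

changed = policy.reload_if_changed()
os.unlink(path)
```

An inotify-based approach (as jamielennox suggests) would avoid polling entirely; as morgan notes, that is an implementation detail of the same behavior.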
18:37:39 <shaleh> ayoung: where it comes from is irrelevant. Being able to ask what the current policy allows would be nice.
18:37:41 <ayoung> morgan, right now, in RDO I have a blank slate.  They only use the RPM based policy defaults.
18:37:52 <ayoung> shaleh, it is not irrelevant.  It is part of the workflow
18:37:57 <ayoung> shaleh, because we have a repo
18:38:13 <ayoung> and pulling policy out of the Keystone policy API is a viable approach, but does it work with Puppet?
18:38:15 <morgan> ayoung: ok i think what shaleh is saying is it doesn't matter what is picked "central repo, templated, etc"
18:38:24 <morgan> ayoung: the workflow could be anything at this point
18:38:26 <samueldmq> why not just write out a tool that runs on the policy files and is able to list user capabilities
18:38:28 <samueldmq> etc
18:38:30 <samueldmq> ?
18:38:36 <henrynash> I think the issue is that while we can write some tools that validate policy, if we are not in control of distribution we can never (easily) provide tools that set policy to some customer requirement
18:38:37 <ayoung> samueldmq, did that already
18:38:38 <shaleh> samueldmq: that is a possibility
18:38:44 <shaleh> samueldmq: but it is somewhat limiting
18:38:56 <samueldmq> we didn't even needed to force people to use keystone API with their CMS
18:39:01 <samueldmq> need*
18:39:03 <ayoung> so, we have a bit of a mapping problem, too
18:39:10 <ayoung> the Keystone API maps policy to endpoints
18:39:19 <jamielennox> henrynash: by we there you mean keystone and i think thats a distribution problem not a keystone problem
18:39:20 <ayoung> but a single server might implement multiple endpoints
18:39:25 <shaleh> henrynash: as long as Keystone is provided a policy file, the service side is happy. Where it comes from does not matter to keystone
18:39:36 <ayoung> and we could have a breakage where the policy file is different for endpoints on the same server
18:39:43 <ayoung> I'd like to find a way to lock this down
18:40:00 <ayoung> and then ,either puppet pulls from Keystone, or Puppet pushes to the keystone policy api
18:40:12 <ayoung> I need to give openstack-puppet some guidance here
18:40:31 <morgan> ayoung: what does the push to the keystone-api do? [what is the reason for the push] - just outline it
18:40:34 <samueldmq> ayoung: as jamielennox and shaleh said above, maybe we should just not care about this
18:40:36 <shaleh> ayoung: what exactly is your concern here?
18:40:38 <jamielennox> ayoung: i think that's out of our control
18:40:39 <crinkle> it's easier for puppet to work with files than REST but it can reasonably do either
18:40:40 <henrynash> shaleh, jamielennox: when we have 100s of fine-grained roles, some common between services, some not, some tied to endpoints, some not……I think deployers will need a set of tools very different than we can provide today
18:40:41 <ayoung> morgan, for queryability:
18:40:44 <morgan> crinkle: ++
18:40:45 <samueldmq> provide the api, and let the cms decide where to put each thing
18:40:51 <ayoung> what policy file is endpoint xyz using
18:41:01 <ayoung> Horizon needs that
18:41:06 <morgan> so lets take a step back.
18:41:10 <shaleh> ayoung: why does Horizon need it?
18:41:10 <morgan> what if... what if...
18:41:17 <samueldmq> consul?
18:41:19 <samueldmq> :-)
18:41:24 <morgan> we use the prefix bit like we have support for
18:41:25 * shaleh smacks samueldmq
18:41:39 <ayoung> shaleh, it modifies the UI based on the capabilities of the user. It uses policy to do that
18:41:45 <morgan> and we can recommend folks deploy a single policy file ?
18:41:57 <ayoung> morgan, too big a jump
18:42:00 <ayoung> tried that already
18:42:04 <ayoung> and It is a good idea
18:42:06 <shaleh> ayoung: because we fail to provide an api that lets people ask what is ALLOWED not what is in a file
18:42:08 <morgan> but we already do that today?
18:42:13 <jamielennox> morgan: yea, not sure how you slurp it together from all the services
18:42:14 <morgan> with ['compute']
18:42:28 <ayoung> morgan, there are conflicts on the internal rules
18:42:33 <ayoung> "is_admin" for example
18:42:34 <samueldmq> jamielennox: ++ that's the issue we hit when trying the unified policy thing
18:42:50 <ayoung> morgan, it is a distinct possiility, but remember, big tent
18:43:02 <ayoung> so doing this for nova and glance is do-able
18:43:06 <bknudson> on a somewhat related note there's a proposal from nova to have the default rules in their code base, and then generate their sample policy.json from the code.
18:43:10 <morgan> i think this goes back to the services defining their policy in code (default) and a single override.
18:43:14 <ayoung> bknudson, I think that is OK
18:43:14 <bknudson> similar to how config options are handled
18:43:18 <morgan> bknudson: yeah that is the nova thing.
18:43:24 <samueldmq> in some cases we can't even know what a user is able to do by looking at the policy
18:43:32 <samueldmq> like nova has some scope checks in the code
18:43:41 <ayoung> bknudson, I actually like that their approach gives a sane "here is where you find the project on the VM object"
18:43:44 <jamielennox> bknudson: i can see the advantage to that
18:43:52 <morgan> samueldmq: scope checks are weird regardless.
18:43:59 <ayoung> the scope checks are what we want to be able to tweak
18:44:11 <ayoung> er...role checks
18:44:15 <morgan> ayoung: yeah
18:44:19 <ayoung> the role checks are what we want to be able to tweak
18:44:31 <samueldmq> so if we only want to look at the roles
18:44:39 <samueldmq> it should be very easy to write a tool for that
18:44:40 <samueldmq> shouldn't it ,
18:44:42 <samueldmq> ?
18:44:48 <ayoung> so...we are agreed that the general approach is to use Puppet etc, and that we will lay down support for making that clear?
18:44:58 <morgan> ayoung: i think it's the best option
18:45:11 <morgan> ayoung: it means we can leverage tools already existing
18:45:14 <shaleh> people like policy in CMS
18:45:20 <bknudson> I think the deploy tool is the best way to distribute policy files
18:45:21 <henrynash> morgan, ayoung: as you know, I really don’t like that we are defining scope checks in code, unless they can be totally overridden by the policy file…..too restrictive
18:45:23 <stevemar> ayoung: i'm in support of helping the cms tools
18:45:24 <shaleh> what changed since last week?
18:45:25 <morgan> ayoung: and what shaleh said
18:45:26 <shaleh> etc
18:45:31 <ayoung> morgan, OK.  So, what happens if someone uploads a policy file to Keystone
18:45:41 <ayoung> should that be definitive, or is that an error?
18:45:43 <morgan> ayoung: i'd deprecate the API
18:45:49 <ayoung> should the CMS pull from Keystone?
18:45:49 <stevemar> ayoung: same thing that happens today? not much
18:45:51 <ayoung> morgan, we can't
18:45:58 <ayoung> morgan, we need the queryability still
18:45:58 <morgan> ayoung: we can.
18:46:11 <samueldmq> why not a tool ? :(
18:46:17 <ayoung> morgan, no, remember, we don't control all the tools
18:46:18 <shaleh> keystone could easily serve /etc/keystone/policy.json
18:46:21 <morgan> ayoung: what needs to query policy
18:46:25 <morgan> ayoung: lets define that
18:46:27 <morgan> clearly
18:46:29 <samueldmq> vs API, maybe a tool is better for a CMS than an API is
18:46:31 <shaleh> get /v3/policy
18:46:32 <shaleh> done
18:46:36 <morgan> shaleh: hold up
18:46:43 <ayoung> morgan, anything that needs to know a-priori what a user can do. Any user interface
18:46:45 <morgan> what things need to inspect/query policy directly
18:47:00 <ayoung> morgan, as I said, Horizon does.  Right now it holds it in place.
18:47:07 <morgan> ok lets start from horizon
18:47:07 <ayoung> Now, we could have Puppet sync to there as well
18:47:13 <bknudson> you could ask keystone what the policy is, but another instance of keystone might give a different answer anyways.
18:47:13 <shaleh> ayoung: we need to try and resolve Horizon's need
18:47:21 <henrynash> morgan: how about auditing whether a given security policy has been implemented
18:47:25 <morgan> i think if oslo.policy can provide the queryability
18:47:26 <ayoung> its just not a very friendly way , as it makes Horizon special
18:47:26 <bknudson> the cms can push the policy to horizon too
18:47:29 <shaleh> ayoung: Horizon parsing our JSON is nuts
18:47:45 <ayoung> shaleh, no it is not.  Please.
18:47:46 <morgan> it would be easy to say CMS pushes to the interface location
18:48:06 <morgan> ayoung: i was trying to make sure i understood it was horizon/ui not osc and cli tools
18:48:12 <morgan> ayoung: which is why i asked what needed to query
18:48:29 <jamielennox> shaleh: horizon should be using oslo_policy, oslo_policy can provide this
18:48:29 <ayoung> morgan, I'm also thinking 3rd party integration, and compliance
18:48:44 <shaleh> jamielennox: agreed
18:48:45 <ayoung> jamielennox, and it does
18:48:46 <morgan> if it's "server" applications, we *can* expose something via oslo_policy from disk
18:48:51 <jamielennox> ok, so not nuts
18:49:03 <morgan> or whatever backend
18:49:12 <morgan> i just am making sure we have a clear scope
18:49:14 <morgan> thats all
18:49:23 <shaleh> jamielennox: I still say Horizon parsing it is a hack. What they really want to know is "what can user X do".
18:49:30 <ayoung> ok...so is the best practice: external file repo, puppet updates, puppet uploads to Keystone?
18:49:38 <shaleh> they have to infer it from our policy which means we always break Horizon
18:49:49 <morgan> shaleh: you have a bigger issue too
18:49:52 <shaleh> there should be a better way.
18:49:52 <jamielennox> shaleh: everyone has to infer that from policy
18:49:55 <morgan> shaleh: some endpoints are different than others
18:50:03 <morgan> shaleh: and could be considered "compute" =/
18:50:05 <shaleh> morgan: understood
18:50:06 <ayoung> Understand, if Keystone is not the system of record, we have very little control over policy in the future.  We cannot autogenerate, merge, manage, etc
18:50:27 <ayoung> the implied_roles will not be expanded in the policy files, for example
18:50:41 <ayoung> we can't make common rules for multiple servers to consume
18:50:49 <ayoung> I mean, we can, just with offline tools
18:50:58 <ayoung> and maybe that is the desired approach
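The implied-roles expansion ayoung refers to (which a static policy file never sees, since keystone performs it at token time) can be sketched as a transitive closure over a role-implication map. The role names here are hypothetical examples, not keystone defaults.

```python
# Simplified sketch of implied-role expansion, the step ayoung notes a
# static policy file will not reflect. Role names are hypothetical.
def expand_roles(assigned, implications):
    """Return the assigned roles plus everything they transitively imply."""
    expanded = set(assigned)
    stack = list(assigned)
    while stack:
        role = stack.pop()
        for implied in implications.get(role, []):
            if implied not in expanded:
                expanded.add(implied)
                stack.append(implied)
    return expanded

# e.g. admin implies member, member implies reader
implications = {"admin": ["member"], "member": ["reader"]}
print(sorted(expand_roles({"admin"}, implications)))
# ['admin', 'member', 'reader']
```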
18:51:04 <morgan> ayoung: i know the answer,but is keystone the right place for that? (the policy query)?
18:51:06 <bknudson> we don't have any say in what puppet does?
18:51:15 <ayoung> bknudson, I think we have a little
18:51:24 <ayoung> bknudson, puppet-keystone listens to us
18:51:27 <henrynash> ayoung: I concur….are we really saying that keystone should not be the system of record for policy?  That should be the conceptual question
18:51:29 <ayoung> if we work with them
18:52:14 <ayoung> henrynash, what is your opinion?
18:52:21 <bknudson> puppet might not be too happy to push something that's not a part of gate testing
18:52:28 <henrynash> ayoung: I think keystone should be the system of record for policy
18:52:32 <ayoung> henrynash, me too
18:52:34 <morgan> henrynash: i think making keystone the system of record is going to be as much (if not more work)
18:53:04 <bknudson> it didn't sound like nova wanted to give up control of their policy files
18:53:05 <morgan> and will continue to lag in adoption even if we do everything right. look at introducing new things in keystone and how hard it is to garner the adoption
18:53:14 <shaleh> bknudson: agreed
18:53:14 <morgan> and other projects do not wa... what bknudson said
18:53:43 <ayoung> bknudson, I've talked it over with them.  There were two distinct issues
18:53:48 <morgan> I think the best solution is to double down on the CMS tools - it's where people (and projects) seem to like policy files.
18:53:55 <ayoung> one is that the binding of project to resource is a Nova decision
18:53:55 <henrynash> morgan: that may be true….supporting old driver versions is a SHED load of work, but we decided to do it since it was the right thing to do (I hope)….we should answer this with the correct conceptual answer…and if it’s lots of work, then we need to plan it over many releases
18:54:11 <ayoung> Ideally, that would be done by code, as policy.json does not buy anything
18:54:20 <morgan> henrynash: the issue is i think adoption would just stall permanently
18:54:23 <ayoung> the other, though, is what is the intention of the API.
18:54:28 <ayoung> Is it meant to be used by end users or not
18:55:02 <ayoung> so, while they should give good defaults, that is what the deployers should be able to customize
18:55:08 <stevemar> henrynash: i'm with morgan and bknudson on this one, adoption is so slow right now, we're better off helping the CMS tools in the short run
18:55:19 <henrynash> morgan: maybe hard, yes, but if the conceptual answer is correct, (and we see the road blocks coming down the road), then it will only be a matter of time
18:55:20 <bknudson> deployers do customize their policies
18:55:34 <ayoung> stevemar, I have people writing custom policy at customer sites.  They need "read only" roles
18:55:43 <shaleh> ayoung: ++
18:55:46 <jamielennox> ayoung: we have that spec
18:55:52 <jamielennox> which i haven't updated
18:55:55 <morgan> henrynash: i disagree. i don't think it will ever be a good adoption rate internal to openstack.
18:55:59 <stevemar> ayoung: which is... what jamielennox just said ^
18:56:08 <ayoung> jamielennox, and how are we going to distribute it. Especially if Nova hardcodes the role?  They can't
18:56:15 <bknudson> we haven't seen a lot of pushback on the cross-project policy spec. so maybe things can move in a better direction
18:56:15 <morgan> henrynash: unless we address the clear distribution story first. which CMS tools are already there.
18:56:35 <jamielennox> nova's not hardcoding the rules, AIUI they want to generate the policy file in a similar way they generate a config file
18:56:45 <jamielennox> rather than maintain it manually
18:56:46 <ayoung> morgan, so I'm fine with CMS as the distro mechanism, but that is not a complete solution
18:57:01 <stevemar> jamielennox: theres some hardcoding going on in projects
18:57:17 <jamielennox> which i think is fine because it means it doesn't matter so much if your CMS policy gets out of sync a little because at least there is a real default
18:57:21 <morgan> i think the best option is improve it, make it so distribution is clear, we have the tools to show how policy works, and supply the interface for horizon [which will still be semi-icky, but eh it'll be better]
18:57:26 <jamielennox> stevemar: right - but that's just wrong
18:57:35 <morgan> once we're there, moving to a more direct api driven system or not would be easier.
18:57:39 <ayoung> jamielennox, that is my understanding, too. So in my view, the nova policy file would be the starting point for custom policy in a deployment.
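The "generate the policy file the way we generate a config file" idea jamielennox describes can be sketched as code-registered defaults with deployer overrides layered on top. The rule names here are hypothetical; in oslo.policy this role is played by registered defaults and the sample-file generator tooling.

```python
# Sketch of "policy defaults live in code, the file is generated" as
# discussed above. Rule names are hypothetical examples; this is not the
# real oslo.policy registration API.
import json

REGISTERED_DEFAULTS = {
    "compute:start": "rule:admin_or_owner",
    "compute:stop": "rule:admin_or_owner",
    "compute:evacuate": "rule:admin_required",
}

def generate_sample(defaults, overrides=None):
    """Code-supplied defaults, with deployer overrides layered on top."""
    merged = dict(defaults)
    merged.update(overrides or {})
    return json.dumps(merged, indent=4, sort_keys=True)

# a deployer customizes only what differs from the in-code default
print(generate_sample(REGISTERED_DEFAULTS, {"compute:evacuate": "role:ops"}))
```

This is also why an out-of-sync CMS copy is less dangerous in that model: the in-code defaults remain the real fallback.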
18:57:43 <henrynash> morgan: if we just provide another CMS - I agree, few will use it, we have to show some of the advantages of keystone being the CMS….
18:57:55 <morgan> henrynash: as it stands i don't think there is one
18:58:08 <ayoung> ok..so I'll pursue Keystone<->puppet sync
18:58:26 <crinkle> ++
18:58:39 <stevemar> ayoung: you and crinkle can swap reviews
18:58:39 <shaleh> ayoung: why is it bi-directional?
18:58:42 <ayoung> I suspect that the right approach is to validate policy offline, long before you touch puppet or Keystone anyway
18:58:43 <henrynash> ayoung: is there a spec for that? I’m not quite sure what that means?
18:58:56 <ayoung> henrynash, there is an etherpad right now
18:58:59 <morgan> ayoung: i also want to push on the oslo_policy being able to do the lifting for "can you do X" more easily
18:59:00 <jamielennox> ayoung: ++ i'm in favor of way better tooling
18:59:02 <bknudson> do you think puppet would accept handling policy files themselves so they can have a read-only role?
18:59:12 <ayoung> #link https://etherpad.openstack.org/p/tripleo-policy-updates
18:59:31 <stevemar> 1 minute left
18:59:38 <jamielennox> so my concern with puppet managing this sort of thing is that policy files get out of date in the CMS
18:59:46 <ayoung> jamielennox, right
18:59:52 <ayoung> there is always that risk
19:00:05 <ayoung> anyway...I have the general answer I was looking for.
19:00:07 <jamielennox> so maybe the fastest solve for that is a .d dir or some way to layer these files that offsets that
19:00:10 <bknudson> our sample policy.json can get out of sync with the code, too.
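The ".d dir" layering jamielennox suggests can be sketched as: load the base policy file, then merge drop-in snippets in sorted filename order, with later files overriding earlier ones. The filenames and rule names here are hypothetical examples.

```python
# Sketch of the "policy.d" layering idea: base policy plus override
# snippets merged in sorted filename order, later files winning.
# Paths and rule names are hypothetical examples.
import json
import os
import tempfile

def load_layered_policy(base_file, policy_d):
    with open(base_file) as f:
        rules = json.load(f)
    if os.path.isdir(policy_d):
        for name in sorted(os.listdir(policy_d)):
            if name.endswith(".json"):
                with open(os.path.join(policy_d, name)) as f:
                    rules.update(json.load(f))  # later files override
    return rules

# demo with a throwaway directory
tmp = tempfile.mkdtemp()
base = os.path.join(tmp, "policy.json")
dropin = os.path.join(tmp, "policy.d")
os.mkdir(dropin)
with open(base, "w") as f:
    json.dump({"compute:start": "rule:admin_or_owner"}, f)
with open(os.path.join(dropin, "10-readonly.json"), "w") as f:
    json.dump({"compute:start": "role:reader"}, f)
print(load_layered_policy(base, dropin))
# {'compute:start': 'role:reader'}
```

The upside, per the discussion, is that a stale CMS-pushed snippet only shadows the rules it names, while everything else falls back to the shipped base file.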
19:00:20 <ayoung> If you have comments or proposals, update the etherpad
19:00:27 <stevemar> #endmeeting