16:00:02 #startmeeting keystone
16:00:03 Meeting started Tue Jul 17 16:00:02 2018 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 The meeting name has been set to 'keystone'
16:00:07 #link https://etherpad.openstack.org/p/keystone-weekly-meeting
16:00:10 agenda ^
16:00:15 o/
16:00:18 Hello
16:00:20 o/
16:00:23 ping ayoung, breton, cmurphy, dstanek, gagehugo, hrybacki, knikolla, lamt, lbragstad, lwanderley, kmalloc, rodrigods, samueldmq, spilla, aselius, dpar, jdennis, ruan_he, wxy, sonuk
16:00:24 o/
16:00:29 o/
16:00:30 o/
16:01:22 o/
16:01:28 we have a relatively light schedule today - so we'll give folks another minute or two to show up
16:02:51 \o
16:03:02 #topic release status
16:03:08 just a couple quick announcements
16:03:56 we have the non-client library freeze by the end of the week
16:04:00 #link https://releases.openstack.org/rocky/schedule.html#r-final-lib
16:04:21 so if there is anything we need from an oslo/ksa perspective, we'll need to get those things squared away
16:04:32 i don't think i have anything on my radar
16:04:36 also ksm
16:04:47 ++ yeah
16:04:50 there's at least one ksm change that needs attention
16:06:14 these are the open reviews
16:06:17 #link https://review.openstack.org/#/q/project:openstack/keystonemiddleware+status:open
16:07:11 cmurphy: which review were you referring to?
16:07:39 lbragstad: well i guess there's more than one :)
16:07:49 lol
16:08:03 one i'm having trouble with is https://review.openstack.org/578008 - it would be good to have other eyes on it
16:09:21 just the purpose of it?
16:09:24 or something else?
16:09:35 i'll be honest, i haven't looked at this one yet
16:10:24 well we don't need to use the meeting to look at it, just wanted to highlight it
16:10:50 i'll make a note to review it today
16:11:32 any other patches we need to get into ksa, ksm, or oslo libraries before Friday?
16:12:12 note that oslo.limit will be exempt from the freeze since it hasn't revved past 1.0 yet
16:13:19 if someone does stumble across something we need to include, just say something
16:13:38 along the same vein - requirements freeze is next week
16:14:00 so we'll need to be mindful of versions we need, if any
16:14:06 https://review.openstack.org/#/c/583215/ this one in ksa, since it's a long-standing bug, going back to s10.
16:14:34 wxy|: nice - i can review that today, too
16:15:23 lbragstad: thanks.
16:15:37 thanks for the patch
16:16:12 that's about all i had for release stuff
16:16:17 #topic keystoneauth url discovery bug
16:16:25 that bug sucks.
16:16:32 #link https://bugs.launchpad.net/keystoneauth/+bug/1733052
16:16:32 Launchpad bug 1733052 in keystoneauth "Usage of internal URL in clouds.yaml causes a 404" [High,In progress] - Assigned to wangxiyuan (wangxiyuan)
16:16:38 ^ that one you mean :)
16:16:50 look closely at the code, we need to be sure we're not going to break anything
16:16:54 i'm not entirely sure who put this on the schedule for today
16:17:13 but we need to get that landed if possible
16:17:42 i don't know who added it, but mordred, wxy|, me, and a few others have a vested interest in that bugfix landing
16:17:57 it will unbreak some folks.
16:18:19 sounds good
16:18:30 yeah. it's important
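(For context, the kind of clouds.yaml the bug title refers to looks roughly like the sketch below. The cloud name, credentials, and URL are illustrative placeholders, not taken from the bug report; the relevant part is the internal interface/URL, which triggers the 404 during version discovery:)

    clouds:
      mycloud:
        auth:
          auth_url: https://keystone.internal.example.com:5000/v3
          username: demo
          password: secret
          project_name: demo
          user_domain_name: Default
          project_domain_name: Default
        region_name: RegionOne
        interface: internal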
16:18:49 it's definitely a thing we should fix in the client library and not in the services themselves ;)
16:18:51 well, looks like we just need some reviews
16:19:28 yep
16:19:38 we should land it - because there are old clouds out there that are broken - but it's also fundamentally broken from the pov of the services
16:19:38 is there anything we want to discuss about the fix?
16:19:51 and at some point someone should actually fix all of the services
16:19:57 looking
16:20:29 nothing besides what has already been said.
16:20:38 kmalloc: I could rage a bit more if it's helpful
16:20:55 mordred: lol
16:20:59 why does it not have a test?
16:21:48 ayoung: oh, i'll add one later.
16:22:05 wxy|, I won't touch it without a test
16:22:10 tests first. Tests always
16:24:09 ok - anything more on this topic?
16:24:16 otherwise i'll open it up for discussion
16:24:56 has anyone looked at our test coverage numbers lately?
16:25:06 yes
16:25:17 we're around 92% in keystone server
16:25:21 how are we looking?
16:26:05 i would like to gate on coverage, but i'm not sure if we've tried that in the past
16:26:09 #topic open discussion
16:26:39 yeah... it would be great if a patch was immediately rejected if the lines of code changed were not covered by a test
16:27:10 does anyone know if other projects gate on test coverage?
16:27:24 i'm not aware of that being done by any of the other services
16:27:47 I don't think so
16:29:03 could we build it out of existing tools?
16:29:13 what do you mean?
16:29:25 something like: upon check-in, run test coverage. Then, from the git patch, get the new lines of code
16:29:29 one specific concern around using coverage in that way is it can legitimately drop (say you delete a bunch of unused code that was only covered by the test suite)
16:29:40 we have a coverage job defined in our tox.ini
16:29:41 and no one has built a thing to address those concerns
16:29:43 and then query the test coverage output to make sure those lines were covered?
16:30:11 clarkb, I'm looking for "all new lines are covered by some test"
16:30:16 which is a way to move forward
16:30:59 deleting code would only fail if you ended up with a partial line change, and that line was uncovered
16:31:21 sounds like a great intern project
16:32:28 Oh, BTW, I have some good news for people trying to develop on RHEL 7.5 etc
16:32:48 http://adam.younglogic.com/2018/07/running-openstack-components-on-rhel-with-software-collections/
16:33:03 which can be used to run upstream code in a supported way. ish
16:33:20 so for running the coverage code, I can use that
16:33:28 scl enable rh-python35 bash
16:33:46 tox -e cover
16:33:56 * ayoung running now
16:33:57 i am -2 on an "if coverage goes down, error" because of what clarkb said.
16:34:15 kmalloc, agreed
16:34:19 yeah - that's a good point
16:34:21 I don't want to do it on percentage
16:34:34 what about at least publishing it formally somewhere?
16:34:50 (i don't think we do that either)
16:34:54 I want to do it on "if you are adding or modifying a line of code in a patch, it needs a test."
16:34:59 and there are legitimate cases (e.g. flask) where tests could not have been fully written without it becoming a massive 3000+ line patch
16:35:09 and then tests on top of it
16:35:25 you can mock out a lot
16:35:37 ok, let me just say this is a people problem not a tech problem.
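(For reference, the tox -e cover run mentioned above is driven by something like the following [testenv:cover] block - this is the common OpenStack pattern, not necessarily keystone's exact configuration at the time:)

    [testenv:cover]
    setenv =
        PYTHON=coverage run --source keystone --parallel-mode
    commands =
        stestr run {posargs}
        coverage combine
        coverage html -d cover
        coverage xml -o cover/coverage.xml

(An environment like this leaves a browsable HTML report under cover/ and a machine-readable cover/coverage.xml, which is what a "changed lines are covered" tool would consume.)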
16:35:42 disagree
16:35:47 this is something we can automate
16:35:54 it is on us, the reviewers, to say "this needs tests" and look at the coverage report
16:35:56 and that makes it something that's not part of a code review
16:35:59 it is not automatable
16:36:08 there are too many variables
16:36:16 I tend to agree with kmalloc
16:36:21 at some point you have to review the code and make sure it's doing what it's supposed to
16:36:23 lets give it a try
16:36:47 lbragstad, agreed, but automated checks for "you must have a test" are like automated pep8
16:37:10 mocking it all out is a terrible policy just to get past a hurdle of "well you're going to just undo these once the next patch lands"
16:37:22 ok, I have an idea
16:37:26 this was first proposed at the cactus summit fwiw, and so far nobody has managed to build a thing that works well enough to be usable
16:37:27 HOWEVER
16:37:35 instead of making it a gerrit check
16:37:37 it was first proposed at the cactus summit and people have wanted it ever since
16:37:46 so I think it would be welcome if someone can figure it out
16:37:50 ayoung: if you can produce something legitimately reliable that handles the edge cases, i'll be the first to +2 it
16:37:55 ++
16:38:00 lets get a tool that at least we can run, that does a cover check + a "these lines are covered or not" check
16:38:04 and i'm fine trialing it in keystone
16:38:13 * lbragstad would settle for publishing (not gating on) coverage reports as a way to encourage people to use it as a learning tool while filling the gaps
16:38:15 but i want a well designed tool.
16:38:22 step one is showing it can be done
16:38:24 ayoung: yes. agree. a tool we can run is a GREAT step 1
16:38:35 lbragstad: it is published in the coverage run.
16:38:39 we just need to look at it
16:38:41 just like docs
16:38:46 right...
16:38:50 step 4 is automating it. Not sure about 2 and 3, but I am sure there are steps there
16:38:58 http://logs.openstack.org/58/580258/10/check/openstack-tox-cover/9140707/cover/
16:39:09 ^ example
16:39:10 in addition to that i guess it would be nice to have a badge displaying the coverage of master
16:39:58 CLI is our worst offender, followed by LDAP (my quick scan)
16:40:21 * kmalloc also very strongly believes it is a fallacy that 100% code coverage means anything. it results in the terrible thing you see in a lot of java projects with a bazillion mocks
16:40:43 ayoung: easy win - delete CLI :)
16:40:47 mordred: ++
16:40:54 mordred: didn't we already do that in keystone? ;)
16:40:59 Didn't we try that already?
16:41:06 keystoneclient has no cli
16:41:11 osc is our cli
16:41:15 this is keystone-manage
16:41:19 ah.
16:41:25 deleting that is probably not a great idea
16:41:26 keystone/cmd/cli.py
16:41:41 we should be writing tests to test behavior
16:41:56 one approach some projects have taken is to run non-voting coverage jobs in check to produce coverage reports separately from the main unittest run
16:41:56 want me to open some bugs for that?
16:42:10 if we are specifying behavior rather than "did you write a test that exercises the code"
16:42:11 this is done because python coverage impacts timing in ways that can break things and be difficult to fix
16:42:14 it is better.
16:42:40 clarkb: our coverage report is voting, at least in check.
16:42:41 iirc
16:42:41 "Mapping engine tester is untested"
16:43:16 is the current coverage job enough? if not, what is it missing?
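(A minimal sketch of the step-1 tool being asked for here, assuming a git checkout and a Cobertura-style cover/coverage.xml as produced by coverage xml; the script name, base branch, and report path are illustrative. As noted below, a naive check like this flags legitimately untestable lines, so it is an aid to reviewers, not a gate:)

    #!/usr/bin/env python
    # check_changed_coverage.py - flag lines a patch adds that the
    # coverage report says were never executed. Illustrative sketch only.
    import re
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    HUNK = re.compile(r'^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@')


    def changed_lines(base='origin/master'):
        # Map each changed .py file to the set of line numbers the
        # patch adds, by parsing zero-context unified diff hunk headers.
        diff = subprocess.check_output(
            ['git', 'diff', '-U0', base, '--', '*.py'], text=True)
        files, current = {}, None
        for line in diff.splitlines():
            if line.startswith('+++ '):
                path = line[4:]
                current = path[2:] if path.startswith('b/') else None
                if current:
                    files.setdefault(current, set())
                continue
            match = HUNK.match(line)
            if current and match:
                start = int(match.group(1))
                count = int(match.group(2) or 1)
                files[current].update(range(start, start + count))
        return files


    def coverage_data(report='cover/coverage.xml'):
        # For each measured file, collect the lines the report knows
        # about and the subset that was actually hit. Assumes the
        # report's filenames are repo-relative, like git's.
        measured, hit = {}, {}
        for cls in ET.parse(report).iter('class'):
            fname = cls.get('filename')
            m = measured.setdefault(fname, set())
            h = hit.setdefault(fname, set())
            for ln in cls.iter('line'):
                number = int(ln.get('number'))
                m.add(number)
                if int(ln.get('hits', '0')) > 0:
                    h.add(number)
        return measured, hit


    def main():
        measured, hit = coverage_data()
        failed = False
        for path, added in sorted(changed_lines().items()):
            # Only complain about lines coverage considers executable;
            # comments, docstrings, and blank lines are not in the report.
            uncovered = (added & measured.get(path, set())) - hit.get(path, set())
            for number in sorted(uncovered):
                print('%s:%d changed but not covered' % (path, number))
                failed = True
        sys.exit(1 if failed else 0)


    if __name__ == '__main__':
        main()

(Usage would be something like: tox -e cover && python check_changed_coverage.py)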
16:43:26 writing purely synthetic tests that show the code behaves as it was written is a lot less useful than "here is the expected behavior, does it do that"
16:43:38 and we tend to do a lot more of the former
16:43:54 knikolla: i don't know what is missing from the coverage report being at least initially useful.
16:45:09 i think it's initially useful - i guess i just want it to be more accessible?
16:45:47 publish it alongside docs? make the coverage report a part of the dev docs?
16:46:11 *coverage*->>docs.o.o/developer/keystone/coverage [not a real url, but an example]
16:46:32 sure
16:46:45 i'm fine with that
16:46:51 yeah - that might be a good idea
16:47:45 it will make our docs job a LOT slower
16:47:59 so, when submitting a patch, run coverage, and make sure your new code is covered. When reviewing a patch, do the same.
16:47:59 or we would need a docs-coverage publisher
16:48:05 it only has to run unit tests?
16:48:12 yeah, it has to run all the unit tests
16:48:16 docs does not have to do that today
16:48:48 ayoung: but ftr we already do that on every patch: example of one I am working on http://logs.openstack.org/24/582724/1/check/openstack-tox-cover/7263f00/cover/
16:49:09 the report is already run in check [and if the job fails, the patch gets a -1]
16:49:10 from zuul
16:49:33 ayoung: right now, we just need to, as reviewers, look at that result :)
16:49:39 it does not check that every line is covered, or these would never get through
16:49:49 that is the job of reviewers
16:49:50 right
16:49:58 kmalloc: ++
16:50:03 100% test coverage does not mean anything useful
16:50:06 never has
16:50:32 because you then have 1000000000 mocks. look at the bad pattern in a lot of bigger java (historically) projects
16:50:49 does this test the bits of code you'd expect: yes/no, that is a human question
16:51:05 that gets into design issues. All I am saying is that changed lines of code need tests.
16:51:25 I'll see if I can automate that
16:51:38 i have to step out of this convo, i am so opposed to this line of thinking as an automatable task...
16:52:12 the only proposal I recommend at this point is: Reviewers, look at the coverage report please, and add it to the factors you use to review code
16:52:13 anything else we want to cover with this topic?
16:52:32 kmalloc: ++
16:53:03 it's my failure as a reviewer if i let something in that isn't tested, or whose tests don't cover edge cases or behaviors
16:53:13 i wouldn't trust the automation anyway, because edge cases
16:54:29 "Never send a man to do a machines job."
16:55:17 "Never send a Human to do a machine's job." is the proper quote
16:55:46 so, seeing the lines that changed in a patch looks like it is scriptable...
16:55:55 and then pull those line numbers out of cover:
16:56:56 for example, in cover/keystone_server_flask_common_py.html
16:57:05 ayoung: i can point to concrete examples of why automation is very hard here: any abstract base class that does NotImplementedError() is NOT tested
16:57:21 kmalloc, as I said, first step is tooling
16:57:23 ayoung: the wsgi framework loaders don't get captured in unit tests and can't be
16:57:28 because of how they load
16:57:32 a machine's job here would be aiding the reviewer in making a decision. the binary tested-or-not doesn't help me much. what would help me would be a "here's where this changed line is tested from".
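(To make the abstract-base-class point concrete - a hypothetical driver interface, not keystone's actual one, showing a line that coverage gating would flag even though testing it proves nothing:)

    import abc


    class DriverBase(abc.ABC):
        """Hypothetical driver interface, for illustration only."""

        @abc.abstractmethod
        def get_user(self, user_id):
            """Required: every backend must implement this."""

        def get_user_by_external_id(self, external_id):
            # Optional hook that most backends never override. A naive
            # "every changed line must be covered" gate would demand a
            # test that calls this just to watch it raise. coverage.py
            # can be told to skip the line instead:
            raise NotImplementedError()  # pragma: no cover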
16:57:39 this is why functional tests, against a real keystone, are useful
16:57:42 and behavior
16:57:49 three minute check
16:58:36 i'll go on record and say i'm a hard -2 on adding jobs that do coverage-level checks until we have clear examples of it working without forcing us to mock things for the sake of "did we test this"
16:59:09 knikolla: ++ something better, like "we tested with testcases X and Y and Z", would be good
16:59:49 but that is also *hard*
17:00:15 alright - wrapping this up since we're out of time
17:00:21 thanks for coming, all!
17:00:25 #endmeeting