17:05:19 #startmeeting
17:05:21 Meeting started Wed Jan 11 17:05:19 2012 UTC. The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:05:22 Useful Commands: #action #agreed #help #info #idea #link #topic.
17:06:06 OK, so there are two separate topics I'd like to discuss first:
17:06:13 nayna is in another meeting
17:06:17 1) Test case style
17:06:36 2) Using code from novaclient for basic REST client
17:06:49 both topics based on email from AntoniHP today...
17:07:50 #topic Test case style
17:08:11 so, has everyone seen AntoniHP's email from this morning?
17:08:54 here is the link: https://lists.launchpad.net/openstack-qa-team/msg00023.html
17:09:11 there are a number of points AntoniHP raises, and I think we should discuss them here now in order. AntoniHP, ok with you?
17:09:16 I have not seen it. Maybe it was not sent to the group email
17:09:36 It went to the group email list, which seems a bit flaky for some reason
17:09:44 [Openstack-qa-team] Implementing tests ?
17:10:05 Ravikumar_hp: yeah, I did not get notified of it either... see link above.
17:10:10 #link https://lists.launchpad.net/openstack-qa-team/msg00023.html
17:10:14 ok
17:10:49 I think yes, we can discuss it here, or if more time is needed discuss it at the end of the meeting?
17:10:52 AntoniHP: so, let's start with 1) Dependability of test cases on each other
17:12:04 AntoniHP has a good point - there is a nose test driver that alleviates some of that concern however: http://pypi.python.org/pypi/proboscis/1.0
17:12:08 #link http://pypi.python.org/pypi/proboscis/1.0
17:12:23 heckj: there is no need for a test driver...
17:12:38 heckj: nose already supports skipping based on conditions just fine
17:12:46 heckj, AntoniHP: as an example, see https://github.com/openstack/tempest/blob/master/tempest/tests/test_list_servers.py
17:12:53 it is an addition to nose - an extension, not a test driver, that allows specification of dependencies between tests
17:13:14 I think we already discussed dependability: the class way and the method way
17:13:22 Is this a different topic?
17:13:24 I think the core issue is not whether we can have dependencies between tests, but whether we should
17:13:31 jaypipes: please take a look at it before you just dismiss it. I've been using it with some success to resolve some dependency issues between tests
17:13:43 heckj: I have looked at it. :)
17:14:17 heckj: This module looks cool.
17:14:44 dwalleck - is that the concern? If so, apologies for the random link. It wasn't clear from AntoniHP's email
17:14:49 I think the better question is "what is the problem we are trying to solve by having test dependencies"
17:15:02 so, basically, I'd like to know from AntoniHP what about nosetests does not allow for 1) to be taken care of..
17:15:29 heckj: No, but the "should we" question is the one we need to answer here as a group
17:15:38 right
17:15:46 it does, within nosetests the tests can easily depend on each other
17:16:41 I think we have some philosophical differences on test design, so the goal is to find a solution that will either directly address everyone's concerns, or allow people to use Tempest in different ways to ease those concerns
17:16:50 Dependencies can occur naturally when we want to reuse test code or existing resources.
17:17:16 it can be implemented in different ways: within nose, without nose, with an extra driver, etc.
17:17:17 So I think this is a matter of code style: "Class vs Method"
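For reference, here is a minimal sketch (not from the meeting or from Tempest) of the condition-based skipping jaypipes mentions above; the class, fixture, and test names are invented, and it assumes a Python 2.7-era unittest/nose setup:

import unittest


class ServerActionsSkipSketch(unittest.TestCase):
    """Hypothetical example: dependent tests skip when a shared resource is missing."""

    server = None

    @classmethod
    def setUpClass(cls):
        # A real suite would build a server via the Tempest client here;
        # this sketch just pretends the build failed.
        cls.server = None

    def setUp(self):
        if self.server is None:
            # No per-method dependency tracking needed: every test that
            # requires the shared server is skipped with a clear reason.
            raise unittest.SkipTest("shared server was not created")

    def test_reboot_server(self):
        self.assertIsNotNone(self.server)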
"Class vs Method" 17:17:28 So if we want to reuse existing resources, wouldn't it be easier to have an external library/process handling that? 17:18:02 but as nati says this is about style, what nose test case should NOT be equal to test case 17:18:13 It seems like that would be a more robust solution, and addresses the concern of execution time and reuse of resources 17:19:07 hmm. I suppose we are doing same discussion. I suppose both of Class style and Method style has merit and demerit. 17:19:11 AntoniHP: I guess I'm failing to see how the 001_, 002_ test method examples in your email would be of benefit over something like this: http://pastebin.com/2pdV34Ph 17:19:48 Then I think our next action is discuss with actual code example. 17:20:03 then vote it? 17:20:07 But if reuse of resources isn't the core problem, help me understand where the desire for test dependencies comes from 17:20:07 if there is some code between asserts and first assert fails then code is not executed 17:20:26 nati: Actually, I think the easier idea would be for someone to submit a patch to Tempest 17:20:49 That way it can follow the traditional code acceptance path, and make it easily visible 17:20:56 AntoniHP: but why would you have separate test methods for things that are dependent on each other to be done in an order? 17:20:58 dwalleck: Yes. I suppose reuse is core problem and right way to solve it is use libs. 17:21:34 AntoniHP: you can't parallelize them, and so you only add fragility to the test case because now dependencies between test methods must be tracked. 17:22:02 AntoniHP: the only advantage that your approach gives is more verbose output on successful test methods, AFAICT 17:22:07 nati: I'm working on that solution as part of my next sprint. There's varying levels of complexity to how it could be implemented, but it will be done in some form or fashion 17:22:26 jaypipes: they can be paralellized as classes 17:22:28 AntoniHP: and I'm confused why anyone would care about successful methods -- I only care about stuff that errors or fails? 17:22:46 jaypipes: ++ 17:23:00 jaypipes +1 17:23:16 AntoniHP: but in the case where you put a series of dependent actions in a single test method, the methods of a class can be run in parallel even with a shared resource... 17:23:18 i care sucess or failed . 17:23:19 jaypipes: it provides context to results 17:23:37 AntoniHP: how so? 17:23:46 AntoniHP: could you provide example? 17:23:51 i want to fail dependent tests also instead of skiipinf 17:23:56 AntoniHP: most of the asserts are dependent on the previous assert to have passed so would it be useful to run the second assertion if the first one failed? 17:23:58 skiiping 17:24:14 I think by using class way. The test log looks more easy to read without adding logging code. 17:24:25 #sorry typo 17:24:40 The merit of Class way is the log looks more easy to read without adding logging code. 17:24:40 jaypipes: sometimes yes, and sometimes no - this also provides easy entry point for automated handling of errors 17:24:57 AntonioHP: So what if, regardless of test design practice, you could see the results of all assertions in the results. Is that the goal you're trying to reach? 17:25:08 vandana: yes, in my example a failed response to API call could still create new object 17:25:20 for reporting purpose - Success , Failed ... 
17:25:42 dwalleck: yes, that is why I'm totally not insistent on using this way - I proposed different solutions
17:26:03 AntoniHP: sorry, perhaps this is just lost in translation :) could you provide some example output that shows the benefit for automated handling of errors?
17:26:49 create object call -> verify response from call -> verify that object exists
17:27:26 Ravikumar_hp: our point is that if you need to "skip" a dependent set of actions based on an early bailout or failure, the dependent set of actions should be in the same test case method...
17:28:01 AntoniHP: but if those calls were in the same test method, the assert message would indicate which step failed...
17:28:19 but won't there be a lot of overhead in figuring out these dependent assertions
17:29:00 so result .F. would point to problems with API, FSS network connectivity, and ..F to the nova scheduler not working
17:29:33 and then .FF would be different from .F.
17:29:35 AntoniHP: but so would a single F, with the error output message indicating the step that failed...
17:29:49 jaypipes: Right, like you did with images based on what's in the system. It makes sense for the test suite to be aware of its surroundings and resources
17:30:27 So right now I get failures like this....
17:30:45 ======================================================================
17:30:45 ERROR: The server should be power cycled
17:30:45 ----------------------------------------------------------------------
17:30:47 Traceback (most recent call last):
17:30:49 File "/var/lib/jenkins/jobs/zodiac chicago smoke/workspace/zodiac/zodiac/tests/servers/test_server_actions.py", line 33, in setUp
17:30:51 self.server = ServerGenerator.create_active_server()
17:30:56 File "/var/lib/jenkins/jobs/zodiac chicago smoke/workspace/zodiac/zodiac/tests/__init__.py", line 27, in create_active_server
17:30:56 client.wait_for_server_status(created_server.id, 'ACTIVE')
17:30:57 File "/var/lib/jenkins/jobs/zodiac chicago smoke/workspace/zodiac/zodiac/services/nova/json/servers_client.py", line 193, in wait_for_server_status
17:30:59 raise exceptions.BuildErrorException('Build failed. Server with uuid %s entered ERROR status.' % server_id)
17:31:01 BuildErrorException: u"u'Build failed. Server with uuid e0845137-61d7-48b8-9db8-128db00cd7b5 entered ERROR status.'
17:31:01 we aim to automate, so if such logic is not in the test, we would need to parse output messages
17:31:03 Ack
17:31:05 https://gist.github.com/3da4cc395268f5ca36cb
17:31:10 Try that instead, bit easier to read :)
17:31:46 AntoniHP: automate what exactly? the reading of test results to put on some report? Then we can just use xunit output, no?
17:31:59 jaypipes: exactly!
17:32:20 jaypipes: by having separate entries in the xunit output, we do not need to be very smart about parsing error messages
17:32:30 AntoniHP: but all that would mean is a simple --with-xunit
17:32:44 jaypipes: no, because you need to parse the message in the result
17:32:58 to see which case has happened
17:33:19 I can't really post a link to my Jenkins reports, but that's pretty much what I have now with --with-xunit
17:33:29 otherwise you have code that pinpoints the failure, and the captured output can be used for technical data
17:33:31 AntoniHP: If your automation depends on the sequence of E, F, ., and S in the test output, then something is more fundamentally wrong than the order of the test method execution IMHO
17:34:36 jaypipes: Create server sometimes fails and sometimes is OK. And some tests fail not because of this.
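For reference on the --with-xunit option discussed above, a small sketch of driving nose with its built-in xunit plugin; the test path and output filename are placeholders:

# Programmatic equivalent of: nosetests tempest/tests --with-xunit --xunit-file=nosetests.xml
import nose

if __name__ == '__main__':
    nose.run(argv=[
        'nosetests',
        'tempest/tests',               # test path is a placeholder
        '--with-xunit',                # enable nose's built-in xunit plugin
        '--xunit-file=nosetests.xml',  # machine-readable results for reports
    ])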
17:34:38 AntoniHP: for instance, what happens when you insert a new 00X method and F.. becomes F.F.? How does your automated reporting handle that?
17:35:04 AntoniHP: you would have to make a corresponding change to your automation report, no?
17:35:19 that is a question for donaldngo_hp
17:35:26 nati_: that's a totally separate issue :)
17:36:18 but still this allows for a) continuing execution of following steps
17:36:20 AntoniHP: I guess my hesitation is about changing to a test class/method style just to support a certain type of output for the test run.
17:36:36 can't we achieve both what Antoni wants (which is each test class is a testing scenario with dependent steps) and what Tempest provides, which is code reusability through service classes?
17:36:50 jaypipes: I agree with that statement
17:37:10 AntoniHP: My case is that if an assertion fails, I probably don't want to make any more assertions, and the rest will likely fail and dirty my results
17:37:18 donaldngo_hp: but what we are saying is that test *methods* should contain a whole dependent series of actions, not the class. That way, there is no need to have dependency tracking.
17:37:25 I think fundamentally this problem comes from using a unit testing framework for other types of tests
17:37:46 AntoniHP: virtually every functional/integration testing framework derives from unit test frameworks.
17:37:53 dwalleck: +1
17:38:29 jaypipes: yes, current test methods already contain dependent actions.
17:38:33 yes, but I have a feeling that we are still quite bound by thinking of those tests as single unit tests
17:38:59 AntoniHP: they aren't. :) test *methods* are single functional tests.
17:39:04 jaypipes: there will still be dependencies in one way or another. In Antoni's approach I think it's a logical grouping of steps to run. Using methods you still have to keep track that you need to do a before b before c, etc.
17:39:17 AntoniHP: with the test class housing shared resources the test methods use.
17:39:52 jaypipes: my proposal is that test *classes* are single functional tests with a few test *methods* or generators (implementation detail)
17:39:54 I don't think switching frameworks would solve that problem. Whether unit, integration, or functional, each test has a goal. It makes assertions towards that result and then ends
17:40:05 donaldngo_hp: no, that's wrong. if the test method is a series of dependent actions, failed assertions in a or b will mean c will not be executed...
17:40:39 donaldngo_hp: sorry, shouldn't say that's "wrong"... :) just my opinion...
17:40:59 I think what we're talking about only applies to some tests as well. I'm not sure I could see that style in use for negative tests, say one verifying that if I use an invalid image, a server won't build
17:41:26 Would it be fair to say that we care most about these results for the more core/smoke/positive tests that we have?
17:41:36 err, more=most
17:41:40 AntoniHP: and in your approach, if test method 002_xxx failed, then test method 003_xxx should be skipped, right?
17:41:51 no
17:42:21 if 1) fails then 2) and 3) skip, if 1) succeeds then 2) and 3) execute
17:42:34 because we can get a malformed API response, yet still be able to actually boot the VM
17:42:47 but then if 3 depends on 2 and it is not skipped, 3 will fail, which would be a false positive
17:43:28 AntoniHP: right, and we are arguing that those three steps are assertions that should be in a single test method called test_basic_vm_launch(), otherwise you need to add stuff to the test case framework to handle dependencies between test methods
17:43:29 dwalleck: yes, there are different scenarios possible; sometimes a test would be just like assertions and sometimes not
17:43:32 I think we need code here...
17:44:01 jaypipes: that is a false statement, generators allow executing logical flows without any nosetest additions
17:44:14 I can assert that I can create tests that give the same results, but are not dependent
17:44:31 AntoniHP: and putting the assertions in a single test method allows executing logical flows without generators ;)
17:44:52 Which then breaks the dependency chain, and allows for class-level parallel execution and for isolated test execution
17:45:01 jaypipes: no, because the first raised assertion stops the test method
17:45:11 AntoniHP: yes, that's what you want!
17:45:46 jaypipes: not in the case of an integration test; as I mentioned before, a malformed response from a REST call does not indicate the final result of the initial call
17:45:59 AntoniHP: So how about this...why not re-write some of the core servers tests in the style you propose
17:46:11 AntoniHP: that's a totally different test than "launch this VM in a normal operation" though :(
17:46:20 That way we're talking about concrete things instead of concepts
17:46:25 dwalleck: ++
17:46:45 ok, I will do this
17:46:47 I think it would be easier to be able to put this all on the table and compare things with real world examples
17:47:26 agreed
17:47:58 And that way we can see the results, compare the output, and see what is different and/or lacking
17:48:09 alrighty, let's let AntoniHP put some example code up to a pastebin/gist...
17:48:19 Until then, I don't think further discussion will help much
17:48:38 ok
17:52:11 Good, I think that will help quite a bit
17:52:39 how about we set up some time where we can see the code on someone's desktop? I think we would all reach the end goal a lot faster than our current approach of code pasting
17:53:25 donaldngo_hp++
17:53:30 donaldngo_hp: I think that's a good idea. I'd still like to have a chance to see and run it beforehand as well
17:53:30 we can discuss in real time
17:54:07 donaldngo_hp: I think I'd actually prefer pastes and the public mailing list for discussion...
17:54:50 And it may help if I also share what the tempest results I'm using now look like. I think there's quite a bit in the --with-xunit results that is fairly helpful
17:56:06 dwalleck: would love to see what your report looks like
17:56:30 awesome, I'll find a way to get that viewable
17:56:42 donaldngo_hp: are you using xUnit output for the feed into your reports?
17:56:46 And then we can see better what we have, and what is missing
17:57:09 jaypipes: yeah, we are using xunit to produce xml and then aggregate it into a junit-style report
17:57:27 donaldngo_hp: k
17:58:13 donaldngo_hp: Ahh, then you're probably seeing pretty much what I am
17:58:21 I can send our report out to the group as well
17:58:29 donaldngo_hp: yes, please do! :)
17:58:55 let's use the main mailing list, though, with a [QA] subject prefix... the team mailing list is proving unreliable at best :(
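A sketch of the kind of xunit aggregation donaldngo_hp describes above; the file locations are placeholders, and the attribute names assume the <testsuite tests= errors= failures= skip=> root element that nose's xunit plugin typically writes:

import glob
import xml.etree.ElementTree as ET

totals = {'tests': 0, 'errors': 0, 'failures': 0, 'skip': 0}
for path in glob.glob('results/*.xml'):   # placeholder location for per-run xunit files
    suite = ET.parse(path).getroot()      # one <testsuite> root per nose run (assumed)
    for key in totals:
        totals[key] += int(suite.get(key, 0))

print("ran %(tests)d: %(errors)d errors, %(failures)d failures, "
      "%(skip)d skipped" % totals)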
17:59:20 Though I'd like to add more...for example, my devs love that I say that a server failed to build, but without more info (the server id, IP, etc), it's not much help. I'm trying a few things to make that better
17:59:42 * dwalleck ideally would like to pull error logs directly from the core system, but not today
17:59:55 dwalleck: that's what the proposed exceptions branch starts to address :)
18:00:19 jaypipes: yup!
18:00:29 dwalleck: and I've been creating a script that does a relevant log file read when running tempest against devstack...
18:01:54 jaypipes: I was thinking along those lines. It's a good start, but I'm afraid of how verbose it could be
18:02:32 dwalleck: well, the script I have grabs relevant logs, cuts them to the time period of the test run, and then tars them up :)
18:02:58 dwalleck: figured it would be useful for attaching the tarball to bug reports, etc
18:03:03 nice!
18:03:12 That sounds very useful
18:03:31 dwalleck: yeah, just can't decide whether the script belongs in devstack/ or tempest!
18:03:45 good question
18:04:05 when you decide, can you post a link to it on the list?
18:04:19 sounds like an alternate plugin for tempest for those using devstack
18:04:50 dwalleck: speaking of that... one other thing we all should decide on once we come to consensus on the style stuff is when to have tempest start gating trunk :) currently, only some exercises in devstack are gating trunk IIRC...
18:04:54 AntoniHP: absolutely!
18:05:38 jaypipes: I was thinking of the same thing. When I saw the gating trunk email, I was excited until I realized it wasn't on Tempest :)
18:05:40 dwalleck: right.. I added the tempest/tools/conf_from_devstack script recently to allow someone to generate a tempest config file from a devstack installation... very useful after running stack.sh ;)
18:06:12 dwalleck: since stack.sh wipes everything and installs new base images, which are needed in the tempest conf :)
18:06:20 * jaypipes needs to blog about that...
18:06:52 Hmm...I would say once we can confidently say we have a solid set of smoke tests that we consider to be reliable. That seems like a reasonable goal
18:07:56 dwalleck: ++
18:08:01 dwalleck: we're getting there...
18:08:41 I think we're close. The one thing I'm wrestling with is that it's a bit hard to visualize coverage based on the bug list in Launchpad
18:09:28 dwalleck: agreed. though the tags help a bit..
18:10:14 jaypipes: They do. I'm still going to keep bouncing that idea around in my head
18:10:58 Well, good folks, I need to bow out - off to the next meeting
18:11:59 Or not :) Jay is stepping away for a sec
18:12:49 nati_: Are you still here?
18:13:05 AntoniHP: how about we just comment on the code on the mailing list? ok with you?
18:13:35 I think using the list would be more productive, as it is less interactive and code needs time to be read
18:13:42 AntoniHP: not a problem.
18:13:52 ok, good discussion so far; we will continue on the ML.
18:13:56 #endmeeting