17:01:40 <jaypipes> #startmeeting
17:01:41 <openstack> Meeting started Wed Dec  7 17:01:40 2011 UTC.  The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
17:02:48 <jaypipes> looks like a number of folks aren't here yet... let's give a couple minutes...
17:02:53 <lloydde> is there a QA meeting now?
17:02:57 <jaypipes> lloydde: yep...
17:02:58 <lloydde> oh, just answered my question
17:03:12 <lloydde> Thanks, I've been having connectivity problems.
17:03:27 <jaypipes> np
17:04:02 <rohitk> morning!
17:04:39 <jaypipes> rohitk: mornin.
17:04:54 <jaypipes> OK, let's get started I guess..
17:05:07 <jaypipes> #topic Tempest status
17:05:28 <jaypipes> Daryl, Rohit and myself spent time this weekend getting Tempest a bit more orderly...
17:05:55 <jaypipes> There were issues with the configuration, with auth_url, and a number of other things that were preventing a simple nosetests -x tempest from working.
17:06:24 <donaldngo_hp> thats awesome!
17:06:31 <jaypipes> I'm still working through trying to get a clean run of Tempest. So far, I've found many of the test cases have side-effects
17:06:42 <jaypipes> which is kinda annoying and I'm working on fixing that up.
17:07:16 <jaypipes> Daryl also completed the storm -> tempest renaming and so at least we're done with that milestone and merge hell
17:07:33 <jaypipes> what we need now is a concerted effort to get tempest stabilized and gating trunk..
17:07:41 <jaypipes> and that means a couple things:
17:08:22 <Ravikumar_hp> jaypipes: yes. I pulled the code yesterday and saw the changes, but it looks like the config still looks for the storm folder or something
17:08:30 <jaypipes> 1) We need lieutenants for each of the major pieces...  these folks should be responsible for carefully going over a section of the test cases
17:08:42 <jaypipes> Ravikumar_hp: that's been fixed :)
17:08:49 <jaypipes> Ravikumar_hp: pull origin master...
17:08:57 <Ravikumar_hp> ok. thanks
17:09:22 <jaypipes> 2) We need to work with jeblair and mtaylor to get Tempest runnable against a devstack-deployed cluster
17:09:52 <jaypipes> #2 is what I've been struggling to fix up on my local machine... and I've been reporting bugs to horizon and tempest steadily for the past 4 days ;)
17:10:33 <jaypipes> I will take the image test cases. I'm looking for volunteers to focus on the server tests.
17:10:37 <rohitk> on 1) we have started working on that and filing bugs
17:10:56 <jaypipes> And if someone wants to lead the flavors tests, that might be an easy one to start with
17:11:03 <jaypipes> rohitk: yep, I've noticed.
17:11:16 <jaypipes> rohitk: still I'd like to have a go-to person for each major component of testing
17:11:18 <rohitk> work-in-progress though
17:11:57 <rohitk> ah, yes, i'll get back to you on that shortly
17:12:02 <jaypipes> rohitk: and it's now 1 week out from the E2 milestones and we're still a ways away from a gateable solution.
17:12:10 <rohitk> agreed
17:12:11 <rohitk> totally
17:12:17 <nati> Hi, sorry I'm late
17:12:40 <jaypipes> nati: np
17:12:46 <jeblair> regarding #2, we're ready to run tempest against stable/diablo, and we can start gating on that soon.  the next thing we would expect to do is to try to get devstack configuring milestone-proposed, and then run tempest there.
17:13:11 <jaypipes> jeblair: yup. though tempest doesn't even run cleanly yet...
17:13:14 <jeblair> once we can run something on trunk, we're likely to do that post-commit for a while before gating.
17:13:24 <jaypipes> jeblair: sure
17:13:48 <jeblair> yep, just a quick update from my perspective.  we're ready to consume tempest when you think it's ready for us. :)
17:13:59 <jaypipes> jeblair: awesomeness
17:14:01 <jeblair> (on stable/diablo)
17:14:19 <jaypipes> jeblair: I expect it will be at least another week.... just judging from the number of failures I'm seeing
17:14:52 <nati> jeblair++
17:15:25 <jaypipes> I think the biggest issue I've seen with the test cases so far is that most of them contain side-effects .. and some even depend on the side effects of other tests cases, which is not good.
17:15:28 <nati> jeblair: Is there any doc or memo for your configurations?
17:15:50 <rohitk> has anyone got a 100% pass result with the tests?
17:15:53 <jaypipes> So, I'm going to try to get some documentation up on qa.openstack.org today that highlights how to avoid side effects in test cases
17:16:38 <nati> jaypipes: I think some test cases could depend on each other.
17:16:41 <jaypipes> rohitk: not me.... plus, there's a bunch of stuff you need to do to a stock devstack deployment to even get past one or two tests -- namely, you need to remove the silly ratelimit middleware
17:17:06 <jaypipes> nati: no. otherwise there is no way to get decent parallelization in the testing...
17:17:07 <jeblair> nati: https://raw.github.com/openstack/openstack-ci/master/slave_scripts/devstack-vm-gate-host.sh
17:17:21 <jeblair> nati: it's devstack with the localrc in that file
17:17:27 <jaypipes> nati: all test cases need to create, query, and then delete fixtures that the tests look for
17:17:30 <rohitk> I have gotten most tests to re-run after fixing the side-effects you mentioned; request that QAers file bugs if you see inconsistencies
17:17:38 <nati> jeblair: Thanks
17:17:49 <jaypipes> rohitk: cool. you push code yet?
17:18:15 <rohitk> have fixes in pipeline, maybe by today/tomorrow
17:18:19 <nati> jaypipes: Ideally, that is true. But when we start writing tests, it is not an effective way to implement them
17:18:21 <jaypipes> gotcha
17:18:47 <nati> jaypipes: I think we should introduce a @dependent decorator such as backfire
17:18:48 <jaypipes> nati: I will respectfully disagree ;)
17:19:17 <rohitk> well, me too, dependencies just build up; they should be handled in setUp(), tearDown()
17:19:46 <nati> jaypipes: hmm, or we should reuse test method for another test
17:20:31 <jaypipes> test cases should test a series of related actions...
17:21:15 <nati> jaypipes: The current code looks like test case == action. Maybe this is another problem
17:21:53 <jaypipes> so what I am referring to is the (current) practices of doing a test_get_image_metadata(), then test_set_image_metadata() and a test_delete_image_metadata(). Those tests all hit a class-level image and assume state changes from other test methods have occurred. Those should just be a single test method IMHO
17:22:31 <rohitk> jaypipes: agree, also a @depends_on decorator would be useful when the test depends on external resources, for example creation of key-pairs or libvirt, but tests shouldn't depend on each other
17:22:49 <jaypipes> and that single test method can be run in parallel if it creates its own fixture, tests against that fixture, and cleans up after itself.
17:22:56 <jaypipes> rohitk: sure, agreed
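(To make that pattern concrete, here is a minimal sketch of the self-contained style jaypipes describes above. The in-memory FakeImagesClient is a stand-in for the real Tempest images client, whose interface may differ; the point is only that the test method creates, exercises, and deletes its own fixture.)

    import unittest


    class FakeImagesClient(object):
        """In-memory stand-in for an images client, for illustration only."""

        def __init__(self):
            self._images = {}

        def create_image(self, name):
            image_id = 'img-%d' % (len(self._images) + 1)
            self._images[image_id] = {'name': name, 'metadata': {}}
            return image_id

        def set_image_metadata(self, image_id, meta):
            self._images[image_id]['metadata'].update(meta)

        def get_image_metadata(self, image_id):
            return dict(self._images[image_id]['metadata'])

        def delete_image(self, image_id):
            del self._images[image_id]


    class ImageMetadataTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # Shared, read-only state only: the client (and, in Tempest, config).
            cls.client = FakeImagesClient()

        def test_set_and_get_image_metadata(self):
            # The method owns its fixture: create it, assert against it,
            # and delete it, so it can run in parallel with other tests.
            image_id = self.client.create_image(name='metadata-fixture')
            try:
                self.client.set_image_metadata(image_id, {'distro': 'ubuntu'})
                meta = self.client.get_image_metadata(image_id)
                self.assertEqual('ubuntu', meta['distro'])
            finally:
                self.client.delete_image(image_id)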
17:23:02 <nati> jaypipes: if so, how about another test case which use test_get_image_metadata?
17:23:16 <donaldngo_hp> that's what I struggle with in our framework. we have super long end-to-end scenarios
17:24:08 <jaypipes> nati: then that test case is testing a series of actions that should create an image fixture, use the image metadata, etc, and then clean up after itself.
17:24:19 <jaypipes> donaldngo_hp: you struggle with test methods that are too long?
17:24:33 <jaypipes> donaldngo_hp: or struggle with test methods that interrelate with each other?
17:24:42 <nati> jaypipes: I got it. Basically, I agree with you to avoid dependency. I also want to discuss test action reuse.
17:25:34 <donaldngo_hp> some of the tests cases require repeated actions so you have test name like test_gettoken_create_container_upload_file
17:25:37 <jaypipes> #action jaypipes to put together some example tests in QA docs
17:25:53 <donaldngo_hp> and test_gettoken_create_container_upload_file_delete_file_delete_container
17:26:19 <jaypipes> donaldngo_hp: but the second test method covers everything from the first... so the first is redundant and should be removed...
17:27:21 <donaldngo_hp> this is a scenario that i just made up, but sometimes it's needed to run tests in series but treat them as a group
17:27:29 <jaypipes> Sometimes, for longer integration tests like we're dealing with in Tempest, I prefer to have a "use case name" and name the test based on the use case. For example:
17:28:16 <jaypipes> A use case for a user that creates a snapshot of a server, stores that image in Glance, then later deletes the snapshot.
17:28:50 <jaypipes> I might call that use case "snapshot_archive_delete" and make the test method called test_snapshot_archive_delete()
17:29:11 <jaypipes> and add comments in the docstring or top of the module that detail the use cases the tests in the module are testing
17:29:43 <Ravikumar_hp> jaypipes: that test makes calls to common functions ... assembled from blocks
17:29:49 <jaypipes> In this way, you start to get away from the (unittest-focused) behaviour of splitting everything into tiny little test methods -- which is great for unit testing but not so much for functional testing
17:30:19 <jaypipes> Ravikumar_hp: sure, absolutely, but those building blocks should operate on local state, not a shared class state, as much as possible...
17:30:40 <jaypipes> Ravikumar_hp: pretty much the only thing I think should be in the class-level state should be the openstack.Manager and config stuff.
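(A sketch of the use-case naming jaypipes describes, again with a hypothetical in-memory stand-in rather than the real Tempest openstack.Manager and clients; the docstring records the use case and the method owns all of its own state.)

    import unittest


    class FakeCloud(object):
        """Tiny in-memory stand-in for the servers/images clients."""

        def __init__(self):
            self.servers = {}
            self.images = {}

        def create_server(self, name):
            self.servers[name] = 'ACTIVE'
            return name

        def snapshot_server(self, server_id, image_name):
            self.images[image_name] = server_id
            return image_name

        def delete_image(self, image_id):
            del self.images[image_id]

        def delete_server(self, server_id):
            del self.servers[server_id]


    class SnapshotUseCaseTest(unittest.TestCase):
        """Use case covered here:

        snapshot_archive_delete: a user snapshots a server, checks the
        snapshot is stored as an image, then deletes the snapshot.
        """

        @classmethod
        def setUpClass(cls):
            # Only the client/config (and auth) live at class scope.
            cls.cloud = FakeCloud()

        def test_snapshot_archive_delete(self):
            server_id = self.cloud.create_server('uc-server')
            try:
                image_id = self.cloud.snapshot_server(server_id, 'uc-snap')
                self.assertIn(image_id, self.cloud.images)
                self.cloud.delete_image(image_id)
                self.assertNotIn(image_id, self.cloud.images)
            finally:
                self.cloud.delete_server(server_id)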
17:30:54 <AntoniHP> well, not really for us - this high resolution of tests means that we can produce more meaningful bug reports
17:31:20 <jaypipes> AntoniHP: could you elaborate on that?
17:31:42 <AntoniHP> your test case would be more useful as three tests with side effects
17:31:43 <jaypipes> Ravikumar_hp: and authentication tokens/users should be in class-level state too...
17:31:54 <jaypipes> AntoniHP: why?
17:32:20 <AntoniHP> easier to report, as failures are more visible
17:32:45 <jaypipes> AntoniHP: not sure I agree with you on that...
17:32:52 <AntoniHP> it is more useful to report that storing in glance failed, than that images do not work
17:33:06 <nati> AntoniHP: I agree. API A: OK, API B: OK, but API A and API B together: NG (no good). I think your point is this
17:33:18 <AntoniHP> the whole point of automation is making development faster, not to make ideal test cases
17:33:26 <jaypipes> AntoniHP: that has nothing to do with whether you have test methods own their own fixtures...
17:34:02 <AntoniHP> yes, but nosetest was created for unittests, which in fact we do not run
17:34:23 <AntoniHP> i think a good compromise would be to refer to a set of nosetests cases as one test case
17:34:24 <Ravikumar_hp> Jaypipes: yes. even in integration testing to some extent  we need to modularize
17:34:56 <jaypipes> AntoniHP: sorry, you lost me a bit on that last sentence... could you rephrase?
17:35:39 <donaldngo_hp> would it be beneficial for all of us to schedule a remote desktop meeting to go over these issues so we can actually discuss these topics in practice
17:35:41 <AntoniHP> i think a test case should consist of a set of sequences of single nosetest tests
17:36:00 <Ravikumar_hp> common functions . and the integration tests will be written like create_container (test1), add_file (test1.pdf, test1) , delete_file () , delete_container
17:36:03 <AntoniHP> for example a test case should be a class, and should always be run as class
17:36:16 <Ravikumar_hp> AntoniHP: ++
17:36:57 <AntoniHP> instead of a single nose test mapped to a single test case
17:37:05 <Ravikumar_hp> within the test case class, make calls to methods which are mostly common functions
17:37:13 <jaypipes> AntoniHP: well then we are using the wrong tool, if that is how we want to structure things... nosetest (and Python's unittest2 module on which it is based) would require that we name our methods like test_001_some_action(), test_002_next_action()
17:37:24 <rohitk> all that nosetests does is discover and run based on certain patterns, are fixtures within the realm?
17:38:02 <AntoniHP> jaypipes: yes, we use an imperfect tool, but there are libraries that allow us to modify this tool to suit our needs
17:38:22 <jaypipes> rohitk: all I'm trying to say is that I believe fixtures should be created and destroyed within test methods. AntoniHP is advocating for having class-level fixtures that test methods modify.
17:38:50 <AntoniHP> jaypipes: the advantage with nosetests is that it already fits well with reporting, autodiscovery, etc
17:39:08 <AntoniHP> last but not least, we already try to make all tests work with nose
17:39:14 <jaypipes> AntoniHP: it actually already works well for parallelization, too, if we design test cases and methods properly ;)
17:39:32 <nati2> I think we are saying the same thing in different ways
17:39:57 <jaypipes> nati2: no, I'm not so sure. I think AntoniHP is saying that test fixtures should remain in class-level scope. right, AntoniHP?
17:40:05 <nati2> Class implementation or method implementation is not important. IMO
17:40:29 <nati2> Action implementation and test cases should be separated. Isn't that so?
17:40:35 <AntoniHP> jaypipes: yes, maybe not for all, but I think that would be good option
17:40:45 <jaypipes> nati2: if test methods in a test case class can be run in parallel, that is important, no?
17:40:58 <donaldngo_hp> i really like how kong deals with the sequencing of test flows within a test class. we could declare a class as tests.FunctionalTest and still be able to run our test classes in parallel but tests within a test class are run in sequence
17:41:00 <jaypipes> AntoniHP: that is how the existing Tempest test case classes work...
17:41:09 <nati2> I think  class-level scope should be converted to one method
17:41:15 <jaypipes> nati2: ?
17:41:21 <nati2> So they are not difference
17:41:39 <nati2> jaypipes:  I think it is same, one class has one method.
17:41:59 <jaypipes> donaldngo_hp: :) I don't like having to name all my tests 098_blah, 097_blah :)
17:42:02 <AntoniHP> I think one class, many methods in sequence
17:42:04 <rohitk> AntoniHP: is your concern specific to larger system test cases?
17:42:05 <nati2> jaypipes: A big method could not be run in parallel.
17:42:13 <dwalleck> Whew, finally
17:42:28 <jaypipes> nati2: sure it could! just make sure the test method creates and destroys its own fixtures.... that's my whole point.
17:42:43 <nati2> AntoniHP: Yes. But if these methods depend on each other, it looks like one big method, the same as Jay mentioned.
17:43:20 <jaypipes> dwalleck: welcome :) we are having a rousing discussion about how to structure test cases and test case methods.
17:43:29 <AntoniHP> nati2: yes, but it is better to have them as multiple methods
17:43:33 <rohitk> IMO for functional/medium tests, fixture handling by individual test methods works fine
17:43:40 <dwalleck> explicit test dependencies can make for very flaky tests
17:43:48 <AntoniHP> nati2: because it gives more detail in reporting
17:43:54 <dwalleck> jaypipes: Just my type of discussion :)
17:43:56 <nati2> AntoniHP: We can also define helper methods for tests, so it is the same, IMO
17:44:09 <jaypipes> dwalleck: to summarize, AntoniHP (and I think Ravikumar_hp and donaldngo_hp) are in favour of class-level fixtures (like we have now, but which are being misused IMHO) and I am in favour of test methods creating and destroying their own fixtures.
17:44:16 <nati2> AntoniHP: Ah, reporting is another point
17:44:45 <nati2> I think class method level isolation is good for reporting
17:45:01 <nati2> jaypipes: How about that? It is a kind of big method
17:45:03 <dwalleck> Hmm...if/when we move to parallelization, class level fixtures will create some challenges
17:45:12 <jaypipes> dwalleck: that was my point exactly.
17:45:22 <nati2> dwalleck: If we write a big method, it is the same
17:45:49 <dwalleck> From a best practices standpoint, other than basic connectivity, I'm always for a test maintaining its own data
17:45:55 <dwalleck> nati2: How so?
17:46:03 <jaypipes> dwalleck: test *case* or test *method*? :)
17:46:20 <nati2> For instance, class X with test_A, test_B, test_C is the same as a test_X which does (call_A, call_B, call_C)
17:46:20 <dwalleck> A long test should not have a negative effect on paralleziation
17:47:02 <dwalleck> jaypipes: Well that's very specific :) When I say test case, I believe I mean test method based on what you're saying
17:47:04 <nati2> dwalleck: we can have many small test classes as test cases
17:47:28 <jaypipes> nati2: if test_A, test_B, and test_C all create their own fixtures, they can be run in parallel. But that's NOT the same thing as testing that the *sequence of A, B, and C* happened in a correct fashion.
17:47:34 <dwalleck> So right now we have test classes, each with many test methods, which to me are individual test cases
17:47:47 <AntoniHP> my proposition is for a test case to be one test class with many nosetest/unittest test methods inside
17:48:01 <donaldngo_hp> jaypipes: i think the 001, 002 naming facilitates test steps in a test case, and the test class will be the test case
17:48:05 <jaypipes> AntoniHP: we already have that... that's how it is today.
17:48:23 <rohitk> AntoniHP: that is exactly how the unittests work today
17:48:24 <AntoniHP> and we are happy with that?
17:48:24 <nati2> jaypipes: Then we could have class XA test_A, Class XB test_B,
17:48:33 <dwalleck> donaldngo_hp: I'm confused. Why would we split a test case across many test methods?
17:48:45 <jaypipes> AntoniHP: and unfortunately, you need to know the side effects that each method has to understand the beginning state of another test method :(
17:49:14 <donaldngo_hp> dwalleck: for resolution and modularity
17:49:25 <dwalleck> Perhaps we all have different definitions of what a test case is. To me, a test case is a very atomic thing. If I'm asserting on many things, that becomes a test scenario
17:49:48 <AntoniHP> jaypipes: yes but I do not see anything wrong about that
17:49:52 <donaldngo_hp> your test case would piece together calls to modules
17:50:01 <Ravikumar_hp> jaypipes: unit tests can create test data and remove data, but integration tests depend on other components; we struggle a bit
17:50:41 <dwalleck> donaldngo_hp: I would have to see an example to understand better. I think I see what you're saying, but to me I think I normally handle that as helper methods to a test class
17:50:51 <Ravikumar_hp> donaldngo_hp: integration tests case has to be assembly . calls to modules (functions)
17:50:54 <dwalleck> If I have common code I want to share between tests
17:51:25 <dwalleck> So one step backwards: what problem are we trying to solve?
17:52:00 <dwalleck> Right now I think we've reached a best practices discussion, on which it sounds like we all have our own opinions :)
17:52:21 <jaypipes> dwalleck: we are trying to decide whether side-effects (changes of state to class-level scope/variables) is OK.
17:53:06 <nati2> dwalleck: I agree for "If I have common code I want to share between tests"
17:53:25 <nati2> I think reporting is only discussion point
17:53:56 <dwalleck> jaypipes: Ahh, then we're stuck. To me, side effects are a bad thing. In my opinion, no one test case/method should assume anything, other than provided reference data
17:54:08 <jaypipes> dwalleck: I agree with you.
17:54:09 <dwalleck> nati2: So what are you looking for from reporting?
17:54:20 <AntoniHP> I half agree
17:54:25 <jaypipes> :)
17:54:27 <AntoniHP> I agree that test case should not assume
17:54:36 <AntoniHP> I do not agree that method should not assume
17:55:05 <nati2> dwalleck: AntoniHP's point is that if we implement a test case as a test class, we can get detailed report from nose result.
17:55:09 <jaypipes> it's a matter of scope... AntoniHP and others think the atomic unit is the test case class. dwalleck and I think the atomic unit is the test method.
17:55:13 <dwalleck> Sorry, when I say test case, I mean test method. When you say test case, are you thinking test class?
17:55:14 <nati2> AntoniHP: Is this right?
17:55:34 <AntoniHP> i think jaypipes just explained what i mean much better than i did
17:55:48 <dwalleck> I think maybe I get it. Let me try an example:
17:56:20 <dwalleck> So if there was a test class called "Create Server", it would have many test methods, each of which asserts very specific things about the server creation process
17:56:59 <jaypipes> dwalleck: right.. it would have methods named test_001_create_instance(), test_002_check_instance_active(), etc
17:57:00 <Ravikumar_hp> i would create test_Create_Server with a couple of methods
17:57:04 <dwalleck> So the passing of each test method has a single assertion, each which tells you something specific about that process
17:57:16 <jaypipes> dwalleck: right.
17:57:27 <AntoniHP> dwalleck: yes
17:57:30 <dwalleck> Right...so I see the fundamental difference. I do the same thing, except I handle that at the test method level
17:57:45 <jaypipes> dwalleck: right... that's the crux of this point.
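(For contrast, a bare sketch of the sequenced style being discussed. nose and unittest run test methods in alphabetical order, which is what the numeric prefixes rely on; each step assumes the class-level state left behind by the previous one, which is exactly the coupling jaypipes and dwalleck are objecting to. The client calls are stand-ins, not real Tempest API.)

    import unittest


    class CreateServerSequenceTest(unittest.TestCase):
        """One 'test case' expressed as an ordered sequence of methods,
        each making one assertion and relying on the side effects of the
        methods that ran before it."""

        server = None   # shared, mutable class-level state

        def test_001_create_instance(self):
            # Stand-in for a real create_server() call.
            type(self).server = {'id': 'srv-1', 'status': 'BUILD'}
            self.assertEqual('BUILD', self.server['status'])

        def test_002_instance_becomes_active(self):
            # Assumes test_001 already ran and populated self.server.
            self.assertIsNotNone(self.server)
            type(self).server['status'] = 'ACTIVE'   # stand-in for polling
            self.assertEqual('ACTIVE', self.server['status'])

        def test_003_delete_instance(self):
            self.assertEqual('ACTIVE', self.server['status'])
            type(self).server = None   # stand-in for delete_server()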
17:57:50 <Ravikumar_hp> 1) create_server(a), 2) Create_server(b) 3) create_server(invalid)
17:57:59 <jaypipes> Having said my opinions, it sounds like dwalleck and I might be in the minority here, so I'd be willing to change direction on this. But, we need to make a decision TODAY on this and stick to it...
17:58:14 <dwalleck> I don't mind to not see the results of each individual assertion normally, I just want to see where it failed
17:58:45 <dwalleck> Ravikumar_hp: #3 is what I would have a problem with. Now you're not testing something about that specific test case
17:58:49 <jaypipes> dwalleck: I agree with you... but I can change if the team wants to. Would you be willing to go the test_001_blah, test_002_next_blah route?
17:59:03 <rohitk> dwalleck: for your test_001 and test_002, is it accessing common code "create_server()"? or is the create done in each test
17:59:17 <dwalleck> If #1 was assert_response_code_valid, #2 verify password works, etc, I could possibly understand that
17:59:33 <rohitk> i just want to understand where atomic unit lies
17:59:39 <jaypipes> rohitk: for the test_001, test_002, the tests would be run in numeric order and would assume state changes from previous test method calls.
17:59:40 <dwalleck> rohitk: Yes. That's the point of having the servers clients and such
17:59:57 <dwalleck> We would end up with very, very many test classes this way
17:59:58 <rohitk> jaypipes: ah, then that's not too good
18:00:23 <jaypipes> AntoniHP: did I state the above correctly?
18:00:29 <jaypipes> Ravikumar_hp: you too ^^
18:00:37 <AntoniHP> the actual implementation of such a sequence could be in the form of a test generator, rather than numbers
18:00:42 <dwalleck> My concern is that I often run single test cases. I'm not very comfortable with having to run a whole test class to get a single result
18:00:49 <AntoniHP> this way also setUp and tearDown are called once
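(A rough sketch of the test-generator idea: nose runs each yielded (callable, args) pair as a separately reported test, and the steps share local state from the enclosing generator instead of relying on numbered method names. Note the class must not subclass unittest.TestCase for nose to collect generator methods; the steps here are toy stand-ins.)

    class TestServerLifecycle(object):
        """Runs under nose; not a unittest.TestCase, so the generator
        method below is collected as a sequence of individual tests."""

        def test_server_lifecycle(self):
            state = {}
            # Each yield becomes its own test in nose's report, run in order.
            yield self.check_create, state
            yield self.check_is_active, state
            yield self.check_delete, state

        def check_create(self, state):
            state['server'] = {'id': 'srv-1', 'status': 'ACTIVE'}
            assert state['server']['status'] == 'ACTIVE'

        def check_is_active(self, state):
            assert state['server']['status'] == 'ACTIVE'

        def check_delete(self, state):
            del state['server']
            assert 'server' not in state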
18:00:50 <dwalleck> It becomes a time sink
18:00:56 <rohitk> dwalleck++
18:01:36 <dwalleck> so if the problem is reporting, why not modify nose to give us a result for each assertion?
18:01:52 <rohitk> if tests are dependent then we should be prepared for wasted nightly runs; one failure causes a whole lot more to fail
18:01:57 <AntoniHP> if there are many small classes, one can run each class safely
18:02:03 <dwalleck> rohitk: ++
18:02:14 <jaypipes> dwalleck: that's something you can change in your routine, though, right? simple case of nosetests tempest.tests.test_server_create vs. nosetests tempest.tests.test_server.ServerTest.test_server_create
18:02:19 <AntoniHP> I do not mean all tests to be dependent but small portions of them
18:02:22 <dwalleck> AntoniHP: But it becomes incredibly verbose
18:02:59 <rohitk> AntoniHP: in that case those tests should be intelligent enough to recover from side-effects
18:03:03 <dwalleck> jaypipes: You could if each test method in a test class didn't depend on the state created by previous test methods
18:03:11 <dwalleck> But what is being suggested is that it should
18:03:25 <jaypipes> dwalleck: yes, I understand...
18:04:30 <rohitk> AntoniHP: i can foresee your requirement for writing larger end-to-end system tests where knowledge of state is needed, but we just need one example to start off with
18:04:34 <AntoniHP> rohitk: yes, probably they should SKIP if the previous part of the sequence failed
18:04:41 <dwalleck> We seem to be at an impasse...
18:04:50 <jaypipes> in general, I'm only interested in the stuff that *failed*. I don't care about what passed... So, for me, showing the failed assertBlah() result is what I'm interested in, not necessarily seeing 001 ... OK, 002 .. OK, 003 ... OK...
18:04:52 <rohitk> AntoniHP: right
18:05:00 <AntoniHP> rohitk: this way in reporting we see the error where it happened and where tests were skipped
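(One way the skip-on-earlier-failure behaviour could look, sketched with a class-level flag; unittest.SkipTest marks the later steps as skipped, so the report shows a single failure plus skips rather than a cascade of errors. The create call is a stand-in.)

    import unittest


    class SkippingSequenceTest(unittest.TestCase):
        """Later steps in the sequence skip rather than fail once an
        earlier step has failed."""

        failed = False   # shared flag for the whole sequence
        server = None

        def _skip_if_sequence_broken(self):
            if type(self).failed:
                raise unittest.SkipTest('an earlier step in this sequence failed')

        def test_001_create_server(self):
            self._skip_if_sequence_broken()
            try:
                # Stand-in for the real create_server() call.
                type(self).server = {'id': 'srv-1', 'status': 'ACTIVE'}
            except Exception:
                type(self).failed = True
                raise

        def test_002_server_is_active(self):
            self._skip_if_sequence_broken()
            try:
                self.assertEqual('ACTIVE', self.server['status'])
            except Exception:
                type(self).failed = True
                raise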
18:05:01 <dwalleck> jaypipes++
18:05:33 <Ravikumar_hp> jaypipes++
18:05:55 <jaypipes> OK, we might be at an impasse here...
18:05:57 <AntoniHP> often I do not run my tests, and often I do not even report on bugs from my tests - the ability of tests to self-explain, skip, fail or error where needed is essential
18:06:01 <rohitk> jaypipes: agree, but for functional tests, I would also like to have the report/logs containing the "Test data" that was used
18:06:19 <dwalleck> rohitk: Why wouldn't you know what test data you used?
18:06:21 <donaldngo_hp> i like seeing passes as well as failures
18:06:31 <dwalleck> Isn't it specified implicitly?
18:06:31 <rohitk> be it a passed/failed test
18:06:38 <jaypipes> rohitk: you should check out the glance functional integration tests then :) you will be pleasantly surprised
18:06:56 <rohitk> dwalleck: im just saying that test data should be explicitly available in reports
18:07:14 <rohitk> jaypipes: gotcha
18:07:36 <jaypipes> rohitk: it should be, yes. But it should be done in a way like self.assertEquals(x, y, "x != y. Test data: %s" % my_test_data)
18:08:22 <jaypipes> OK, so what to do... :)
18:08:33 <dwalleck> Yeah, we have to be consistent
18:09:42 <dwalleck> And we're way over on time :)
18:09:53 <dwalleck> Examples? perhaps?
18:10:04 <jaypipes> well, I don't want AntoniHP and donaldngo_hp to not write test cases in Tempest because they don't like the way test case atomicity is handled... I want everyone participating. :)
18:10:38 <Ravikumar_hp> jaypipes: yes,
18:10:42 <jaypipes> yeah, examples and code speak... I'll put together examples of the two styles and we'll vote on them on the mailing list. Does that sound ok?
18:10:45 <nati2> I think a decorator could solve this
18:10:54 <Ravikumar_hp> sounds good
18:10:56 <donaldngo_hp> sounds good
18:11:03 <nati2> By using a decorator we can use both the class way and the method way
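(A sketch of what such a decorator could look like; the @depends_on name and the module-level result table are hypothetical, not an existing nose or Tempest feature. Each decorated test records its own pass/fail result, and later tests skip if a prerequisite did not pass.)

    import functools
    import unittest

    _results = {}   # test method name -> True/False, filled in as tests run


    def depends_on(*prereqs):
        """Hypothetical decorator: skip unless all prerequisites passed."""
        def decorate(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                for name in prereqs:
                    if not _results.get(name, False):
                        raise unittest.SkipTest('%s did not pass' % name)
                try:
                    outcome = func(self, *args, **kwargs)
                except Exception:
                    _results[func.__name__] = False
                    raise
                _results[func.__name__] = True
                return outcome
            return wrapper
        return decorate


    class DecoratedSequenceTest(unittest.TestCase):

        @depends_on()   # no prerequisites
        def test_001_create(self):
            type(self).server = {'status': 'ACTIVE'}   # stand-in for create

        @depends_on('test_001_create')
        def test_002_verify(self):
            self.assertEqual('ACTIVE', self.server['status'])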
18:11:11 <AntoniHP> yes, let's discuss it on the list
18:11:13 <jaypipes> ok. AntoniHP, Ravikumar_hp, I will shoot you the examples before I send to ML to double-check I "got it right"
18:11:22 <donaldngo_hp> i also like the idea of having a working session with a remote desktop and code on someone's screen
18:11:32 <dwalleck> I think examples, discussion, and then voting is reasonable
18:11:34 <jaypipes> donaldngo_hp: worked well for you and me last time! :)
18:11:37 <AntoniHP> but decorators are quite complicated
18:11:41 <donaldngo_hp> exactly!
18:11:53 <dwalleck> But I hope we can come to some agreement so everyone gets what they need
18:11:57 <jaypipes> OK, I'll get those examples done today, so expect an email from me.
18:12:00 <jaypipes> dwalleck: yep.
18:12:09 <Ravikumar_hp> ok Jay.
18:12:17 <nati2> jaypipes++
18:12:22 <jaypipes> alrighty... good discussions everyone.
18:12:25 <jaypipes> #endmeeting