17:01:02 <jaypipes> #startmeeting
17:01:03 <openstack> Meeting started Wed Nov 23 17:01:02 2011 UTC.  The chair is jaypipes. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
17:01:26 <jaypipes> #topic nati2 to give quick status update on forward-porting unit test bug fixes to trunk
17:02:08 <jaypipes> nati2: ?
17:02:29 <nati2> Yes. Jay, Donald, and Ravi are helping with the forward-porting. And Jay also made a document on how to forward-port
17:03:08 <jaypipes> nati2: how many are left?
17:03:15 <Ravikumar_hp> nati2: It is only one transaction that Donald and I did
17:03:33 <nati2> Some problems have occurred. First, the Essex code has changed, so there are many logical conflicts, especially in the test cases
17:03:49 <nati2> ok hold on
17:04:12 <westmaas> o/
17:04:15 <nati2> 39 branches left
17:04:20 <jaypipes> k
17:04:31 <jaypipes> nati2: however some of those branches are crazy big ;)
17:04:44 <jaypipes> nati2: not sure some of them will be able to be done in a single patch...
17:04:44 <nati2> Ah, for the test case branches
17:05:14 <nati2> The patch branches are small, but the test case branches are large.
17:05:19 <jaypipes> right
17:05:25 <nati2> So let's start with the bug patch branches
17:05:48 <jaypipes> nati2: well, let's deal with them one at a time, eh?
17:06:12 <nati2> at a time?
17:06:18 <jaypipes> nati2: the process I outlined in that email should work for most things... it's just that during reviews (example: https://review.openstack.org/#change,1834), we need to be sure to respond to folks
17:06:59 <jaypipes> nati2: I was just saying that most of the branches are small... we should just press forward with the smaller ones and try to get some momentum
17:07:41 <nati2> Yes I agree
17:08:15 <jaypipes> nati2: OK, well, please do respond to Mark McLoughlin on that review above... I'd like to get that smallish patch forward-ported so we can look to a good example going forward.
17:08:39 <nati2> jaypipes: I got it! Thanks
17:08:46 <jaypipes> nati2: even with 4 of us working on this, it's going to be a stretch to get this forward-porting done by the start of December...
17:09:16 <jaypipes> ok, let's move on to functional/integration testing...
17:09:24 <nati2> jaypipes: hmm, I agree
17:09:35 <jaypipes> #topic westmaas and dwalleck to give status on openstack-integration-tests progress
17:10:08 <westmaas> we have a name, and jeblair is doing prepwork on the git/gerrit migration
17:10:29 <dwalleck> Still working on bringing new branches of tests in. I'm holding off on the whole domain name concept until we have more content
17:10:44 <westmaas> we do need to find a time to do the migration - anything proposed during that time will be confused, apparently
17:10:58 <jaypipes> dwalleck: domain name concept?
17:11:08 <anotherjesse> dwalleck: for those who want to work on writing tests - what is the recommended process/examples?
17:11:17 <westmaas> dwalleck: do we have a public list of what needs to be migrated yet, and/or are you already working on things?
17:11:23 <rohitk> dwalleck: would there be a lot of rework of the existing tests after bringing in the domain concept?
17:11:27 <westmaas> er, working with others on things*
17:11:32 <dwalleck> jaypipes: The object-oriented reference we talked about last time
17:11:36 <jaypipes> ah
17:11:50 <dwalleck> rohitk: Not much, but some
17:12:04 <jaypipes> dwalleck: so, I set up https://github.com/openstack-dev/openstack-qa and started some initial documentation for integration testing...
17:12:05 <rohitk> dwalleck: thanks
17:12:25 <dwalleck> anotherjesse: I was supposed to get with jaypipes in the last week to find somewhere to put up the official docs that I have my team working from
17:12:30 <dwalleck> So I can add to that
17:12:36 <jaypipes> dwalleck: I believe jeblair has openstack-qa set up in Gerrit and we can make a job that populates qa.openstack.org the same way as ci.openstack.org
17:12:56 <jaypipes> dwalleck: https://github.com/openstack-dev/openstack-qa is the official docs :)
17:13:08 <dwalleck> westmaas: I only have additional test cases set up in Launchpad. I should also mark the test suites in progress to bring over, so there's no extra work done by folks
17:13:43 <Ravikumar_hp> https://github.com/openstack-dev/openstack-qa will map to the qa site?
17:13:44 <dwalleck> I can bring everything over in one swoop, or I can keep going test suite by test suite. I just thought going one at a time might be easier for reviews
17:13:45 <jaypipes> dwalleck: yes, please do that. I know that Ravi's team is eager to contribute tests and the last thing we want is duplication of effort of course..
17:13:56 <jaypipes> Ravikumar_hp: yes
17:14:07 <jaypipes> Ravikumar_hp: or at least the /doc/ folder in there will be...
17:14:19 <anotherjesse> jaypipes: the openstack-qa is going to talk about the strategy / format for adding tests to openstack-integration-tests?
17:14:28 <jaypipes> anotherjesse: yes
17:14:37 <anotherjesse> https://github.com/openstack-dev/openstack-qa/blob/master/doc/integration.rst <- there?
17:14:38 <jaypipes> anotherjesse: https://github.com/openstack-dev/openstack-qa/blob/master/doc/integration.rst
17:14:44 <jaypipes> anotherjesse: heh, yes.
17:15:02 <anotherjesse> look forward to "Adding a New Integration Test"
17:15:03 <anotherjesse> ;)
17:15:13 <jaypipes> anotherjesse: I just rushed up a quick starter doc...
17:15:27 <westmaas> dwalleck: is integration test the right name there?
17:15:47 <westmaas> should that file be moved to something else? functional.rst or something else?
17:16:03 <donaldngo_hp> will we be discussing general framework issues like running tests in parallel and how reports will be generated?
17:16:22 <dwalleck> westmaas: Integration is fine for right now
17:16:59 <anotherjesse> donaldngo_hp: ++
17:17:09 <dwalleck> donaldngo_hp: Parallelization can be achieved through the nose parallel execution plugin. It's a bit strange, so the option to develop a better plugin is on the table with my team
17:17:19 <donaldngo_hp> on our project we are producing a JUnit-style report from nosetests using the xmlunit plugin
17:17:31 <dwalleck> Reports can easily be generated with the xunit nose plugin
17:17:31 <jaypipes> donaldngo_hp: we can discuss whatever you'd like, however I ask that we approach the issues from a "how do we improve the existing storm test suite" angle instead of "let's rewrite the test framework to support X"
17:17:45 <dwalleck> That's what our team has been using internally
17:17:59 <dwalleck> But of course, writing a new plugin is also always an option :)
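[Sketch: one way to produce the JUnit-style report discussed above is nose's built-in xunit plugin, driven here programmatically; the tempest/ path and output filename are assumptions.]

    import nose

    # Equivalent to: nosetests tempest/ --with-xunit --xunit-file=nosetests.xml
    nose.run(argv=[
        "nosetests", "tempest/",
        "--with-xunit",                 # enable nose's JUnit-style XML report plugin
        "--xunit-file=nosetests.xml",   # report file for Jenkins to pick up
    ])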
17:18:08 <jaypipes> dwalleck: parallelization cannot, unfortunately, be achieved using the nose parallel execution plugin.
17:18:42 <donaldngo_hp> jaypipes: I ran some parallel tests last night and it's working fine. Can you clarify?
17:18:43 <jaypipes> dwalleck: I run into this virtually every time I attempt it: https://gist.github.com/1144287
17:19:35 <dwalleck> jaypipes: Since we're using class setup/teardowns, I believe nose asks you to set some fields in each test to make it aware so it doesn't stumble over itself
17:19:57 <donaldngo_hp> I created 4 sample tests that ran in 4 minutes. I set processes=4, reran, and they ran in 1 minute
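[Sketch: a minimal example of the nose multiprocess usage being described; --processes=N parallelizes execution, and the class-level _multiprocess_can_split_ flag is the kind of field dwalleck mentions for making nose aware of class setup/teardown. Test contents are placeholders.]

    import unittest

    class SampleServerTest(unittest.TestCase):
        # Tell nose.plugins.multiprocess these tests may be split across
        # worker processes; class fixtures then run once per worker.
        _multiprocess_can_split_ = True

        @classmethod
        def setUpClass(cls):
            cls.fixture = "expensive shared resource"   # e.g. a created server

        def test_fixture_present(self):
            self.assertEqual(self.fixture, "expensive shared resource")

        def test_fixture_truthy(self):
            self.assertTrue(self.fixture)

    # Run with: nosetests --processes=4 --process-timeout=120 sample_server_test.py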
17:20:04 <jaypipes> donaldngo_hp: can you clarify what "on our project" means? :) I want to make sure we aren't duplicating more work...
17:21:39 <jaypipes> donaldngo_hp: ?
17:21:40 <donaldngo_hp> jaypipes: I envision our team being able to drop in "tempest" and run something like nosetests -v tempest/ --with-nosexunit --processes=5
17:21:54 <jaypipes> donaldngo_hp: what is tempest?
17:22:09 <nati2_> jaypipes: Isn't that the new name?
17:22:13 <donaldngo_hp> tempest is a subset of "our project", meaning we will run this suite along with our web GUI tests, some jcloud tests, and Ruby wrapper tests
17:22:16 <dwalleck> The new project name
17:22:25 <jaypipes> oh
17:22:28 <jaypipes> sorry...
17:22:37 <Ravikumar_hp> no more storm
17:22:48 <donaldngo_hp> storm was a good name :(
17:22:55 <jaypipes> gotcha... sorry, wasn't aware of that
17:23:05 <nati2_> donaldngo_hp: yeah, it was already taken
17:24:11 <nati2_> donaldngo_hp: So basically, you could run tempest with nosetests -v tempest/ --with-nosexunit --processes=5?
17:24:18 <nati2_> And the results were correct?
17:24:20 <jaypipes> donaldngo_hp: OK, so is your team writing additional tests, or is your team compiling other test suites and running them?
17:24:36 <donaldngo_hp> nati2_: no, that's why I brought it up :)
17:24:50 <donaldngo_hp> what I did run was a proof of concept: creating 4 tests and running them in parallel
17:25:31 <nati2_> donaldngo_hp:  Ah, but many actual tests have dependencies.
17:25:35 <donaldngo_hp> jaypipes: yes, but those tests are not valuable to the community. For example, our web GUI tests run on Selenium and hit our website
17:25:43 <jaypipes> donaldngo_hp: I've found as long as the tests don't import eventlet, everything is fine with the parallel testing... once you import eventlet, it dies... but let's move on. If you have the parallel stuff running the integration tests, then that's good...
17:25:44 <donaldngo_hp> ---------------------- TEST AUTOMATION FRAMEWORK ----------------------
17:25:44 <donaldngo_hp> Web UI          <-->
17:25:44 <donaldngo_hp> JCloud Wrappers <-->
17:25:44 <donaldngo_hp> Ruby Wrappers   <-->   Jenkins <--> Gradle ---> JUnit Report
17:25:44 <donaldngo_hp> Tempest         <-->
17:25:44 <donaldngo_hp> CLI             <-->
17:25:46 <donaldngo_hp> TBD             <-->
17:25:48 <donaldngo_hp> ------------------------------------------------------------------------
17:25:52 <donaldngo_hp> this is what our framework looks like
17:26:25 <dwalleck> He's just injecting tempest into the middle of a larger project
17:26:34 <jaypipes> understood...
17:26:49 <jaypipes> donaldngo_hp: and you are saying that running nosetests /tempest does not work, right?
17:27:17 <donaldngo_hp> jaypipes: right now out of the box no
17:27:32 <jaypipes> donaldngo_hp: OK, then let's get a bug reported and have that fixed :)
17:27:40 <donaldngo_hp> i have been able to run some of the tests in kong
17:27:43 <jaypipes> donaldngo_hp: sounds like a pretty important thing to fix to me...
17:28:00 <dwalleck> You have to run nosetests tempest/tests
17:28:08 <jaypipes> donaldngo_hp: the kong tests, IIRC, were eventually going to go away, as they were replaced with identical tempest ones.
17:28:12 <dwalleck> Since the /tests directory is where everything lives
17:28:16 <rohitk> i had reported a similar bug last week, and it was fixed after dwalleck's missing files were brought in
17:28:37 <AntoniHP> I think it might be more useful to run larger groups of tests in parallel, rather than executing single tests in that manner - this would bring the best of both worlds: deterministic execution of series of actions that might depend on each other, and fast execution times - the equivalent of running a separate nosetests process for each file containing tests
17:28:53 <jaypipes> does the tempest/ directory have an __init__.py? if not, just add it and nosetests tempest/ should just work...
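[Sketch: a throwaway check, assuming a tempest/ checkout in the current directory, for directories nose would skip because they lack an __init__.py.]

    import os

    # Print any directory under tempest/ that is not an importable package.
    for dirpath, dirnames, filenames in os.walk("tempest"):
        if "__init__.py" not in filenames:
            print("missing __init__.py in %s" % dirpath)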
17:28:58 <dwalleck> donaldngo_hp: Are you setting your storm.conf with values for your environment?
17:29:21 <AntoniHP> however, even then it requires maintaining plenty of configuration details, which is a headache
17:29:41 <donaldngo_hp> dwalleck: I haven't touched the new tempest folder yet; I have old code that still uses kong
17:29:58 <jaypipes> AntoniHP: could you elaborate on what you mean with "rather than executing single tests in that manner"?
17:30:09 <donaldngo_hp> that's why I wanted to have a discussion, so that we are all aligned on how the tests will run and report
17:30:21 <dwalleck> donaldngo_hp: Then why did you say you couldn't get them to run? I'm confused
17:30:30 <jaypipes> donaldngo_hp: tempest is the suite we are using going forward. kong is a dead-end.
17:30:36 <AntoniHP> nosetests would search for tests, and for each test spawn a process that executes setup, test, teardown
17:30:57 <donaldngo_hp> well, for the kong tests out of the box I had to modify the endpoints to use SSL, since we go through an Apigee load balancer
17:31:23 <jaypipes> AntoniHP: but with --processes=4, you get parallelism in execution (not collection of tests, but execution is parallelized...)
17:31:44 <dwalleck> AntoniHP: I think that was always the plan. The short term goal though was on test coverage, so that's what I've been focused on
17:31:59 <jaypipes> dwalleck: ++
17:32:34 <jaypipes> donaldngo_hp: OK, so what's your plan? Are you going to update to the recent tempest stuff?
17:32:41 <AntoniHP> it will run every test in parallel, so every test has to be completely self-contained - usually a _sequence_ of tests is self-contained
17:32:54 <dwalleck> jaypipes: Would the correct action be blueprints for parallel execution and perhaps better logging output beyond the xunit plugin?
17:33:13 <AntoniHP> and can be run together with other _sequences_
17:33:14 <donaldngo_hp> jaypipes: yeah, I plan to do that. Right now we are using kong, which is working well for acceptance tests of the APIs
17:33:39 <dwalleck> AntoniHP: By sequences of tests, do you mean dependent tests?
17:33:40 <jaypipes> AntoniHP: hold off on that thought for a sec... let's finish this conversation about kong/tempest first.
17:33:43 <rohitk> dwalleck: ++ on logging, there is practically no logging in the current suite
17:33:52 <jaypipes> dwalleck: hold off on that... want to get a decision on one thing at a time...
17:34:09 <dwalleck> rohitk: I have logging implemented locally. I can push a patch for that this week
17:34:19 <jaypipes> donaldngo_hp: OK, good to hear. When do you plan to move off of Kong?
17:34:24 <rohitk> dwalleck: ok
17:34:31 <dwalleck> It's just a matter of wrapping logging around the rest client
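[Sketch: a minimal illustration of "wrapping logging around the rest client", not dwalleck's actual patch; the rest-client request() signature is an assumption.]

    import logging

    LOG = logging.getLogger("tempest.rest_client")

    class LoggingRestClient(object):
        """Delegates to an existing rest client and logs each request/response."""

        def __init__(self, wrapped):
            self.wrapped = wrapped

        def request(self, method, url, body=None, headers=None):
            LOG.info("Request: %s %s", method, url)
            LOG.debug("Request body: %s", body)
            resp, resp_body = self.wrapped.request(method, url,
                                                   body=body, headers=headers)
            LOG.info("Response status: %s", getattr(resp, "status", resp))
            LOG.debug("Response body: %s", resp_body)
            return resp, resp_body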
17:34:43 <jaypipes> donaldngo_hp: in other words, what needs to be fixed in tempest to make that transition happen? :)
17:35:02 <donaldngo_hp> jaypipes: hopefully as soon as i get it working on our env. Maybe the end of next week?
17:35:22 <donaldngo_hp> jaypipes: I will let the group know next meeting :)
17:35:26 <dwalleck> donaldngo_hp: And if you need any help or have any questions, please let me know
17:35:31 <jaypipes> donaldngo_hp: Can we aim for next Wednesday instead? Perhaps a follow-up with me, dwalleck and you to hammer out remaining issues?
17:35:58 <donaldngo_hp> jaypipes: I will let the group know next meeting :)
17:36:05 <jaypipes> hehe, ok :)
17:36:22 <jaypipes> alright, AntoniHP, let's discuss dependent tests now...
17:36:55 <jaypipes> I do know that the original idea was to have test cases be self-contained with no side-effects, making them parallelizable.
17:37:30 <jaypipes> AntoniHP: are you suggesting that requirement be lifted or loosened?
17:37:36 <AntoniHP> there are two requirements IMHO for tests to be self-contained: they must not depend on other tests
17:37:44 <AntoniHP> and they must not depend on the same resources
17:38:30 <AntoniHP> this is difficult to achieve, because sometimes a test would need a separate project
17:38:46 <AntoniHP> the other cost is that if a test requires a running VM, for example, it takes time to spin it up
17:39:40 <AntoniHP> so far in testing I have tried to group similar tests to reuse resources; if I have a test to PUT to /server/id/metadata, I can safely use the same /server/id/metadata to test reading the metadata
17:39:40 <nati2_> AntoniHP: How about the backfire approach? It uses @depend decorators
17:39:59 <dwalleck> Test dependencies will hurt us in the end
17:40:04 <jaypipes> nati2_: backfire uses DTest, not nose, just FYI. That would really be a complete rewrite of tempest.
17:40:16 <dwalleck> Then that hurts the ability for us to run tests in isolation
17:40:23 <jaypipes> dwalleck: agreed.
17:40:27 <nati2_> jaypipes: I got it.
17:40:57 <dwalleck> A solution our team has thought of and been working on is a test resource pool module, which holds a "cache" of test data, such as servers, etc
17:41:02 <rohitk> nati2_: proboscis had support for dependent test ordering
17:41:39 * jaypipes not entirely convinced that functional integration tests can be effectively run in parallel against the exact same deployed platform... seems to me that any assertions on platform state would need to be very carefully planned...
17:41:39 <dwalleck> And then you simply ask the cache for the data you want. If a clean object exists, you get it. If not, it creates it
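[Sketch: the resource pool idea described above, under my own naming; the factory callable stands in for whatever client call actually creates the resource.]

    class ResourcePool(object):
        """Caches clean test resources (servers, images, ...) for reuse."""

        def __init__(self, factory):
            self.factory = factory      # e.g. lambda: client.create_server("test-vm")
            self._available = []

        def acquire(self):
            """Return a clean cached resource, creating one if the cache is empty."""
            if self._available:
                return self._available.pop()
            return self.factory()

        def release(self, resource):
            """Return a still-clean resource to the cache for other tests to reuse."""
            self._available.append(resource)

    # Usage sketch:
    #   servers = ResourcePool(lambda: client.create_server("test-vm"))
    #   server = servers.acquire()
    #   ...assertions...
    #   servers.release(server)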
17:41:43 <AntoniHP> dwalleck: I was thinking about a similar solution
17:41:46 <nati2_> rohitk: proboscis is the library?
17:41:57 <nati2_> dwalleck: Cache looks good
17:41:58 <rohitk> nati2_: https://github.com/rackspace/python-proboscis
17:42:10 <dwalleck> I strongly believe we can get test parallelization working in a sane and clean manner
17:42:40 <jaypipes> dwalleck: but should that be our *focus* right now? I'm really not sure...
17:42:44 <AntoniHP> rohitk: proboscis is interesting, but a) would it work with multiprocess plugin in nose? b) would it support resource sharing?
17:43:00 <dwalleck> Well, I thought our original focus was test coverage, so that's where I've been going
17:43:22 <rohitk> AntoniHP: not sure, need to check
17:43:26 <jaypipes> I think this group could definitely come up with a good parallelization solution, but frankly, our charter right now is to increase the quantity and quality of the functional integration test cases and get those gating trunk. Speeding up the test suite is a secondary priority
17:43:41 <rohitk> jaypipes++
17:43:44 <AntoniHP> jaypipes++
17:43:47 <dwalleck> The basis of zodiac also takes into account the idea of parallel execution, so it's capable, just not ready
17:43:52 <dwalleck> jaypipes++
17:44:18 <Ravikumar_hp> jaypipes++
17:44:32 <nati2_> jaypipes++
17:44:40 <Ravikumar_hp> increasing coverage is the top priority
17:44:53 <jaypipes> OK, so I would ask that we backburner efforts to parallelize for right now (maybe we make parallelization a blueprint in the openstack-qa LP project and we have a hackathon to try and do it). But we focus in the next couple months on increasing the quality and quantity of tests in the *tempest* suite.
17:45:13 <nati2_> And to define the workflow for adding new integration tests
17:45:35 <jaypipes> nati2_: yes, dwalleck and I will get that documentation up on qa.openstack.org ASAP.
17:45:38 <Ravikumar_hp> Jaypipes: sounds good
17:45:44 <nati2_> jaypipes: Thanks!
17:45:48 <jaypipes> dwalleck, westmaas: the second priority is that list of missing tests...
17:45:52 <dwalleck> Right. I'll talk with jaypipes to understand how best to get docs in. I'll try to get something in today
17:45:58 <donaldngo_hp> agree
17:46:06 <anotherjesse> looking forward to docs on how we should write tests if we want to help coverage
17:46:11 <dwalleck> jaypipes: by missing you mean not ported in yet?
17:46:13 <jaypipes> the third priority is working with donaldngo_hp and AntoniHP to ensure tempest can replace kong in their project...
17:46:16 <jaypipes> dwalleck: yep
17:46:27 <jaypipes> anotherjesse: yes, we know...
17:46:51 <dwalleck> jaypipes: I'll submit all remaining tests today, in addition to making the name change to tempest. Sound fair?
17:46:51 <jaypipes> alright... let me make some action items here...
17:47:00 <donaldngo_hp> jaypipes++
17:47:16 <dwalleck> And then going forward, I'll make sure my team opens bugs for each new test they are working on
17:47:17 <jaypipes> #action jaypipes and dwalleck to get "How to Contribute an Integration Test to Tempest" done by EOD Thursday
17:47:57 <jaypipes> #action dwalleck and westmaas to provide list of areas that are not covered by integration tests *after dwalleck pushes all remaining tests his team has into tempest*
17:48:20 <jaypipes> #action jaypipes and dwalleck to schedule meeting with donaldngo_hp to discuss kong migration to tempest
17:50:04 <jaypipes> OK, let's open up the discussion.
17:50:08 <jaypipes> #topic open discussion
17:50:22 <rohitk> Nati's team at NTT wants to contribute an idea for negative tests, https://blueprints.launchpad.net/openstack-qa/+spec/stackmonkey
17:50:43 <rohitk> so we have come up with a supporting spec and high-level design
17:51:10 <jaypipes> rohitk: hmm, very interesting.. :)
17:51:14 <dwalleck> Nice! That looks like a very interesting concept
17:51:40 <nati2_> Yes, we would like to implement a very bad monkey for OpenStack
17:51:50 <rohitk> a gorilla would do too
17:51:51 <jaypipes> lol :)
17:52:13 <Ravikumar_hp> nati2: Is it unit tests? Chaos Monkey?
17:52:42 <nati2_> Ravikumar_hp: It is a library which could be used from tempest
17:53:02 <rohitk> Ravikumar_hp: these would be larger tests
17:53:04 <jaypipes> Ravikumar_hp: basically, it's a library that will kill off important processes and wreak havoc on a running system..
17:53:12 <rohitk> things that would mess up the infrastructure
17:53:19 <jaypipes> Ravikumar_hp: it would be a very bad monkey.
17:53:39 <Ravikumar_hp> not to run on production!!
17:53:42 <nati2_> Using the monkey, we can check error messages and logs
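[Sketch: purely illustrative of the kind of fault-injection test described here, not the proposed stackmonkey design; the service names and the pkill approach are assumptions.]

    import random
    import subprocess

    SERVICES = ["nova-compute", "nova-scheduler", "glance-api"]   # assumed names

    def kill_random_service():
        """Stop one OpenStack service at random and return its name."""
        victim = random.choice(SERVICES)
        subprocess.call(["pkill", "-f", victim])
        return victim

    # A negative test would then exercise the public API and assert that it
    # returns a clear error (and logs it) while the service is down, instead
    # of hanging or returning a misleading success.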
17:53:45 <jaypipes> hehe
17:53:48 <nati2_> Ravikumar_hp: :)
17:53:50 <rohitk> Ravikumar_hp: right :)
17:54:23 <jaypipes> nati2_, rohitk: well, I think you've gotten the official thumbs up on the project :)
17:54:24 <nati2_> Netflix uses it on their production systems
17:54:39 <nati2_> jaypipes: Thanks :)
17:54:39 <Ravikumar_hp> at HP, we have a lot of chaos monkey tests that we can automate
17:54:54 <nati2_> Ravikumar_hp: Ah really!?
17:55:01 <jaypipes> Ravikumar_hp: please do collaborate with rohitk and nati2_ on this project, then!
17:55:34 <nati2_> Ravikumar_hp: OK TTYL
17:56:30 <jaypipes> OK folks, anything else before I close the meeting?
17:57:54 <jaypipes> alrighty!
17:57:56 <jaypipes> #endmeeting