17:00:48 #startmeeting Rally
17:00:49 Meeting started Tue Jul 22 17:00:48 2014 UTC and is due to finish in 60 minutes. The chair is andreykurilin1. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:52 The meeting name has been set to 'rally'
17:01:04 hello
17:01:09 hi
17:01:18 hi
17:01:22 hi!
17:01:55 let's wait a little bit
17:03:09 here
17:03:11 ok, let's start
17:03:19 hi all! Boris (boris-42) is on vacation, so I'll lead this meeting today. Topics for discussion: news, updates and so on.
17:03:42 #topic rally news
17:03:50 #topic news
17:04:06 yesterday, Boris sent an email to the openstack-dev mailing list proposing to incubate Rally. I hope you will participate actively in this discussion :)
17:04:19 http://lists.openstack.org/pipermail/openstack-dev/2014-July/040813.html
17:04:59 Hi all o/
17:05:02 hi
17:05:22 k4n0, news = http://lists.openstack.org/pipermail/openstack-dev/2014-July/040813.html
17:05:40 I think this is excellent news, so let's hope the community agrees to incubate Rally. What are your thoughts about this news? :)
17:05:42 I will for sure
17:05:58 awesome!
17:06:02 I will put as much of Cisco's weight behind it as I can
17:06:20 I reached out to my contacts inside Cisco about this so we get it on the map
17:06:28 Nice!
17:06:32 Gamekiller77: cool!
17:06:42 RainbowBastion: news = http://lists.openstack.org/pipermail/openstack-dev/2014-July/040813.html
17:06:50 I think Rally + tools should be the de facto way to benchmark and understand OpenStack from the inside and outside
17:06:53 yeah, Rainbow is with me
17:06:55 I'm working with Gamekiller77
17:07:29 andreykurilin1, one thing we need clarity on is the context info for user data that Boris did
17:07:46 at Cisco we have a read-only LDAP and need to be able to use static users
17:07:55 if this works, we are going full bore with Rally testing
17:08:22 Specifically, what format the file he's reading from is in and how it's set up.
17:10:26 we reached out to Boris directly, but this info is great
17:10:36 I think Rally should be the core test and validation system
17:10:56 Gamekiller77: agreed
17:11:47 Since Rally is 'on the threshold' of being incubated, imo our main task is to make Rally more stable, so I propose to focus on test coverage and quality of tests. Any suggestions?
17:12:37 last week Sergey (rediskin) found that one of the Rally components (`rally verify`) was broken.
17:12:39 +1 on that, we need stricter coverage requirements on new patches
17:13:08 andreykurilin1 +1 - broader test coverage would be nice.
17:13:21 We don't know how long it was broken (several fixes have already merged). I was one of the reviewers of the last patch that changed `rally verify`, and I checked only the quality of the code, not how it works. This is the wrong way to review patches. :)
17:13:31 andreykurilin1, agreed, the unit test coverage is already in progress and we should start on functional coverage sooner rather than later
17:13:31 k4n0, yup, agreed
17:13:59 coolsvap: +1
17:14:06 #action Stricter unit test requirements on patches
17:14:15 Given this situation, all of us (the core team and contributors of Rally) should pay more attention to the quality and effectiveness of patches.
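The open question above (17:07:29–17:08:22) was the on-disk format of the static-users data for read-only LDAP setups. As a purely illustrative sketch, an "ExistingCloud"-style deployment config carrying pre-created users might look like the JSON built below; the key names (`users`, `tenant_name`, and so on) are assumptions for illustration, not the confirmed format from Boris's patch.

```python
import json

# Hypothetical sketch of a Rally "ExistingCloud" deployment config that lists
# pre-created (e.g. read-only LDAP) users so Rally does not try to create any.
# All key names below are assumptions, not a confirmed Rally format.
deployment = {
    "type": "ExistingCloud",
    "auth_url": "http://example.net:5000/v2.0/",
    "admin": {
        "username": "admin",
        "password": "secret",
        "tenant_name": "admin",
    },
    "users": [
        {"username": "ldap_user1", "password": "pw1", "tenant_name": "qa"},
        {"username": "ldap_user2", "password": "pw2", "tenant_name": "qa"},
    ],
}

# Dump the config as the JSON file a deployment would be created from.
print(json.dumps(deployment, indent=2))
```

With static users in place, a scenario run would draw from this fixed pool instead of creating and deleting tenants, which also sidesteps the cleanup failures mentioned later in the meeting.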
17:14:29 Agreed
17:14:39 that's my and RainbowBastion's main focus
17:14:54 we ran a test last week and it failed and did not clean up
17:15:06 per the LDAP problem I stated
17:15:32 andreykurilin1, yup, starting with functional testing would help catch command-line failures caused by patches
17:16:26 Gamekiller77, do you have any bugs filed about it?
17:16:57 I do not have a bug, but I'll have RainbowBastion file one
17:17:11 I figured it should go clean up any projects it created, but it did not
17:17:27 Gamekiller77, please file as many bugs as you want, we will triage them as well.
17:17:29 RainbowBastion, put that on your todo list for today, ok
17:17:37 Will do
17:17:56 Gamekiller77, cleanup is a pain. we should fix it
17:18:00 yeah, we are about to start some large-scale testing as soon as we get this LDAP workaround from Boris figured out
17:18:24 we see the code fix, but we are not sure how to format the data for LDAP static users
17:18:56 besides that, yes, we'll test the hell out of it; that's RainbowBastion's main job this summer as our intern in my department
17:19:27 Gamekiller77, great news, can you also start triaging new bugs reported in Rally?
17:19:48 that is key
17:20:13 Gamekiller77, sorry, I don't know anything about the data for LDAP. let's wait for Boris
17:20:28 just for simplicity, can you post the link to the bug system
17:20:49 andreykurilin1, it is really about context: how to provide a static context of users
17:20:58 that way Rally is not trying to create users when it tests
17:21:01 Gamekiller77, https://bugs.launchpad.net/rally
17:21:14 RainbowBastion, make sure to get that link
17:21:22 Already have it written down
17:21:43 cool, let's run that test again, get as much info as we can, and open a case, ok
17:21:50 sorry, bug
17:23:13 Gamekiller77, RainbowBastion: that would be great. If you report bugs, we can work on them together
17:24:34 RainbowBastion is here to help make this work for Cisco, so yeah, we'll work on what we can. I have limited time as I am on a fast track to get Icehouse up soon and my PXE system for RHEL7 is still down
17:24:57 cool
17:25:42 Let's change the topic to "updates". no objections?
17:26:00 not here
17:26:04 #topic updates
17:26:38 Rally has begun to accumulate old patches. Does anyone have updates about old patches? Also interested in the status of the newest patches :)
17:27:36 https://review.openstack.org/#/q/status:open+project:stackforge/rally,n,002e762f000193d1
17:28:11 andreykurilin1, k4n0 and I are working on unit test coverage; we have submitted a few patches and will continue working on the same
17:28:54 coolsvap, ok, thanks. anyone else?
17:28:56 also I have a blueprint for adding a create CLI command, which is WIP
17:29:17 andreykurilin1, I am working on getting more detailed timing data for context creation and teardown along with oanufriev
17:29:42 coolsvap, could you share a link to the bp?
17:29:46 andreykurilin1, will submit a patch by the end of Thursday
17:29:54 https://blueprints.launchpad.net/rally/+spec/collect-runtime-duration
17:30:12 I submitted a fix for an issue I found yesterday... regarding networks
17:31:34 k4n0, thanks
17:31:44 and I have a few patches... ['Fix side menu depth', 'Whole scenario time measurement', 'Updated rally gate scenarios', 'Periodic runner refactoring']
17:32:41 andreykurilin1, just a min
17:33:02 rook: great!
17:33:28 andreykurilin1, https://blueprints.launchpad.net/openstack/?searchtext=add-rally-create-cli-command
17:34:00 oanufriev: thanks for the updates
17:34:46 coolsvap, thanks
17:35:24 Does anyone have other topics for discussion?
17:35:36 Nothing from my side today
17:36:17 so I am new to Rally, but not to benchmarking.
17:36:41 I am curious about the direction of the scenarios - and the best way to extend a scenario to do more than just single-service-level benchmarking
17:37:24 I was talking with yingjun about this
17:37:54 rook, and what did he say? :)
17:38:15 11:44:24 yingjun: rook not now, there is a patch about this: https://review.openstack.org/#/c/103306/
17:38:32 however, is a mixed scenario something that Rally should be doing
17:38:48 or is it more of a component-level benchmark?
17:39:53 rook, I have no answer for you right now, sorry
17:40:44 okay
17:40:46 rook, I'll try to look into this question soon and answer it in #openstack-rally
17:40:47 One last question ;)
17:41:14 what is the question?
17:42:05 So, the current scenarios are focused on the messaging/db layer of OpenStack - is there any work to have scenarios where a workload is introduced into the guest, such as netperf between two guests
17:42:19 and maybe they already exist and I haven't seen them
17:43:29 rook, as far as I remember, Rally doesn't have such scenarios yet
17:43:43 Okay... double bummer today
17:44:25 rook, can you create a blueprint for it?
17:44:34 rook i am going to need to do this
17:44:40 Gamekiller77 me too
17:44:47 too, so maybe we should work together
17:44:47 andreykurilin1 sure
17:45:04 RainbowBastion, let's work with rook to figure some items out
17:45:23 rook, I'm looking at l2-to-l2 and l2-to-l3 workloads
17:45:42 rook, Gamekiller77: that would be great!
17:45:46 as I use blades, some of it is backplane and some is not
17:45:47 Gamekiller77 yup - I have scripts now that do this, but I wanted an upstream tool for it.
17:46:21 it could be done with Rally with an SSH engine or something. ok, cool, let's talk later
17:47:07 any other questions/topics for discussion?
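The "mixed scenario" and guest-workload ideas above come down to composing several timed steps (boot a server, attach a volume, run netperf inside the guest over SSH) into one benchmark iteration with per-step durations. The toy sketch below shows only that composition pattern; it does not use Rally's real plugin API, and every step name and sleep is a stand-in for an actual OpenStack call.

```python
import time

def timed_step(name, fn, durations):
    """Run one step of a scenario and record how long it took."""
    start = time.time()
    fn()
    durations[name] = time.time() - start

def mixed_scenario():
    """One iteration of a hypothetical cross-service benchmark.

    Each sleep stands in for a real call: boot a server via nova,
    attach a cinder volume, then run netperf in the guest over SSH.
    """
    durations = {}
    timed_step("boot_server", lambda: time.sleep(0.01), durations)
    timed_step("attach_volume", lambda: time.sleep(0.01), durations)
    timed_step("guest_netperf", lambda: time.sleep(0.01), durations)
    return durations

result = mixed_scenario()
for step, seconds in result.items():
    print("%s: %.3fs" % (step, seconds))
```

The useful property of this shape is that a "mixed" scenario still reports component-level numbers, so the single-service vs. cross-service question becomes one of reporting rather than an either/or design choice.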
17:47:23 yes, I have one more
17:47:41 something that is lacking: a test to figure out the IO load on the storage you're using
17:47:50 I have yet to see a metric marker for grabbing that data
17:47:57 it would have to be very open
17:48:09 but most storage vendors have APIs for it
17:48:33 so when you run a test case and spin things up, what was the load on your storage system, or in my case, cloud
17:48:47 as I run Ceph, but this is not limited to Ceph
17:49:09 RainbowBastion, let's talk about this more, ok, and see if we can't come up with a blueprint for this idea
17:49:22 I know our bosses are going to ask for this
17:50:40 did I lose you all
17:50:50 We are here Gamekiller77 :)
17:50:54 Gamekiller77: :)
17:51:30 No, I'm here, I was trying to recreate the bug we found yesterday.
17:51:44 Sorry for disappearing
17:52:24 does what I'm asking sound like something Rally should capture
17:52:38 be it local LVM
17:52:41 or NFS or Ceph
17:52:53 I think a storage load idea would rock
17:54:05 Gamekiller77: hm... I think you should create a bp for it.
17:54:14 I am
17:54:23 storage is one of my passions
17:54:33 :)
17:54:53 my gears are turning; I have high-level contacts at EMC and NetApp if I need help
17:55:04 also with Ceph
17:56:03 RainbowBastion, let's have this as our next meeting and whiteboard a bp
17:56:12 Gamekiller77: it would be great if you did :)
17:56:26 I want to give back to the community
17:56:34 and this is something I can do
17:56:50 :)
17:57:39 we must finish this meeting.
17:57:55 Thank you all :)
17:57:55 thank you all for the discussion!
17:58:28 Gamekiller77: if you want, we can continue the discussion at #openstack-rally
17:58:30 thanks all, I think this was great
17:58:34 yup
17:58:46 #endmeeting
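As a closing illustration of the storage-load idea discussed at the end of the meeting: one general way to capture IO load alongside a benchmark run is a background sampler that polls a backend metric at a fixed interval while the run is in flight. The sketch below is hypothetical and not an existing Rally feature; `collect_metric` is a stand-in for whatever a real backend exposes (Ceph stats, a vendor API, parsed iostat output).

```python
import threading
import time

class StorageLoadSampler:
    """Poll a storage-load metric in a background thread during a run.

    Hypothetical sketch: collect_metric is any zero-argument callable
    returning the current load figure for local LVM, NFS, Ceph, etc.
    """

    def __init__(self, collect_metric, interval=0.05):
        self.collect_metric = collect_metric
        self.interval = interval
        self.samples = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Sample immediately, then every `interval` seconds until stopped.
        while not self._stop.is_set():
            self.samples.append(self.collect_metric())
            self._stop.wait(self.interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()

# Usage: wrap the benchmark run; here a sleep stands in for the run itself
# and a constant stands in for a real backend metric.
with StorageLoadSampler(lambda: 42.0) as sampler:
    time.sleep(0.2)

print("collected %d samples" % len(sampler.samples))
```

Reporting the sampled series next to Rally's per-scenario timings would answer the question raised at 17:48:33: what the load on the storage system was while a given test case ran.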