17:00:09 <jlvillal> #startmeeting ironic_qa
17:00:10 <openstack> Meeting started Wed Apr 13 17:00:09 2016 UTC and is due to finish in 60 minutes.  The chair is jlvillal. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:14 <openstack> The meeting name has been set to 'ironic_qa'
17:00:54 <jlvillal> Hello everyone
17:00:56 <rpioso> \o
17:00:59 <sambetts> Hey jlvillal and all
17:01:26 <jlvillal> As always the agenda is at: https://wiki.openstack.org/wiki/Meetings/Ironic-QA
17:01:32 <jlvillal> #topic Announcements
17:02:02 <jlvillal> Not sure if krtaylor or mjturek are here to do their announcement
17:02:23 <jlvillal> #info MoltenIron initial drop - manager for a pool of baremetal test nodes (krtaylor/mjturek)  https://review.openstack.org/#/c/304683/
17:02:35 <krtaylor> o/
17:02:38 <jlvillal> I don't have anything else for announcements
17:02:40 <krtaylor> sorry I'm late
17:03:19 <jlvillal> No worries. Anything else you want to add in regards to MoltenIron?
17:03:30 <krtaylor> yes, we have an initial drop of the baremetal node manager (think nodepool for physical servers)
17:03:31 <jlvillal> Beside the need for more cooling in your data centers?
17:03:40 <krtaylor> hehheh
17:04:00 <krtaylor> we'd like feedback, it is related to some other proposed work
17:04:21 <krtaylor> several teams have expressed interest
17:04:32 <[1]cdearborn> \o
17:04:56 <krtaylor> we have it working for Power testing; we are testing every ironic patch with a full physical deployment
17:05:07 <krtaylor> using molten iron
17:05:28 <jlvillal> Cool. Looks interesting.
17:05:53 <rajinir> o/
17:05:53 <krtaylor> we have a few other enhancements coming
17:06:14 <krtaylor> but, that's about it, if anyone is interested, ping me or mjturek
17:06:16 <sambetts> my test rig uses a very similar system just built on top of a mysql db, where we select a row and mark a column as used
17:06:33 <krtaylor> sambetts, very similar to what we are doing
17:07:03 <krtaylor> sambetts, but abstracted with cli and service
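
For illustration, a minimal Python/SQLAlchemy sketch of the "select a row and mark it used" checkout pattern sambetts and krtaylor describe. The table layout, column names, and connection string are assumptions, not MoltenIron's actual schema:

    import sqlalchemy as sa

    engine = sa.create_engine("mysql://ci:secret@db.example.com/pool")  # assumed DSN
    meta = sa.MetaData()
    nodes = sa.Table(
        "nodes", meta,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("ipmi_address", sa.String(64)),
        sa.Column("in_use", sa.Boolean, default=False),
    )

    def checkout_node(conn):
        """Atomically claim one free node; return None if the pool is empty."""
        with conn.begin():
            row = conn.execute(
                nodes.select()
                .where(nodes.c.in_use == sa.false())
                .with_for_update()  # row lock so two jobs can't claim the same node
                .limit(1)
            ).fetchone()
            if row is None:
                return None
            conn.execute(
                nodes.update().where(nodes.c.id == row.id).values(in_use=True)
            )
            return row

The FOR UPDATE row lock is what keeps two concurrent test jobs from grabbing the same machine; releasing a node is the reverse update.
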
17:07:30 <jlvillal> Thanks a lot krtaylor
17:07:36 <jlvillal> Any other announcements?
17:07:59 <sambetts> krtaylor: looks cool :) need to make it work for drivers other than IPMI though :)
17:08:43 <jlvillal> Okay moving on.
17:08:43 <krtaylor> sambetts, hm, it should be fine with other drivers
17:08:55 <jlvillal> Or should I wait?
17:09:08 <krtaylor> nah, we can wait till open discussion
17:09:14 <sambetts> :) yup
17:09:16 <jlvillal> Okay :)
17:09:18 <krtaylor> onward
17:09:31 <jlvillal> #topic Grenade testing
17:10:54 <jlvillal> #info Grenade patch got merged into Ironic. Need to propose backport. Patch is only part of work. https://review.openstack.org/#/c/298967/
17:11:11 <jlvillal> I don't want people to think it is working...
17:11:37 <jlvillal> So I got a little busy with other things and didn't make much progress. Still stuck at node not getting IP address.
17:11:51 <jlvillal> But I have a good feeling I can work on it this week.
17:11:59 <jlvillal> Any questions before we move on?
17:12:26 <jlvillal> Okay moving on
17:12:31 <jlvillal> #topic Functional testing
17:12:47 <jlvillal> I don't think anyone has any updates. If you do, please speak up
17:13:18 <jlvillal> #info No updates
17:13:52 <jlvillal> #topic 3rd Party CI (krtaylor)
17:13:57 <krtaylor> sure, np
17:14:13 <krtaylor> no progress due to downstream responsibilities, but I will make sure the third-party CI status table is complete by next week
17:14:25 <krtaylor> #info No update
17:14:40 <sambetts> # info after a longer outage than we expected due to network issues, Cisco is back up and running
17:15:05 <jlvillal> sambetts: No space in '#info'
17:15:05 <krtaylor> I'll also prepare a CI status for the gate/qa design session at summit
17:15:15 <sambetts> #info after a longer outage than we expected due to network issues, Cisco is back up and running
17:15:28 <krtaylor> cool
17:15:49 <jlvillal> On the various CIs. Are you able to post what the correct 'recheck' command is to cause your CI to recheck?
17:16:01 <jlvillal> In the message posted to Gerrit.
17:16:03 <sambetts> Cisco has it in the message :)
17:16:09 <krtaylor> that should be included in the wiki page for the test system
17:16:22 <jlvillal> krtaylor: :( Why can't it be in the message...
17:16:25 <krtaylor> that is part of the infra requirements
17:16:30 <mjturek1> (sorry I'm late everybody)
17:16:31 <jlvillal> sambetts: It is?
17:16:36 <krtaylor> it should be
17:16:39 <krtaylor> yes
17:16:53 <jlvillal> sambetts: I was looking here: https://review.openstack.org/#/c/206244/
17:17:09 <krtaylor> mjturek1, we'll continue molten iron in open discussion
17:17:17 <jlvillal> krtaylor: Infra requirements say to NOT put it into the posted message?
17:17:19 <mjturek1> krtaylor: great thanks
17:17:26 <sambetts> jlvillal: that's because it was successful
17:17:37 <sambetts> jlvillal: if there is a build failed you see Build failed. For help on isolating this failure, please contact cisco-openstack-neutron-ci@cisco.com. To re-run, post a 'recheck cisco-ironic' comment.
17:17:59 <krtaylor> jlvillal, well technically "recheck" is the only thing that should be passed
17:18:00 <jlvillal> sambetts: Will 'recheck cisco-ironic' cause the Zuul jobs to re-run also?
17:18:19 <sambetts> jlvillal: you mean the OpenStack ones?
17:18:23 <jlvillal> Yes
17:19:05 <sambetts> Not sure, I've not noticed it doing that, but that's the style we use across all our OpenStack third-party CIs
17:19:13 <jlvillal> Testing completed on IBM PowerKVM platform. For rechecking only on the IBM PowerKVM CI, add a review comment with pkvm-recheck. Contact info: kvmpower@linux.vnet.ibm.com. For more information, see https://wiki.openstack.org/wiki/PowerKVM
17:19:31 <jlvillal> sambetts: I think anything that starts with 'recheck' will trigger the OpenStack CI too.
17:19:42 <jlvillal> anteaya may know for sure though
17:19:58 <krtaylor> it's here: http://docs.openstack.org/infra/system-config/third_party.html#requirements
17:20:14 <krtaylor> it does, on purpose
17:20:38 <krtaylor> technically all systems should recheck on "recheck" but it hasn't been enforced
17:20:50 <jlvillal> krtaylor: Right, I think 'recheck' should cause all CI jobs to recheck
17:21:00 <krtaylor> and now they all add their system name to trigger only theirs
17:21:08 <jlvillal> But it is also nice to be able to only recheck the one CI, like the IBM one does.
17:21:29 <krtaylor> that was the idea, but it was a big long debate
17:22:05 * krtaylor tries to find the thread
17:22:37 <jlvillal> sambetts: krtaylor Not of big importance. It was just something I had run into before. Thanks.
17:22:41 <krtaylor> jlvillal, is there some need to have this addressed for ironic?
17:22:59 <krtaylor> jlvillal, no worries, I'll find that thread and let you know
17:23:11 <jlvillal> I would like it if 'recheck' does trigger all of our CI jobs, if we should be doing that. But I am not the expert.
17:23:37 <sambetts> If that's the case I may need to talk to my team about it, because we use that style for neutron and other projects' third-party CIs too
17:23:46 <jlvillal> And would also like the ability to trigger a single CI to run. But I don't know what is the correct thing...
17:23:52 <krtaylor> yes - as per the requirements: "Recheck means recheck everything. A single recheck comment should re-trigger all testing systems."
17:24:05 <krtaylor> all should re-run on a "recheck"
17:24:15 <jlvillal> krtaylor: That is how I read the Wiki you linked.
17:24:32 <krtaylor> yes, that's what we agreed many moons ago
17:25:05 <jlvillal> sambetts: I'll let you discuss the Wiki with your team and the 'recheck' command.
17:25:22 <krtaylor> I had worded that paragraph differently initially, but it was discussed and clarified
17:25:39 <sambetts> thanks :)
17:25:44 <jlvillal> I guess not Wiki. Actual docs :)
17:25:58 <jlvillal> http://docs.openstack.org/infra/system-config/third_party.html#requirements
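
The requirement linked above boils down to: a comment containing a bare "recheck" must re-trigger every CI, while a system may additionally accept its own trigger (like IBM's "pkvm-recheck" quoted earlier). A hedged Python sketch of that matching logic; real systems implement this in their Gerrit event filters, and the token below is just the example from this meeting:

    import re

    SYSTEM_TOKEN = "pkvm-recheck"  # system-specific trigger; example from this meeting

    def should_retrigger(comment):
        # A comment line starting with "recheck" (with or without a system
        # name after it) must re-trigger all testing systems.
        if re.search(r"^\s*recheck\b", comment, re.MULTILINE | re.IGNORECASE):
            return True
        # The system-specific token re-triggers only this one CI.
        return SYSTEM_TOKEN in comment.lower()

Note that "recheck cisco-ironic" matches the first branch too, which is why jlvillal expects it to re-run the upstream jobs as well.
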
17:26:17 <jlvillal> Sorry to side-track your section krtaylor
17:26:21 <jlvillal> All yours now :)
17:26:36 <krtaylor> no worries, thats all I had
17:26:44 <jlvillal> Okay moving on.
17:26:52 <jlvillal> #topic Open Discussion
17:27:15 * jlvillal sits back and lets sambetts and krtaylor discuss merits of various tooling :)
17:27:24 <krtaylor> mjturek1, ^^^
17:27:36 <mjturek1> we contributed a tool this week called molteniron
17:27:40 * mjturek1 grabs link
17:27:46 <mmedvede> krtaylor: re. recheck - IBM PowerKVM does also rerun on "recheck"
17:27:49 <krtaylor> mjturek1, anything else you want to bring up about the functionality?
17:28:09 <krtaylor> mmedvede, yes, but not all systems do
17:28:27 <mjturek1> here it is: https://review.openstack.org/#/c/304683/
17:28:33 <sambetts> haha, well my concern is that at least right now molteniron focuses specifically on the *_ipmitool drivers, because it's looking for ipmi_address, ipmi_username/password etc in the node
17:29:02 <mjturek1> correct, obviously we'd be open to other drivers but that's our initial goal
17:29:31 <mjturek1> we concede that it's still pretty rough but plan on improving it
17:29:48 <krtaylor> but that shouldn't be too bad to generalize
17:29:58 <mjturek1> switching to sqlalchemy is in the works right now for example
17:30:23 <mjturek1> anyway, we've found it to be a useful tool for managing a pool of target nodes
17:30:44 <krtaylor> sambetts, we'd love help to make it better, something re-usable for other ironic ci systems
17:30:52 <sambetts> I think if the plan is to support multiple drivers then you may have to move from having a column for each node property to either a more complex relational DB or just storing a JSON representation of the node
17:32:04 <mjturek1> sambetts: yeah that's fair
17:32:26 <mjturek1> sambetts: we've designed it to meet our needs initially
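
One way to read sambetts's suggestion is to keep a single JSON blob per node instead of one column per IPMI field, so non-IPMI drivers fit without schema changes. A hedged SQLAlchemy sketch; class, table, and column names are hypothetical:

    import json
    from sqlalchemy import Column, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class PoolNode(Base):
        __tablename__ = "pool_nodes"
        id = Column(Integer, primary_key=True)
        name = Column(String(255), unique=True)
        in_use = Column(Integer, default=0)
        node_json = Column(Text)  # whole driver_info dict: ipmi_*, ucsm_*, etc.

        @property
        def node(self):
            return json.loads(self.node_json or "{}")

        @node.setter
        def node(self, value):
            self.node_json = json.dumps(value)
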
17:33:14 <devananda> I'm very happy to see that proposed to nodepool, fwiw
17:33:28 <sambetts> also how is this planned to interact with devstack? Right now we take advantage of the devstack ipmiinfo file, so all I need to pass to each of my test runs is which info file to use, but this would actually add an extra step for us because we'd have to allocate the node, build the file, then configure devstack with the new file
17:33:54 <sambetts> devananda: this is a layer below nodepool as far as I can tell, it's not providing slaves for Jenkins
17:34:03 <devananda> sambetts: indeed
17:34:08 <sambetts> it's providing nodes that Jenkins slaves can use
17:34:19 <devananda> but it is something nodepool could consume to run tests against bare metal
17:34:29 <mjturek1> sambetts: so we have a pretest hook and a cleanup hook in our job. The pretest hook calls to molten iron, we get the returned info, and then we amend the localrc
17:34:51 <mjturek1> sambetts: the cleanup script runs after the job where we then call molten iron to release the node
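
A rough sketch of that allocate / amend-localrc / release flow. The "molteniron" CLI name, its flags and JSON output, and the localrc variable names are all assumptions here; only the shape of the flow comes from mjturek1's description:

    import json
    import subprocess

    def pretest(localrc="devstack/localrc"):
        # Ask the pool manager for a free node (assumed to print JSON).
        out = subprocess.check_output(["molteniron", "allocate", "--json"])
        node = json.loads(out.decode())
        # Amend localrc with the allocated node's details.
        with open(localrc, "a") as f:
            f.write("IRONIC_IPMI_ADDRESS={}\n".format(node["ipmi_address"]))
            f.write("IRONIC_IPMI_USERNAME={}\n".format(node["ipmi_username"]))
        return node["id"]

    def cleanup(node_id):
        # Release the node back to the pool after the job finishes.
        subprocess.check_call(["molteniron", "release", str(node_id)])
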
17:34:52 <krtaylor> devananda, I think mordred was talking about something like that also?
17:34:56 <devananda> mjturek1: the docs in that patch don't have any usage information. perhaps that would help?
17:34:58 <mordred> what did I do?
17:35:12 <mjturek1> devananda: absolutely, I can add those in ASAP
17:35:26 <devananda> mordred: https://review.openstack.org/#/c/304683/2
17:36:14 <sambetts> I wonder if we could add something into ironic devstack so that it can just pull that info at the point it enrolls the nodes
17:36:17 <mordred> neat. I would not propose that to nodepool right now, as that's what the upcoming v3 work should facilitate
17:36:24 <mordred> but let's keep in touch and stuff
17:36:41 <mjturek1> sambetts: yeah actually a devstack plugin was on our todo list
17:36:57 <mjturek1> sambetts: the goal there being to allocate the node as late as possible
17:37:21 <devananda> that sounds better to me, too. pass the credentials for an ironic endpoint and a list of ironic node identifiers, and let the devstack plugin pull them in and enroll them
17:38:43 <mjturek1> devananda: sounds interesting yeah
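
devananda's idea might look roughly like the following python-ironicclient sketch, run from a devstack plugin once the endpoint is up; the credentials, addresses, and driver choice are placeholders:

    from ironicclient import client

    ironic = client.get_client(
        1,
        os_username="admin", os_password="secret",
        os_tenant_name="admin", os_auth_url="http://192.0.2.1:5000/v2.0",
    )

    # Enroll each pool node handed to the job by its identifiers/credentials.
    for bmc in [{"ipmi_address": "192.0.2.10",
                 "ipmi_username": "root", "ipmi_password": "calvin"}]:
        ironic.node.create(driver="agent_ipmitool", driver_info=bmc)
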
17:39:02 <krtaylor> right now it is a 1:1 mapping between the dsvm ironic controller and the target node, but we should describe how that works in a README
17:39:38 <krtaylor> mjturek1, these are all good comments
17:39:42 <jlvillal> krtaylor: mjturek1 Not sure, but you might want to use cookiecutter for MoltenIron
17:39:51 <mjturek1> jlvillal: not familiar
17:40:03 <mjturek1> but can look into it :)
17:40:10 <jlvillal> mjturek1: https://github.com/openstack-dev/cookiecutter
17:40:14 <devananda> hmm. reading that review a bit and it's not clear to me what this should be
17:40:22 <devananda> it looks like a new service
17:40:45 <mjturek1> devananda: sorry which review?
17:40:48 <sambetts> yeah, I see it as a new service run alongside nodepool
17:40:48 <krtaylor> jlvillal, we were always thinking this could be used as a template for other test systems that want to do baremetal testing
17:40:53 <devananda> yea
17:41:20 <devananda> mordred: when you say "that's what the upcoming v3 work should facilitate" what did you mean?
17:41:34 <krtaylor> jlvillal, were you thinking we should propose this as a new project?
17:42:23 <jlvillal> krtaylor: Not necessarily. But maybe some of that stuff could be part of your patch, e.g. a requirements.txt, as it appears third-party libraries are used.
17:42:40 <devananda> mjturek1: so, not to nitpick, but this Python violates PEP 8 rules everywhere ...
17:42:53 <jlvillal> +1 on that.
17:42:59 <devananda> part of cookiecutter is the base framework for doing things like unit and pep8 tests
17:42:59 <mjturek1> devananda: yep it's rough around the edges :-\
17:43:21 <krtaylor> push early and often  :)
17:43:25 <devananda> it reads like perl :)
17:43:43 <jlvillal> Ouch devananda ouch  :P
17:43:49 <mjturek1> eeek
17:44:04 <krtaylor> harsh
17:44:19 <devananda> sorry - I don't mean that in a bad way. it's just reminding me of perl a lot.
17:44:46 <mjturek1> heh, no offense taken :)
17:44:58 <mjturek1> anyway, we'll absolutely look into cookiecutter
17:45:02 <krtaylor> well, part of that would have been caught if the third-party-ci repo had pep8 testing, but it skips all gate tests for now
17:45:11 <mjturek1> sounds like it'd be a big help
17:45:16 <krtaylor> mjturek1, agreed, good feedback
17:45:17 <jlvillal> yeah, I see the jenkins job.  "noop"
17:45:25 <devananda> hah. fair.
17:45:28 <mjturek1> passed with flying colors
17:45:35 <krtaylor> yep, I had to set it up that way
17:45:43 <krtaylor> due to all the different content there
17:45:46 <devananda> but yea, the more I read this, the more it looks like it wants to be a new service. except I'm not sure what the purpose of that service would be.
17:45:59 <devananda> check out resources from a pool?
17:46:04 <devananda> *bare metal
17:46:09 <mjturek1> devananda: correct
17:46:14 <devananda> but I think that's what nodepool v3 is going to provide
17:46:16 <devananda> mordred: ^ ?
17:46:17 <krtaylor> yes, that is the functionality of molten iron
17:46:31 <krtaylor> hm, want to learn more on v3 then
17:46:37 <mjturek1> absolutely
17:46:52 <mjturek1> this tool spawned out of necessity
17:46:55 <devananda> yea
17:46:57 <devananda> https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#nodepool
17:47:09 <mjturek1> but that necessity might be going away
17:47:18 <devananda> that is exactly what nodepool v3 is describing on that spec
17:47:22 <devananda> awesome
17:47:39 * krtaylor tags for reading
17:47:44 <devananda> "Nodepool should also allow the specification of static inventory of non-dynamic nodes. These may be nodes that are running on real hardware, for instance."
17:48:49 <mjturek1> devananda: yeah will definitely read to see if it will meet our needs
17:48:49 <sambetts> That sounds awesome! Although in our case, even with that system, we'd still need to have our parameters DB like we have today because, along with the BM machine to use, we also provide network information to prevent our tests from stepping on each other
17:49:22 <sambetts> e.g. the range of IPs to use etc
17:49:38 <devananda> until nodepool v3 can meet these needs, this still seems quite useful, and yea, looks like it's a separate project
17:49:49 <devananda> thankfully those are easy to create :)
17:49:53 <mjturek1> :)
17:50:00 <krtaylor> true :)
17:51:41 <jlvillal> Anything else to discuss?
17:51:52 <sambetts> Nothing from me
17:52:03 <mjturek1> nothing here, thanks for the initial feedback
17:52:10 <jlvillal> Okay, going to end the meeting. Thanks everyone!
17:52:16 <krtaylor> thanks!
17:52:21 <jlvillal> #endmeeting