17:00:22 <jroll> #startmeeting ironic
17:00:23 <openstack> Meeting started Mon Jan 11 17:00:22 2016 UTC and is due to finish in 60 minutes.  The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:26 <openstack> The meeting name has been set to 'ironic'
17:00:30 <rpioso> o/
17:00:30 <jroll> hi everyone
17:00:30 <TheJulia> o/
17:00:32 <jlvillal> \o/
17:00:34 <lucasagomes> hi there
17:00:35 <dtantsur> o/
17:00:35 <rloo> hi
17:00:37 <NobodyCam> o/
17:00:38 <mgould> o/
17:00:44 <jroll> as always, agenda is here:
17:00:45 <jroll> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
17:00:48 <krtaylor> o/
17:00:52 <jroll> nice and light today :)
17:01:05 <devananda> o/
17:01:13 <jroll> #topic announcements and reminders
17:01:26 <sambetts> o/
17:01:30 <jroll> #info IPA releases this week, 1.1.0 from master and 1.0.1 from stable/liberty
17:01:34 <[1]cdearborn> o/
17:01:41 <vdrok> o/
17:01:44 <cinerama> o/
17:02:09 <jroll> also, networks patches are very close, I'd love to get eyes on those
17:02:16 <jroll> anyone else have announcements?
17:02:24 <jlvillal> Gate?
17:02:34 <rloo> jroll: midcycle virtual whatever?
17:02:43 <jroll> ditto for manual cleaning, those have 2x +2
17:02:50 <jroll> rloo: let's talk in open discussion
17:02:58 <rloo> jroll: ok
17:02:59 <jroll> jlvillal: gate should be clear now afaik
17:03:03 <e0ne> jroll: I've got one from cinder side but I could hold it until open discussion
17:03:11 <jlvillal> Great!
17:03:18 <rloo> jroll: wasn't there a gate issue in stable/liberty or kilo?
17:03:38 <jroll> e0ne: yes pls
17:03:40 <dtantsur> gate is NOT clear yet
17:03:41 <lucasagomes> we still need to get in the patches that allow dib to be tested from source
17:03:45 <jroll> oh
17:03:51 <jroll> dtantsur: can you give more info please?
17:03:54 <dtantsur> but will be as soon as DIB pinning finally lands
17:03:57 <jroll> rloo: timeoutssssss
17:04:05 <dtantsur> the patch to g-r is in flight: https://review.openstack.org/#/c/265775/
17:04:08 <jroll> dtantsur: oh, got it, thanks
17:04:28 <krotscheck> o/
17:04:36 <dtantsur> after that please make sure we do land https://review.openstack.org/#/c/264736/ or we're broken again in the next release
17:04:40 <sergek> o/
17:04:57 <dtantsur> Kilo is essentially dead due to constant timeouts
17:04:58 <sambetts> jroll: tinyipa has been tested successfully in the gate, https://review.openstack.org/#/c/259089/ and https://review.openstack.org/#/c/234902/, the devstack lib patch needs work, but it seems quite fast now
17:05:10 <dtantsur> Liberty is more or less fine, at least we managed to land all the pending patches (iirc)
17:05:13 <jroll> cool, thanks sambetts
17:05:24 <lucasagomes> sambetts, w00t!
17:05:54 <rloo> dtantsur: so what does that mean wrt kilo -- nothing we can do so no future kilo releases?
17:06:04 <BadCub> morning folks
17:06:19 <dtantsur> rloo, someone has to invest a lot of time in figuring it out. nobody has so far :)
17:06:38 <rloo> dtantsur: gotcha. is there a bug against it?
17:06:47 <dtantsur> yep, lemme find
17:07:12 <dtantsur> rloo, https://bugs.launchpad.net/ironic/+bug/1531477
17:07:14 <openstack> Launchpad bug 1531477 in Ironic "Kilo dsvm gate is mostly broken (timeout when waiting for active state)" [High,Confirmed]
17:07:40 <rloo> thx dtantsur, i'll update etherpad if it isn't there
17:08:44 <jroll> anything else?
17:09:09 <jroll> #topic subteam status reports
17:09:19 <jroll> as always, these are here:
17:09:21 <jroll> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:09:41 <dtantsur> any updates on "Node filter API and claims endpoint"? nothing is written
17:10:52 <jroll> dtantsur: I've kind of put that on hold until the neutron stuff lands
17:10:59 <dtantsur> got it
17:11:16 <jroll> it won't land in nova this cycle, hoping to come back to it in a few weeks
17:11:55 <lucasagomes> jroll, should we hold the indexable json fields too?
17:12:00 * lucasagomes still needs to update the spec
17:12:23 <dtantsur> I would try not to hold everything until the next cycle :)
17:12:33 <jroll> lucasagomes: keep working on it
17:12:37 <rloo> we need to get the indexable json fields in because that is holding up a lot of stuff, like cleaning up how we deal with capabilities
17:12:43 <jroll> I just personally would like to help get the network stuff done faster
17:12:43 <dtantsur> ++
17:12:58 <lucasagomes> fair enuff
17:13:00 <jroll> and I think doing so is more valuable than a filter api without indexable json
17:13:04 <dtantsur> I was dealing with capabilities for tripleo work: it's terrible :(
17:13:32 <devananda> the neutron stuff is top priority for me right now, fwiw
17:13:52 <jroll> ya, if we let that slip again we've failed hard
17:13:55 <devananda> a lot of people have been putting a lot of work into it for 6 months, and it's very close to done
17:14:35 <devananda> the patch chain works in my local OVS/devstack testing, and so far, the code I've reviewed is (aside from nits) very good -- though I haven't reviewed it all yet
17:14:59 <dtantsur> sounds promising \o/
17:15:15 <devananda> there's a bit more to do on the CI side - we need to finish the switch to tempest-lib ASAP and rebase a few of the neutron integration patches
17:16:34 <dtantsur> yeah.. we also need the tempest plugin to *finally* finish versioning testing >_<
17:17:36 <sergek> So far I haven't managed to run the tempest plugin :(
17:17:53 <devananda> I think we could land all of the neutron work this week
17:18:05 <jroll> ++
17:18:20 <devananda> the CI is the main open question for me -- we don't have an experimental job in there yet
17:19:11 <jroll> devananda: let's both review the variables in the devstack patches today and if they're all good I can push on the project-config patch
17:19:18 <devananda> getting the tempest lib'ification complete will make testing the neutron integration significantly easier
17:19:20 <devananda> jroll: agreed
17:19:42 <rloo> wrt tempest lib'ification, what's the delay there? Do we have patches up?
17:20:19 <devananda> vsaienko: do you have time to do the patch rebasing today?
17:21:08 <lucasagomes> rloo, AFAIUI yes, we got the patches creating the base struct and copying the tempest code into ironic
17:21:12 <lucasagomes> and the project config one
17:21:17 <lucasagomes> jroll, anything else missing ?
17:21:52 <jroll> lucasagomes: I have a patch removing our tests from tempest tree
17:21:57 <jroll> and then it's done afaik
17:22:15 <dtantsur> devananda, it's probably pretty late for him..
17:22:16 <lucasagomes> good stuff
17:22:31 * jroll needs to update that one again for pep8
17:25:26 <rloo> shall we move on?
17:27:32 <mgould> sounds good to me
17:27:56 <lucasagomes> ++
17:27:58 <TheJulia> ++
17:28:00 <NobodyCam> :)
17:28:05 <jroll> ya
17:28:09 <jroll> #topic open discussion
17:28:21 <jroll> e0ne: you had something here?
17:28:43 <e0ne> small announcement from cinder
17:29:10 <e0ne> we've finally got the "Attach/detach volumes without Nova" spec merged
17:29:46 <e0ne> and we (cinder team) introduced a new cinderclient extension named "python-brick-cinderclient-ext" for local attach/detach features
17:29:53 <e0ne> we discussed it at the Summit
17:30:12 <e0ne> it could be used inside Ironic instances to attach cinder volumes
17:30:27 <e0ne> here is our short-term roadmap: http://lists.openstack.org/pipermail/openstack-dev/2016-January/083728.html
17:30:47 <e0ne> we need more feedback and feature requests from you
17:30:55 <devananda> e0ne: woot!
17:31:04 <jroll> \o/
17:31:19 <e0ne> I hope we'll merge the attach/detach stuff in Mitaka
17:31:33 <TheJulia> Awesome news
17:31:41 <e0ne> The Ironic team's input is very valuable for us
17:31:41 <NobodyCam> nice
17:31:59 <devananda> e0ne: is it possible to test this within devstack today?
17:32:14 <lucasagomes> thanks e0ne!
17:32:22 <e0ne> devananda: yes, I'll share detailed manual with video later this week
17:32:42 <e0ne> devananda: for now, you just need to install this extension from source
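To make e0ne's suggestion concrete, here is a rough sketch of the flow from inside an ironic instance, assuming the extension is installed from source alongside python-cinderclient. Only python-cinderclient calls are shown; the local attach/detach entry point belongs to python-brick-cinderclient-ext, whose exact interface isn't covered in this log, so that step is only indicated in a comment.

```python
# Rough sketch, run from inside an ironic instance. Credentials, endpoint and
# volume name are placeholders; the instance also needs network access to
# cinder-api and the storage backend (see the networking discussion below).
from cinderclient import client

cinder = client.Client('2', 'demo', 'secret', 'demo-project',
                       'http://keystone.example.com:5000/v2.0')

vol = cinder.volumes.create(size=10, name='data-volume')
# Once the volume is 'available', the local attach/detach on this host is done
# through python-brick-cinderclient-ext (e.g. a "local-attach"-style cinder CLI
# command -- the exact name is illustrative; see the extension's README and the
# roadmap link above).
```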
17:33:00 <e0ne> if it's needed, I can update you with our progress on a weekly basis
17:33:52 <devananda> e0ne: that's fantastic. once there are some docs on how to test / use it, things should accelerate on our end
17:34:15 <e0ne> devananda: I understand it. It's my top priority for now
17:34:53 <e0ne> also, we're thinking about functional tests for it, including attach/detach inside an ironic instance
17:34:57 <devananda> e0ne: right now, I'm focused on landing the neutron integration, so I won't have time to personally dig into the cinder integration right now
17:35:35 <e0ne> devananda: ok, we've got some time before M-3 milestone
17:35:40 <devananda> anyone else want to take the lead on the cinder work right now? if not, I'll dive into it once the neutron code is landed
17:36:08 <jroll> well
17:36:14 <jroll> let's be clear about cinder; there's two pieces
17:36:26 <jroll> 1) e0ne's work, which shouldn't require ironic changes
17:36:30 <jroll> 2) boot from volume
17:36:33 <lucasagomes> I think there are some people looking at boot from volume but I don't know if there's anyone working on attach/detach volumes
17:36:42 <e0ne> jroll: you're absolutely right
17:36:47 <devananda> right
17:37:26 <lucasagomes> jroll, +1... 1) will have changes in Ironic when we start attaching/detaching volumes via the BMC
17:37:39 <lucasagomes> but the brick agent is hardware agnostic
17:37:39 <e0ne> jroll: for #2 on your list, I need to take a closer look at https://blueprints.launchpad.net/ironic/+spec/cinder-integration
17:37:50 <jroll> e0ne: ditto
17:38:30 <devananda> lucasagomes: with brick agent, we could start building images that automount volumes on boot
17:38:45 <e0ne> lucasagomes: it's our first step in integration between projects
17:38:50 <devananda> and passing in attachment data via cloudinit
17:38:59 <e0ne> devananda: +1
17:39:06 <lucasagomes> devananda, e0ne sounds good
17:39:23 <lucasagomes> devananda, question there: is the volume information passed via configdrive?
17:39:23 <e0ne> devananda: to be clear, it's not a 'brick agent', it's a cinderclient CLI and Python API extension
17:39:25 <devananda> this isn't BMC dependent, and with [i]PXE booting, could also do diskless nodes
17:39:44 <lucasagomes> or is it after the instance is deployed? (wondering if the provision vs tenant network split would be a hurdle here)
17:39:54 <devananda> e0ne: thanks for the clarity
17:40:23 <devananda> lucasagomes: yes, network will be an issue. cinder vol will need to be visible on tenant network for any of this to work
17:40:28 <jroll> lucasagomes: yeah, the instance will need a route to the cinder-volume service
17:40:29 <e0ne> lucasagomes: you can use it after provisioning if you have SSH access
17:40:40 <lucasagomes> gotcha
17:40:47 <e0ne> jroll: actually, to cinder-api and the storage networks
17:41:04 <e0ne> I know that it's not secure enough for multi-tenant envs
17:41:09 <jroll> e0ne: yeah, I usually assume the user can access cinder-api :)
17:41:19 <e0ne> jroll: :)
17:41:37 <jroll> lucasagomes: for instance we have "servicenet" at rackspace - a 10.x provider network that has an ACL for onmetal -> cinder storage
17:42:15 <jroll> and our cinder backend does magic acl things per volume
17:42:29 <e0ne> jroll: what backends do you use?
17:42:50 <jroll> e0ne: dunno, it's something around LVM
17:42:53 <lucasagomes> jroll, I see, yeah... When the time comes we should probably document a reference architecture for those things
17:43:05 <jroll> forget what it's called
17:43:07 <jroll> yeah
17:43:13 <rloo> not to be a spoilsport, but we've spent > 10 min on this... can the rest be taken offline?
17:43:28 <jroll> sure?
17:43:34 <lucasagomes> rloo, yes
17:43:34 <jroll> do we have anything else to talk about?
17:43:37 <lucasagomes> do we have more open topics?
17:43:37 <e0ne> #link http://lists.openstack.org/pipermail/openstack-dev/2016-January/083728.html
17:43:37 <jroll> oh, midcycle
17:44:09 <jroll> so I'd like to do the midcycle in february sometime - does anyone have dates that work great or don't work at all for them?
17:44:18 <e0ne> sorry for the link duplication - now it will be easier to find it
17:44:19 <jroll> or should I just pick one
17:44:42 <lucasagomes> jroll, after fosdem/devconf would be ideal
17:44:44 <dtantsur> jroll, the whole week of the 1st-5th plus the 15th doesn't work for me
17:45:09 <e0ne> jroll: we've got the cinder midcycle on January 26-29. ironic integration questions are on our agenda
17:45:23 <jroll> dtantsur: the 15th or the week of the 15th?
17:45:30 <lucasagomes> dtantsur, are you going to the devconf right?
17:45:31 <dtantsur> jroll, no, only 15th itself
17:45:40 <lucasagomes> cause it's 5-7th of feb
17:45:43 <dtantsur> lucasagomes, yes + there are some meetings right before
17:45:44 <jroll> e0ne: I won't be able to make it on such short notice, I think. link to more details?
17:46:11 <NobodyCam> jroll: this is a virtual midcycle this go-round?
17:46:19 <jlvillal> Good chance I will be on vacation for two weeks starting 19-Feb-2016.
17:46:21 <e0ne> jroll: https://etherpad.openstack.org/p/mitaka-cinder-midcycle  - we will have online streaming and hangouts call
17:46:33 <jroll> e0ne: cool
17:46:35 <jroll> NobodyCam: yes
17:46:38 <NobodyCam> :)
17:46:59 <e0ne> so it will be great to have someone from ironic there:)
17:47:01 <TheJulia> e0ne: I'm local to the cinder midcycle location and can attend if you guys want a crazy ironic person to attend
17:47:05 <jroll> ok so feb 16-19 looks pretty ideal heh, or 8-12
17:47:24 <e0ne> TheJulia: cool
17:47:40 <lucasagomes> jroll, wfm, maybe propose both dates to the ML
17:47:48 <lucasagomes> to see who would be able to join
17:47:59 <jroll> lucasagomes: yeah, my first priority is cores
17:48:02 <e0ne> TheJulia: it will be helpful for sure
17:48:31 <TheJulia> e0ne: putting it on my calendar then
17:48:38 <NobodyCam> either should work for /me
17:48:44 <e0ne> TheJulia: thanks!
17:48:53 <jlvillal> e0ne, You would be lucky to get TheJulia to attend! :)
17:50:02 <rloo> oh, jroll, maybe mention the new rfe process, or send email about it?
17:50:13 <jroll> did we not send enough emails about it already?
17:50:19 <jroll> all I did was write it down
17:50:24 <rloo> didn't we just update the documentation about it?
17:50:41 <jroll> yeah
17:50:43 <jroll> ok I'll email
17:50:54 <rloo> thx jroll
17:50:58 <NobodyCam> :)
17:51:40 * jroll top posts like a boss
17:51:46 <jroll> anything else? 8 minutes left
17:52:18 <jlvillal> Regarding functional testing and how it gets run.
17:52:24 <jlvillal> There was an email thread started.
17:52:31 <jlvillal> I voted for being able to do it with tox.
17:52:38 <jlvillal> So it is easy to do for developers.
17:52:39 <sergek> yeah. It was me
17:52:46 <jlvillal> There were also options of devstack and tempest.
17:52:50 <jlvillal> Any thoughts?
17:53:02 <jlvillal> I was going to bring it up in the Testing/QA meeting on Wednesday.
17:53:06 <jroll> so
17:53:11 <jroll> those are two different things
17:53:13 <jroll> right?
17:53:22 <dtantsur> inspector does it with tox for developer's ease
17:53:23 <jroll> I haven't seen this thread afaik
17:53:27 <jroll> but like, do it with tox
17:53:40 <jroll> and there's a dependency of having ironic available
17:53:44 <NobodyCam> I would say tox
17:53:46 <NobodyCam> too
17:53:46 <jroll> so if you're in devstack, you have it
17:53:49 <jroll> and can still run tox
17:53:50 <jroll> right?
17:54:00 <jroll> (assuming this is client testing)
17:54:05 <sergek> I think we can run tox with DevStack as well like HEAT does
17:54:09 <dtantsur> jroll, what inspector does (and I think what jlvillal means) is that tox actually starts a limited inspector instance
17:54:13 <jlvillal> Yes. I am thinking for openstack/ironic
17:54:21 * lucasagomes gotta read the ML thread
17:54:29 <dtantsur> jroll, so it's fully standalone, does not need devstack/any other ironic installation
17:54:29 <jroll> oh, ironic, not ironicclient?
17:54:33 <rloo> jroll et al: this thread: http://lists.openstack.org/pipermail/openstack-dev/2016-January/083744.html
17:54:36 <jlvillal> dtantsur, +1  Run ironic-api, ironic-conductor, and rabbitmq.  To start
17:54:48 <lucasagomes> sergek, so set up devstack and then call a tox -e<cmd> that will run the tests?
17:54:48 <dtantsur> but I'm just telling what inspector does, not sure I'm fully aware of the context :)
17:55:11 <dtantsur> inspector-client does the same, it uses shared code in ironic_inspector.test.functional
17:55:11 <sergek> lucasagomes: yes
17:55:23 <jroll> jlvillal: that doesn't make it easy for developers, though, it depends on lots of things outside of pip
17:55:25 <jlvillal> I think if developers have to run devstack and/or tempest to do functional testing, then they likely won't
17:55:34 <sergek> the question was how to run ironic services - manually or with devstack
17:55:39 <dtantsur> https://github.com/openstack/python-ironic-inspector-client/blob/master/ironic_inspector_client/test/functional.py#L172-L174
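The linked inspector code is the real-world example of what dtantsur describes. As a generic, self-contained sketch of the same pattern -- the test process starts a small HTTP service itself and talks to it, so 'tox -efunc' needs no devstack -- something like the following could work; the WSGI app and names here are stand-ins, not ironic's or inspector's actual test harness.

```python
# Self-contained sketch of the "service started in-process by the test" pattern.
# The WSGI app below is a stand-in; a real ironic functional test would start
# ironic-api (and, per jlvillal, ironic-conductor) instead.
import json
import threading
import unittest
from urllib.request import urlopen
from wsgiref.simple_server import make_server


def fake_api(environ, start_response):
    # Stand-in for the service under test.
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [json.dumps({'versions': ['v1']}).encode()]


class FunctionalTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Port 0 lets the OS pick a free port so parallel runs don't collide.
        cls.server = make_server('127.0.0.1', 0, fake_api)
        cls.port = cls.server.server_port
        thread = threading.Thread(target=cls.server.serve_forever)
        thread.daemon = True
        thread.start()

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()

    def test_versions(self):
        body = urlopen('http://127.0.0.1:%d/' % self.port).read()
        self.assertEqual(['v1'], json.loads(body.decode())['versions'])


if __name__ == '__main__':
    unittest.main()
```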
17:55:40 <lucasagomes> it sounds like a good approach
17:55:42 <jroll> I have to install half of devstack just to start ironic anyway
17:55:57 <lucasagomes> sergek, when we test in gate, we will already have devstack right?
17:56:11 <lucasagomes> this approach sounds good IMO, and it's slim for ironic
17:56:30 <dtantsur> I would love to be able to run 'tox -efunc' on my laptop, if it's even possible..
17:56:33 * lucasagomes doesn't want to maintain yet another big script to set up everything
17:56:36 <sergek> I think we can have all options altogether
17:56:37 <jroll> I totally agree
17:56:43 <jroll> but I also don't want rabbit on my laptop
17:56:46 <jroll> is my point here
17:56:57 <dtantsur> jroll, oslo.msg has an in-memory implementation
17:57:00 <dtantsur> iirc
17:57:03 <jroll> oh true
17:57:14 <jroll> idk, need to investigate more I guess
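For reference on dtantsur's point: oslo.messaging can be pointed at its in-memory 'fake' driver via the transport URL, which is what would let a tox-driven functional test wire an RPC client and server together in one process with no RabbitMQ. A minimal sketch of that wiring (topic and server names are illustrative, not ironic's real ones):

```python
# Sketch only: select oslo.messaging's in-memory driver so no broker is needed.
from oslo_config import cfg
import oslo_messaging as messaging

conf = cfg.CONF
# 'fake://' picks the in-memory transport; messages never leave the process.
transport = messaging.get_transport(conf, url='fake://')

target = messaging.Target(topic='test-topic', server='test-host')
client = messaging.RPCClient(transport, target)
# An RPC server created with messaging.get_rpc_server() on the same transport
# and target would receive client.call()/client.cast() invocations in-memory.
```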
17:57:15 <TheJulia> 3 minutes
17:57:33 <jlvillal> I will discuss it more on Wednesday. And feel free to comment in email thread!
17:57:44 <gabriel> zeromq doesn't need a broker, might be an option jroll
17:58:09 <gabriel> afaik
17:58:11 <jroll> sure
17:58:14 <jroll> rabbit was one example
17:58:17 <sergek> I'd love to have all three options - manual service, devstack and tempest plugin
17:58:31 <sergek> alas I haven't managed to start the Tempest plugin yet
17:58:37 <dtantsur> sergek, I don't see value in manual service option tbh..
17:58:44 <sambetts> vagrant ?
17:58:48 <dtantsur> I'd like to call tox -efunc and just have it finish to the end..
17:59:07 * dtantsur sets a reminder to respond to the thread tomorrow morning
17:59:09 <jlvillal> +1 on tox -efunc
17:59:17 <sergek> dtantsur: agree. and the question is what happens under the hood
17:59:17 <jroll> ok we're out of time
17:59:23 <sambetts> https://www.vagrantup.com
17:59:24 <jroll> #endmeeting