19:00:03 <NobodyCam> #startmeeting Ironic
19:00:03 <NobodyCam> #chair devananda
19:00:03 <NobodyCam> Welcome everyone to the Ironic meeting.
19:00:03 <openstack> Meeting started Mon Mar 31 19:00:03 2014 UTC and is due to finish in 60 minutes.  The chair is NobodyCam. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:07 <openstack> The meeting name has been set to 'ironic'
19:00:08 <openstack> Current chairs: NobodyCam devananda
19:00:12 <NobodyCam> Of course the agenda can be found at:
19:00:12 <NobodyCam> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
19:00:18 <devananda> hi everyone!
19:00:23 <NobodyCam> #topic Greetings, roll-call and announcements
19:00:23 <NobodyCam> Who's here for the Ironic Meeting?
19:00:24 <JayF> o/
19:00:25 <jroll> \o
19:00:27 <matty_dubs> o\
19:00:29 <lucasagomes> :)
19:00:29 <ifarkas> o/
19:00:29 <agordeev2> o/
19:00:30 <mrda> \O
19:00:30 <linggao> o/
19:00:31 <NobodyCam> welcome all
19:00:32 <matty_dubs> Ugh, I really have to practice my waving
19:00:34 <Shrews> ahoy
19:00:46 <jroll> mrda: bloated head today?
19:00:51 <mrda> It appears so
19:00:55 <jroll> :P
19:00:57 <mrda> ;)
19:01:00 <aburaschi> hi!
19:01:00 <lucasagomes> hah
19:01:32 <NobodyCam> great to see everyone
19:01:43 <NobodyCam> jumping right into things
19:01:46 <NobodyCam> announcements:
19:02:05 <comstud> o/
19:02:09 <NobodyCam> RC will be cut tonight? <- is that correct, devananda?
19:02:09 <adam_g> o/
19:02:19 <devananda> NobodyCam: hopefully
19:02:26 <NobodyCam> :)
19:02:29 <devananda> we have 3 things still targeted to RC1
19:02:38 <devananda> https://bugs.launchpad.net/ironic/+bug/1298529
19:02:40 <uvirtbot> Launchpad bug 1298529 in ironic "Missing dependencies for third-party driver causes exception traceback" [High,In progress]
19:02:43 <devananda> https://bugs.launchpad.net/ironic/+bug/1297937
19:02:48 <uvirtbot> Launchpad bug 1297937 in ironic "Nova ironic driver ignores the "swap" field from flavor" [Critical,In progress]
19:02:50 <devananda> https://bugs.launchpad.net/ironic/+bug/1298645
19:02:51 <uvirtbot> Launchpad bug 1298645 in ironic "translated .po files do not contain any translation text" [High,In progress]
19:03:03 <devananda> first one needs some discussion
19:03:12 <devananda> second one has a patch in flight. any cores should go look at it
19:03:14 <NobodyCam> ahh .po files still seemed empty to me
19:03:17 <devananda> third one is pending a change to -infra
19:03:26 <lucasagomes> second one, there's a patch upstream
19:03:32 <lucasagomes> just need review
19:03:33 <devananda> https://review.openstack.org/#/c/84191/
19:04:06 <devananda> so
19:04:14 <devananda> are there any other serious, release blocking issues
19:04:20 <devananda> that folks are aware of and need to be raised today?
19:04:43 <lucasagomes> devananda, as part of the disk partitioning refactor there's a fix for a problem where currently we can't have a swap partition >= 1024 MB
19:04:55 <devananda> if not, I'll tag as soon as those 3 are closed (or more accurately, I'll notify ttx that we're ready)
19:05:09 <lucasagomes> my patches right now are a complete refactor, so they're hard to land in rc1
19:05:17 <lucasagomes> but I think it's worth making a quick fix for this problem
19:05:20 <devananda> lucasagomes: yea. I'm looking forward to that fix a lot, but it's too large for my comfort right now
19:05:20 <lucasagomes> in rc1
19:05:24 <NobodyCam> lucasagomes: the parted patches
19:05:30 <NobodyCam> they do work
19:05:31 <devananda> lucasagomes: how quick / small ?
19:05:44 <lucasagomes> devananda, a few lines; I would replace sfdisk with parted
19:06:01 <lucasagomes> but not encapsulate all that in a class wrapper
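A minimal sketch of the kind of quick fix being discussed here (replacing the sfdisk call with parted, without the full class-based refactor). This is not the actual patch: the function name, MiB-based layout, and sizes are illustrative assumptions; only Ironic's utils.execute() wrapper is taken from the discussion.

```python
# Hedged sketch only -- not the patch under review. Assumes ironic's
# common utils.execute() wrapper; make_partitions() and the layout here
# are illustrative.
from ironic.common import utils


def make_partitions(dev, root_mb, swap_mb):
    """Write a swap + root layout to dev using parted.

    parted takes explicit MiB offsets, so large swap sizes (>= 1024 MB)
    work where the old sfdisk invocation did not.
    """
    swap_end = 1 + swap_mb
    root_end = swap_end + root_mb
    utils.execute('parted', '-a', 'optimal', '-s', dev, '--',
                  'unit', 'MiB',
                  'mklabel', 'msdos',
                  'mkpart', 'primary', 'linux-swap', '1', str(swap_end),
                  'mkpart', 'primary', '', str(swap_end), str(root_end),
                  run_as_root=True, check_exit_code=[0])
```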
19:06:17 <lucasagomes> NobodyCam, yeah
19:06:39 <NobodyCam> devananda: this is the end (start): https://review.openstack.org/#/c/83788
19:06:43 <lucasagomes> devananda, or we could land the complete refactor, it fixes a bunch of other problems
19:06:46 <NobodyCam> #link https://review.openstack.org/#/c/83788
19:06:50 <lucasagomes> like getting rid of the swap partition
19:07:00 <lucasagomes> partitioning layout consistency (having root always be the last)
19:07:01 <lucasagomes> etc..
19:07:21 <devananda> ah, yea
19:07:31 <NobodyCam> I ran a test with it and devtest worked! I have not tested with devstack
19:07:37 <devananda> lucasagomes: that's fairly important for upgrade compat, no?
19:07:45 <lucasagomes> devananda, I think so
19:08:09 <NobodyCam> lucasagomes: is it needed for the effermal stuff too?
19:08:30 <lucasagomes> NobodyCam, ephemeral?
19:08:32 <devananda> NobodyCam: ephemeral support already landed
19:08:42 <lucasagomes> NobodyCam, yeah, it's there already
19:08:48 <NobodyCam> ephemeral :-p
19:08:57 <NobodyCam> ack
19:09:38 <devananda> lucasagomes: thoughts on proceeding with RC1 and considering that as a back-port if needed?
19:10:23 <NobodyCam> #topic Ironic RC1 milestone
19:10:27 <NobodyCam> setting topic
19:10:30 <devananda> thanks :)
19:10:37 <lucasagomes> devananda, right, so I can try tomorrow to make a quick patch just fixing the swap problem
19:10:43 <lucasagomes> which would be easier to land in rc1
19:10:45 <lucasagomes> few lines etc
19:10:48 <devananda> k
19:11:15 <lucasagomes> devananda, NobodyCam please give me an action on that
19:11:22 <devananda> lucasagomes: it's the first time this project has gone through an RC/release/etc, so there's a learning curve for me here
19:11:38 <NobodyCam> #action lucasagomes to add swap fix patch
19:11:48 <lucasagomes> devananda, yeah :) I think for all of us
19:12:00 <lucasagomes> NobodyCam, cheers :)
19:12:04 <NobodyCam> :)
19:12:10 <devananda> lucasagomes: whatever is in the Juno release will need to be able to handle an upgrade from Icehouse
19:12:31 <devananda> but that aside
19:12:45 <devananda> i'd rather not hold the RC up if it works well enough
19:12:50 <devananda> (for some value of "enough")
19:13:16 <devananda> ok, i'd like to move on if there aren't other pressing release issues?
19:13:23 <lucasagomes> yeah, right I will keep the upgrade thing in mind
19:13:28 <NobodyCam> devananda: I have never backported anything for OpenStack...
19:13:29 <lucasagomes> not from me
19:13:40 <NobodyCam> what headaches are we in for?
19:13:51 <devananda> #topic functional testing
19:14:01 <adam_g> NobodyCam, it's pretty easy once stable branches are cut
19:14:02 <devananda> NobodyCam: more like, "what headache am _I_ in for" :)
19:14:14 <NobodyCam> ahh :)
19:14:16 <NobodyCam> ok
19:14:31 <devananda> adam_g, NobodyCam: I think you guys have been doing a lot of the testing side of things
19:14:36 <devananda> anything to share?
19:14:46 <NobodyCam> notice: Ironic is working in DEVTEST
19:15:18 <lucasagomes> yay
19:15:19 <NobodyCam> just set "export USE_IRONIC=1"
19:15:20 <mrda> NobodyCam: woohoo!
19:15:22 <adam_g> tempest exercise to stress ironic+nova instance deployment is still up for review. once this merges we have a pretty good plan for getting a gate check setup that deploys using devtest and runs the subset of tempest tests we know will pass
19:15:45 <NobodyCam> looks like the wiki even got updated
19:15:49 <devananda> lifeless: hi! I saw an email from you last week that check-tripleo-ironic-seed-precise should be treated as voting now. But then I saw a bunch of failures for inconsequential changes.
19:16:02 <adam_g> slightly related--I spent some time over the weekend dusting off the compute driver unit tests and trying to make it easy for ironic devs to run them
19:16:03 <devananda> lifeless: can you clarify that?
19:16:31 <devananda> adam_g: awesome. have a link to the review(s)?
19:16:47 <adam_g> functional tempest scenario test @ https://review.openstack.org/#/c/81958/
19:17:28 <adam_g> unit test stuff @ https://review.openstack.org/#/c/84033/ https://review.openstack.org/#/c/84043/
19:17:55 <NobodyCam> devananda: I think this patch got ci working: https://review.openstack.org/#/c/83906/
19:18:07 <adam_g> i'd like to work with infra on getting those tests into the ironic CI pipeline, and possibly nova's as well
19:18:26 <devananda> adam_g: good stuff. I'll take a look after the meeting
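The tempest scenario mentioned above essentially boots a Nova instance backed by an Ironic node and waits for it to go ACTIVE. A rough sketch of that check, written with python-novaclient for illustration; the function name, auth handling, and timeouts are assumptions, not the tempest code under review.

```python
# Rough sketch of the end-to-end check the scenario performs; names,
# auth handling, and timeouts are illustrative assumptions.
import time

from novaclient.v1_1 import client as nova_client


def deploy_and_wait(auth, name, image, flavor, timeout=1800, poll=15):
    """Boot a Nova instance backed by an Ironic node and wait for ACTIVE."""
    nova = nova_client.Client(auth['user'], auth['password'],
                              auth['tenant'], auth['auth_url'])
    server = nova.servers.create(name=name, image=image, flavor=flavor)
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = nova.servers.get(server.id).status
        if status == 'ACTIVE':
            return server
        if status == 'ERROR':
            raise RuntimeError('bare metal deploy failed')
        time.sleep(poll)
    raise RuntimeError('timed out waiting for instance to become ACTIVE')
```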
19:19:06 <lifeless> devananda: right now tripleo-ci is down entirely, controller node (still a spof) died
19:20:06 <devananda> lifeless: yuck. thanks for the update
19:20:35 <NobodyCam> anyone have an update on: https://etherpad.openstack.org/p/jjWcLDThTK
19:20:41 <NobodyCam> bad paste
19:20:56 <NobodyCam> Status of Fedora on devstack with Ironic
19:21:39 <matty_dubs> NobodyCam: Is someone actively working on that, do you know?
19:21:59 <matty_dubs> Or, for that matter, is it broken?
19:21:59 <lucasagomes> matty_dubs, I think Dmitry from red hat is working on that
19:22:04 <lucasagomes> and also devtest
19:22:04 <ifarkas> NobodyCam, no update from me, I was testing devtest recently
19:22:08 <lucasagomes> idk much of the details tho
19:23:05 <NobodyCam> adam_g: you had an issue this morning where compute_driver did not get set correctly?
19:23:08 <lifeless> devananda: right now we're nearly back up
19:23:27 <lifeless> devananda: but nova-baremetal on the undercloud isn't deploying anymore, which is a bit of a problem
19:23:35 <lifeless> devananda: since we can't deploy the testenv machines
19:23:41 <devananda> lifeless: erm...
19:23:42 <adam_g> NobodyCam, yes--a bad change got merged to devstack that broke the config. https://review.openstack.org/#/c/84209/ should fix that
19:23:42 <devananda> :(
19:24:08 <NobodyCam> #liink https://review.openstack.org/#/c/84209
19:24:29 <devananda> yea, devstack is broken at the moment because it is setting the wrong virt_driver
19:24:33 <NobodyCam> adam_g: ack.. will keep an eye on it
19:25:17 <NobodyCam> anything else on testing?
19:25:29 <devananda> not from me
19:25:53 <devananda> wow, we might get > half the meeting for open discussion :)
19:25:59 <devananda> #topic open discussion
19:26:00 <Shrews> NobodyCam: #liink ?  ;)
19:26:00 <NobodyCam> #topic Food for Thought / Open Discussion
19:26:05 <devananda> heh
19:26:11 <NobodyCam> Shrews: fir?
19:26:47 <Shrews> NobodyCam: you're previous msg
19:26:50 <lucasagomes> heh if anyone has a topic maybe we should talk a bit about moving the credentials in Ironic to another service?
19:26:52 <Shrews> s/you're/your/
19:27:00 <lucasagomes> idk if you guys saw the maillist (will get the link)
19:27:02 <devananda> I see 3 things on the agenda for open discussion
19:27:06 <lucasagomes> but there's please of options
19:27:11 <NobodyCam> Shrews: ahh ... tags the link for the parse bot
19:27:15 <devananda> anyone here to talk about third party CI ?
19:27:21 <lucasagomes> s/please/plenty*
19:27:47 <vkozhukalov> I have a question about organizing common parts of code in ironic and ironic-python-agent
19:27:48 <NobodyCam> devananda: not that I have seen
19:28:05 <devananda> vkozhukalov: yea, I think that would be good
19:28:07 <lucasagomes> vkozhukalov, +1!!!
19:28:09 <vkozhukalov> are there any thought what is the best way to do that
19:28:18 <devananda> jroll, JayF: ^ ?
19:28:29 <jroll> hi
19:28:34 <lucasagomes> my only idea until now is having an ironic-common repository
19:28:37 <jroll> ^
19:28:39 <jroll> same for me
19:28:44 <NobodyCam> are we going to need something like an ironic-common repo?
19:28:45 <JayF> Well my first reaction is to tread lightly, because the agent is small and lightweight whereas that's not particularly a goal for ironic
19:28:50 <lucasagomes> for things that are really ironic-only
19:28:50 <devananda> so, then we're basically doing what oslo-incubator does
19:28:57 <devananda> and we should just use oslo-incubator
19:29:03 <lucasagomes> devananda, well, unless it's only ironic-related
19:29:05 <jroll> devananda: right, my other thought was upstreaming things there
19:29:14 <jroll> but yes, there's ironic-only things
19:29:18 <lucasagomes> devananda, e.g. having common/states.py in a common repo is beneficial
19:29:30 <lucasagomes> since the client/agent/ironic uses it
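For reference, common/states.py is essentially a flat module of state-name constants, which is why sharing it between the service, client, and agent keeps coming up. An illustrative sketch of that flavor of module, not its exact contents:

```python
# Illustrative only: the kind of thing ironic/common/states.py holds.
# The real module defines more states; these examples just show why it
# is tempting to share it between ironic, the client, and the agent.

ACTIVE = 'active'            # node has a deployed instance on it
DEPLOYING = 'deploying'      # deployment in progress
DEPLOYFAIL = 'deploy failed'
DELETED = 'deleted'
POWER_ON = 'power on'
POWER_OFF = 'power off'
ERROR = 'error'
```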
19:29:33 <comstud> oslo.ironic
19:29:33 <comstud> heh
19:29:37 <lucasagomes> heh :P
19:29:47 <NobodyCam> #link https://review.openstack.org/#/c/84209
19:29:51 <NobodyCam> :-p
19:30:25 <devananda> so using a common repo like that
19:30:36 <devananda> could, depending on how much we refactor, lead to a testing nightmare
19:30:48 <devananda> eg, if a change in ironic and ironic-common are interdependent
19:30:57 <jroll> hmm, yeah
19:31:13 <lucasagomes> :/ yeah indeed, the cross-project dependency thing is complicated
19:31:22 <devananda> are we confident that all the shared things are behind good, stable APIs already?
19:31:24 <lucasagomes> project repos*
19:31:25 <devananda> (I'm not)
19:31:25 <vkozhukalov> are we going to merge ironic and ironic-python-agent in the future? (I mean put them in the same repo)
19:31:32 <jroll> devananda: no, I'm not
19:31:44 <jroll> vkozhukalov: absolutely not
19:31:50 <jroll> vkozhukalov: imagine how large the ramdisk would be
19:31:52 <devananda> vkozhukalov: I don't see the benefit to that. if you do, please elaborate
19:31:54 <lucasagomes> vkozhukalov, I don't think so
19:31:56 <NobodyCam> devananda: I can make that claim at this point
19:32:16 <NobodyCam> (are behind good, stable APIs already)
19:32:22 <vkozhukalov> ok, got it, im just wondering
19:32:38 <jroll> as I see it, if the agent imports anything from ironic, we'll have to install all of the dependencies for ironic
19:33:14 <devananda> jroll: right. that's a concern. if we "apt-get install ironic" it's going to pull in all the project's deps, too
19:33:17 <devananda> which is bad
19:33:19 <devananda> for the ramdisk
19:33:26 <jroll> right
19:33:31 <NobodyCam> devananda: +
19:33:36 <devananda> and eventually we do want the ramdisk to be buildable from packages
19:33:38 <devananda> not just source :)
19:34:05 <devananda> (just being devil's advocate) what about moving some common things into the client?
19:34:21 <devananda> comstud: is there precedent for a server project requiring its own client?
19:34:30 <JayF> fwiw; I have a bp open for diskimage-builder to allow the addition of coreos+container as a target ramdisk https://blueprints.launchpad.net/diskimage-builder/+spec/coreos-support-via-containers
19:34:38 <comstud> devananda: Not that I know of
19:34:47 <comstud> nova did at one point long ago in the past
19:34:51 <comstud> for the old pre-cells (zones) code
19:34:55 <JayF> this is how we will be running the agent in our deployment, and I'm willing to integrate it with diskimage-builder should the triple-o guys be willing to accept it
19:35:05 <devananda> JayF: neat! I think that's silly, but I understand your interest in it
19:35:09 <vkozhukalov> devananda: they are not client parts (disk partitioning for example)
19:35:18 <devananda> vkozhukalov: indeed...
19:35:30 <devananda> comstud: yea. what does nova / novaclient do for shared info, such as state strings?
19:35:53 <comstud> they don't share code right now
19:36:10 <devananda> comstud: duplicate then?
19:36:15 <jroll> it's not clear to me how much code will be shared, by the way
19:36:20 <jroll> or how often it will change
19:36:26 <jroll> e.g. states enums
19:36:38 <comstud> devananda: I don't think novaclient does much with them other than display them as-is
19:36:41 <devananda> jroll: nor I. states are the only thing shared with the client right now afair
19:36:45 <comstud> so there's really not a lot to share there
19:36:46 <jroll> I would *like* to share code, but not if it's a ton of work
19:36:52 <devananda> comstud: ack
19:37:13 <devananda> jroll: it may be more complexity than its worth.
19:37:24 <jroll> yes, that's a better way to put it :)
19:37:28 <vkozhukalov> jroll: not very much, but since the ironic conductor does the same things the agent does, those parts will be common
19:37:29 <devananda> considering we don't need the PXE driver's partitioning to support the agent's partitioning
19:37:47 <devananda> so the code is there, and it would be /nice/ if it was shared, but I don't see it as a functional requirement
19:37:52 <jroll> vkozhukalov: what kind of common things are you imagining
19:37:53 <mrda> One small point is that the client would then inherit the release cadence of ironic itself, i.e. follow the string freeze timeline etc
19:37:55 <jroll> devananda: +1
19:38:03 <lucasagomes> devananda, +1
19:38:48 <vkozhukalov> jroll: at the moment disk partitioning, software raids, lvm etc.
19:38:50 <jroll> mrda: mmm.
19:39:10 <devananda> mrda: hm? I dont see that yet.
19:39:29 <devananda> mrda: but it's not something I actually think we should do -- was just tossing it out there
19:39:51 <vkozhukalov> devananda: im working on those parts right now
19:39:52 <mrda> if ironic depends upon the client, then to stabilise ironic, we first need to do likewise to the client
19:40:12 <mrda> devananda: sure, I understand it's just a proposal
19:40:20 <mrda> but it's a logical consequence
19:40:25 <devananda> mrda: ahhh. yes. I see
19:40:40 <devananda> mrda: and that's a very good reason not to do that :)
19:40:57 <jroll> so, my vote: let's put it off for now. if we start to see a large amount of code duplication between agent and ironic, we'll revisit.
19:40:59 <mrda> (sorry if I'm sounding vague, it's early morning here and I'm pre-coffee)
19:41:12 <NobodyCam> jroll: I would +1 that
19:41:16 <matty_dubs> jroll++
19:41:23 <JayF> ++
19:41:23 <devananda> vkozhukalov: "software raid, lvm etc" -- how would these be shared? PXE driver can't do this
19:41:29 <lucasagomes> jroll, sounds reasonable :) +1
19:41:30 <NobodyCam> I suspect it will end up with just the states file
19:41:53 <NobodyCam> but lets see
19:42:02 <devananda> vkozhukalov: management of >1 disk needs to be done via ironic-python-agent, not over iSCSI connection
19:42:22 <vkozhukalov> devananda: maybe it will do that in the future
19:42:37 <devananda> vkozhukalov: so actually, even the disk partitioning code that's in PXE (or proposed) will be far outgrown by the capabilities in IPA
19:42:40 <vkozhukalov> devananda: i see
19:43:11 <matty_dubs> Sorry, what is IPA?
19:43:18 <JoshNang> Ironic Python Agent
19:43:24 <matty_dubs> Thanks.
19:43:30 <matty_dubs> I'm apparently bad at picking up on acronyms.
19:43:31 <JayF> I see a day where the PXE driver is superseded by the agent being more likely than the PXE driver and IPA sharing a large quantity of code.
19:43:32 <JoshNang> or tasty beer, whichever you prefer
19:43:33 <devananda> I believe we should be aiming, in the mid-to-long term, to deprecate the current diskimagebuilder's deploy ramdisk
19:43:36 <NobodyCam> devananda: >1 disk needs to be done via ironic-python-agent only? what about vendor pass-thru for bms that can handle setting up RAIDs and such
19:43:46 <devananda> it's very rudimentary
19:43:58 <NobodyCam> s/bms/bmc's/
19:44:12 <devananda> NobodyCam: no, those are fine. which is why we need an abstraction layer (API) above partitioning. IOW, we need a driver interface for it.
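A hypothetical sketch of the kind of partitioning abstraction being described here; the class and method names are invented for illustration and are not an existing Ironic interface.

```python
# Hypothetical sketch of a partitioning driver interface; nothing here
# is an existing Ironic API.
import abc

import six


@six.add_metaclass(abc.ABCMeta)
class PartitioningInterface(object):
    """Abstraction over how a node's disks get laid out.

    One implementation might run parted on the conductor against the
    iSCSI-exposed disk (today's PXE flow); another might ask the in-band
    agent (IPA) to do it, or defer to a BMC that can configure RAID itself.
    """

    @abc.abstractmethod
    def create_partitions(self, task, node, layout):
        """Apply the requested layout to the node's disk(s)."""
```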
19:44:37 <NobodyCam> ack :)
19:44:43 <devananda> my point is, the current deploy ramdisk can't do those things -- and the functionality that we rely on today (initrd exposes /dev/sda over iSCSI)
19:44:47 <devananda> will never be able to support
19:44:53 <JayF> The agent itself doesn't do partitioning right now at all. It just assumes partitions are baked into whatever raw image it's writing
19:44:54 <devananda> most of the disk-related functionality that we want to add
19:44:55 <devananda> so
19:44:57 <JayF> so it's completely undone in any driver atm
19:45:02 <vkozhukalov> ok, let's keep them independent then as jroll suggested
19:45:03 <devananda> the agent won't share (much) code with the (current) PXE driver anyway
19:45:24 <devananda> it may fork lucasagomes' Partitioning patch series
19:45:37 <devananda> JayF: how do you guys feel about keeping those in sync for a little while?
19:45:41 <NobodyCam> the current deploy ramdisk is VERY RUDIMENTARY
19:45:44 <vkozhukalov> hoping to substitute pxe driver low level things with agent
19:45:51 <devananda> vkozhukalov: right.
19:46:14 <JayF> I haven't looked closely enough at lucasagomes's patch series to have a strong opinion
19:46:33 <devananda> JayF: ok, please do. the agent will need to support partitioning
19:46:40 <JayF> but the ability to partition at all is something that doesn't exist now and that partitioning series of patches adds, so I don't foresee a reason to redo it differently
19:46:48 <lucasagomes> JayF, right, it should work well stand alone, the only thing that I'm using there from ironic is the common/utils:execute()
19:46:57 <devananda> JayF: even if initially it's just as rudimentary as what the PXE driver does :)
19:46:58 <lucasagomes> which is easily replaceable
19:47:21 <JayF> lucasagomes: if you wanted to integrate them into the agent, we aren't in a feature freeze right now and would review those PRs real quick ;)
19:47:30 <jroll> devananda: keeping in sync with lucas' stuff? not a problem
19:47:38 <devananda> jroll: great
19:47:47 <devananda> all that sounds good to me
19:47:55 <NobodyCam> :)
19:48:15 <lucasagomes> JayF, heh ack :) cheers
19:48:58 <vkozhukalov> devananda: jroll I'm going to send a review request about disk partitioning in a couple of days (partly based on what lucasagomes did)
19:49:08 <lucasagomes> I want to get more familiar with the agent code, so it might be a good start (gotta find some time to do that tho)
19:49:08 <jroll> vkozhukalov: cool
19:49:41 <JayF> well probably only one of you should do it
19:49:42 <lucasagomes> vkozhukalov, nice, I will review that then :)
19:49:44 <devananda> any other topics for open discussion?
19:49:50 <Shrews> devananda: i have a small one
19:50:03 <lucasagomes> JayF, +1 vkozhukalov is already on that :)
19:50:05 <NobodyCam> is there a list (etherpad) for what needs to be done to the drivers / ironic to work with the agent?
19:50:17 <devananda> NobodyCam: it's a new driver
19:50:25 <JayF> JoshNang: do you have an MR up with the existing driver?
19:50:27 <devananda> NobodyCam: not any changes to existing drivers
19:50:45 <NobodyCam> so we will end up with duplicated code?
19:50:57 <JoshNang> JayF: nope. I was working on it and kept hitting the "use all my VM's memory during tests" bug and put it off
19:51:07 <JayF> It lives right now at https://github.com/rackerlabs/ironic-teeth-driver
19:51:07 <devananda> jroll: am I correct? IPA will be used by a new driver, not the PXE driver?
19:51:15 <jroll> devananda: yes, new driver
19:51:16 <JayF> and will be submitted as a merge request sometime soon / when Juno opened
19:51:20 <JayF> *opens
19:51:21 <JoshNang> I can probably get it up pretty quickly though. It only needs fairly small changes.
19:51:21 <devananda> great
19:51:24 <NobodyCam> ie disk partitioning
19:51:30 <jroll> devananda: https://github.com/rackerlabs/ironic-teeth-driver
19:51:45 <devananda> JayF: fwiw, i'm hoping to open juno tmw
19:51:53 <jroll> \o/
19:51:56 <Shrews> Both JoshNang and I have seen the unit tests randomly fail by taking up all available memory. I've also seen this happen in Jenkins at least once recently. Speaking with the infra folks, they suspect a race condition b/w tests, as Neutron recently had a similar problem.
19:52:10 <devananda> fascinating
19:52:24 <devananda> Shrews: bug # ?
19:52:28 <JoshNang> Shrews: thanks for looking into that
19:52:38 <Shrews> devananda: none yet. i can open one
19:53:01 <devananda> Shrews: please do. and if we see any failures in jenkins, please "recheck bug ####" on them so we can track it
19:53:11 <Shrews> devananda: ack
19:53:33 <JayF> JoshNang: ^ maybe it's worth throwing up half-done even with the tests issues to see if they fail similarly in upstream jenkins?
19:53:35 <NobodyCam> Shrews: please post the bug # in channel when you have one )
19:53:46 <devananda> I haven't seen that one yet, but it sounds painful, considering I don't run unit tests inside a VM
19:53:53 <Shrews> it occurs quite often for me, but tracking it down is a chore
19:54:09 <Shrews> devananda: it seemed to start occurring after I rebuilt my venv (tox -r)
19:54:13 <devananda> k
19:54:24 <Shrews> tox -r -epy27, specifically
19:54:32 <JoshNang> devananda: Shrews: same here, after a rebuild. Though i had it in py26 as well
19:54:49 * devananda builds a testenv in a VM
19:54:53 <Shrews> the jenkins failure i saw was py26. i don't have that on my test machine
19:54:53 <JayF> seems like we should get it reproduced in infra/jenkins, so all the logs are accessible to anyone on the bug/merge request to find
19:54:57 <Shrews> NobodyCam: ack
19:55:05 <devananda> JayF: ++
19:55:07 <NobodyCam> * 5 minute bell*
19:55:47 <devananda> lucasagomes: did you have something else?
19:55:55 <lucasagomes> not really
19:55:57 <lucasagomes> ah
19:56:04 <lucasagomes> 1 thing
19:56:20 <JayF> Would it be useful if I opened up the Vagrantfile we're using for Ironic in our environment? It's located in an internal closed repo, but I could migrate it to an open repo on GitHub if someone found it helpful.
19:56:23 <lucasagomes> for the credential stuff, do you think that we should have a design session slot for that?
19:56:28 <JayF> Makes it easy to spin up VMs for working on Ironic
19:56:56 <devananda> JayF: couldn't hurt. I'd just stick it up on github somewhere and link to it from the wiki
19:57:21 <JayF> Yep. I'll have to extricate it from some other stuff, but I can get it done this week for sure.
19:57:47 <NobodyCam> JayF: I am now using TripleO's seed VM for most of my testing
19:57:52 <NobodyCam> :-p
19:58:07 * devananda continues to use devstack
19:58:07 <jroll> heh
19:58:14 <jroll> so anything else today?
19:58:36 <NobodyCam> no from me
19:58:45 <NobodyCam> s/no/not/
19:59:04 <devananda> ok then
19:59:07 <devananda> thanks everyone!
19:59:10 <lucasagomes> well I will submit a design session talk about the v3 credential stuff, so we can approve/reject later as things go
19:59:10 <jroll> thank you!
19:59:10 <NobodyCam> great meeting everyone ... Thank you
19:59:19 <lucasagomes> thanks everybody
19:59:19 <NobodyCam> lucasagomes: ++
19:59:23 <vkozhukalov> thnx
19:59:25 <JoshNang> woo!
19:59:29 <ifarkas> thanks!
19:59:29 <mrda> thanks!
19:59:33 <matty_dubs> lucasagomes: Sounds good
19:59:41 <matty_dubs> See ya folks.
19:59:59 <NobodyCam> #endmeeting