14:02:04 <dprince> #startmeeting tripleo
14:02:05 <openstack> Meeting started Tue Nov 10 14:02:04 2015 UTC and is due to finish in 60 minutes.  The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:08 <openstack> The meeting name has been set to 'tripleo'
14:02:46 <tzumainn> hiya!
14:02:48 <jtomasek> hi o/
14:02:49 <akrivoka> hiya \o
14:02:51 <adarazs> hi there
14:02:56 <marios> o/
14:03:00 <shardy> o/
14:03:01 <tremble> \o
14:03:20 <slagle> \o/
14:03:22 <eggmaster> o/
14:03:27 <trown> o/
14:03:36 <dprince> hi all \o/
14:03:44 <florianf> hi o/
14:03:52 <dprince> #topic agenda
14:03:52 <dprince> * bugs
14:03:52 <dprince> * Projects releases or stable backports
14:03:52 <dprince> * CI
14:03:52 <dprince> * Specs
14:03:55 <dprince> * Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:03:55 <jaosorior> o/
14:03:58 <dprince> * one off agenda items
14:04:00 <dprince> * open discussion
14:05:23 <dprince> I see we have one-off agenda items to discuss the tripleo-api or tuskar v3 naming, and a suggestion to stop using the ironic OSC namespaces too. So let's leave time for those at the end
14:06:21 <dprince> tzumainn: presumably you added the first one-off agenda item above?
14:06:56 <tzumainn> dprince, that is correct
14:07:06 <tzumainn> I am sure it is a topic that everyone will love to talk about
14:07:11 <tzumainn> so you're all very welcome
14:07:41 <dprince> tzumainn: cool, thanks
14:07:48 <dprince> #topic bugs
14:08:13 <dtantsur> o/
14:08:21 <dprince> one bug I'll highlight is this one derekh is working on
14:08:24 <dprince> https://bugs.launchpad.net/tripleo/+bug/1513879
14:08:24 <openstack> Launchpad bug 1513879 in OpenStack Compute (nova) "NeutronClientException: 404 Not Found" [High,In progress] - Assigned to Derek Higgins (derekh)
14:08:59 <dprince> if anyone tries to use recent RDO trunk packages you'll likely hit that... and it is a bit cryptic
14:10:22 <dprince> we really need to bump our DELOREAN URL again soon. Any other bugs/blockers that we are waiting on with regard to that?
14:10:51 <trown> ya the current-tripleo is super old
14:11:26 <trown> derekh: is there anything I can help with in terms of automating that... we are almost there on the rdo-manager side of automating current-passed-ci
14:12:35 <derekh> trown: we can't bump current-tripleo until the fix for that is merged https://review.openstack.org/#/c/242158/
14:13:05 <derekh> trown: we now have a periodic job that runs nightly to bump that symlink
14:13:12 <dprince> derekh: got it, are there any other blockers we are waiting on?
14:13:18 <derekh> trown: it's currently failing but I'm working on it
14:13:28 <trown> derekh: oh sweet. so once that passes we are good to go going forward
14:13:36 <dprince> derekh: will ping in #openstack-nova today about this too...
14:13:39 <trown> awesomesauce
14:13:41 <derekh> trown yup
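For context on the symlink bump derekh mentions, a minimal sketch of what such a nightly job might do — the repo root, hash file, and link name below are illustrative assumptions, not the actual job definition:

    #!/bin/bash
    # Sketch: repoint the current-tripleo symlink at the newest delorean
    # repo hash that passed CI. Paths are assumptions for illustration only.
    set -eu
    REPO_ROOT=/var/www/html/delorean                      # hypothetical web root
    PASSED_HASH=$(cat "${REPO_ROOT}/latest-passed-ci")    # hash recorded by the CI run
    ln -sfn "${REPO_ROOT}/${PASSED_HASH}" "${REPO_ROOT}/current-tripleo"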
14:14:07 <dprince> lets move on then
14:14:09 <derekh> You'll also see a patch from me later to bump our jenkins slaves to F22
14:14:32 <dprince> derekh: cool
14:14:38 <derekh> F21 is about to go EOL; anyway, I'll link it when it's ready
14:14:46 <dprince> #topic Projects releases or stable backports
14:15:10 <dprince> derekh: unrelated to topic, but on that note would going straight to F23 make sense?
14:15:29 <shardy> So re stable backports, I think we need to hold off until https://review.openstack.org/#/c/240938/ lands, which puts CI in place
14:15:31 <dprince> derekh: (python3 was meant to be the default there though... so not sure about it)
14:15:46 <shardy> that's taking a while to land, but is nearly there thanks to help from derekh
14:15:49 <trown> I added to https://etherpad.openstack.org/p/tripleo-stable-branches ... we need a .gitreview patch for each of our stable branches, I did python-tripleoclient already
14:16:08 <dprince> shardy: agree, with getting the CI job in place first
14:16:21 <derekh> dprince: I haven't tried it to be honest, maybe
14:16:23 <shardy> trown: ah, yeah, should be safe to land those pre-CI, I'll propose the rest later
14:17:14 <dprince> trown: sounds good, ++ on getting those in
14:17:58 <slagle> i pushed one for tht yesterday
14:18:04 <slagle> added to etherpad :)
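As background for the .gitreview patches trown and slagle mention, a sketch of the stable-branch change — the only branch-specific line is defaultbranch; the host and project values are the conventional ones for the period, recalled rather than copied from the actual patch:

    # .gitreview on stable/liberty of python-tripleoclient (sketch)
    [gerrit]
    host=review.openstack.org
    port=29418
    project=openstack/python-tripleoclient.git
    defaultbranch=stable/liberty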
14:18:11 <dprince> shardy: will try to ask about landing your updates patch in #openstack-infra today too
14:18:59 <dprince> #topic CI
14:19:02 <shardy> dprince: thanks, I have asked a couple of times already but got ignored
14:19:23 <dprince> other than needing to bump DELOREAN_URL... any CI updates this week?
14:19:31 <dprince> derekh: did we figure out the cause of the slow down?
14:19:42 <shardy> dprince: to clarify, that only adds support for stable/liberty CI, there is no coverage of updates yet
14:19:45 <shardy> that comes next
14:20:00 <dprince> shardy: sounds good
14:20:22 <derekh> dprince: I was going to ask about that, I haven't looked into it
14:20:50 <dprince> derekh: me neither
14:21:00 <tremble> shardy: Do we have any idea how much more work the updates coverage will be?
14:21:01 <derekh> Jiri added a patch to bubble up more info https://review.openstack.org/#/c/242542/
14:21:14 <trown> derekh: dprince, I looked at that patch... and it never had the slow down
14:21:27 <trown> and it seems to have gone away looking at the CI page
14:21:42 <derekh> trown: gate-tripleo-ci-f21-ha SUCCESS in 1h 56m 01s
14:22:04 <dprince> trown: hmmm. perhaps just some environmental changes then? network related perhaps
14:22:04 <shardy> tremble: I've started looking at it, as has derekh, not too bad I think, but I really wanted to get stable CI in first so we have a working baseline
14:22:10 <derekh> 2 weeks ago we were getting jobs finishing after around 80 minutes or so
14:23:08 <derekh> hmm. things do seem to have gotten a bit faster again, I'm going to see if I can produce a graph
14:23:17 <gfidente> I was wondering if we shouldn't run the upgrade job also for the changes submitted to the stable branch?
14:23:18 <derekh> it might make things more clear
14:23:19 <trown> ah ok, there were non-ha jobs taking 120min, so I was using that as the bad benchmark
14:23:25 <trown> now they are 90-100
14:23:34 <dprince> derekh: I even saw one of my patches go as low as 66 minutes (the one where I removed the building of some ramdisks https://review.openstack.org/#/c/233449/)
14:24:24 <shardy> tremble: FYI I started brain-dumping into https://etherpad.openstack.org/p/tripleo-update-testing
14:24:49 <shardy> everyone please add your thoughts there about what we need to test, then we can start with the simplest cases and add them incrementally
14:24:57 <trown> derekh: dprince, I think we can lower heat timeout even more so failed jobs do not take so long
14:25:16 <derekh> dprince: ya, we've had a couple in the 60s (a rare few)
14:25:39 <derekh> I'm going to create a trello card for it; if anybody wants to take it, fire ahead
14:25:46 <derekh> if not I'll get to it at some stage
14:26:00 <dprince> derekh: okay, thanks.
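On trown's suggestion above about lowering the heat timeout so failed jobs fail faster, a hedged example — openstack overcloud deploy accepts a timeout in minutes, but the value shown is illustrative, not the one CI actually used:

    # Sketch: cap the overcloud deployment so a wedged stack fails fast
    # instead of consuming the whole CI job (60 is an illustrative value).
    openstack overcloud deploy --templates --timeout 60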
14:26:09 <dprince> lets move on
14:26:14 <dprince> #topic Specs
14:27:21 <derekh> https://trello.com/c/jnoMe1Vz/47-find-where-ci-jobs-are-spending-their-time
14:27:50 <dprince> any updates on specs this week?
14:28:21 <dprince> FWIW my goal is to have a couple of specs posted soon for "Composable roles" and also for "splitting the stack"
14:28:33 <dprince> ideas we talked about at the summit...
14:29:08 * gfidente would like reviews on the external lb spec and the relative submissions ;)
14:29:28 <trown> derekh, dprince: slightly to topic, do we do specs for big CI changes?
14:30:13 <dprince> trown: good question. I guess I'd leave it up to you. If you think it would be useful then perhaps...
14:30:14 <derekh> trown: we haven't traditionally, but that doesn't mean we shouldn't
14:30:30 <trown> at summit we talked a bit about an "undercloud appliance" created during the periodic job that updates the current-tripleo link... I have something like that working in rdo-manager, but it would be a pretty big change worthy of a spec I think
14:30:38 * derekh would review it
14:30:56 <trown> ok, I will put up a spec for it
14:31:04 <dprince> trown: ++ sounds good
14:31:08 <trown> there are still some tricky bits to work out
14:33:04 <dprince> gfidente: okay, we should try to push on your LB review here too https://review.openstack.org/#/c/233634/
14:33:28 <dprince> bnemec: ^^ are you still -1 on that with the feedback?
14:35:00 <dprince> perhaps we can discuss other specs later in #tripleo too
14:35:12 <dprince> #topic Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:35:56 <dprince> any review updates
14:36:07 * dprince has been slow on reviews after the summit
14:36:40 <tremble> dprince: jaosorior and I would still appreciate eyes on the tls_enablement CRs
14:37:03 <dprince> tremble: gotcha
14:37:20 <jtomasek> I'd love to get feedback on this: https://review.openstack.org/#/c/242439/
14:37:42 <jtomasek> (not saying I am not getting enough, though)
14:39:14 <dprince> okay, any other reviews that need highlighting please feel free to add them to the etherpad
14:39:28 <dprince> we should probably groom the etherpad a bit as it grows over time too...
14:39:39 <dprince> otherwise it may lose its purpose
14:40:37 <d0ugal> +1
14:40:47 <dprince> #topic one off agenda items
14:41:13 <dprince> okay, tzumainn would you like to present your topic?
14:41:22 <tzumainn> sure!
14:41:45 <tzumainn> so at summit, there was tentative agreement to call the new TripleO API tuskar v3, for various reasons
14:42:05 <tzumainn> but objections have been raised in the spec review - https://review.openstack.org/#/c/230432/7 - and I find the arguments by jtomasek and bnemec compelling
14:42:07 <jrist> drumroll
14:42:19 <tzumainn> so I'd like to see if it'd be okay to name it tripleo-api after all, and have it live in tripleo-common
14:42:36 <tzumainn> and then never have another naming discussion ever for the rest of my long long life ever again forever
14:42:42 <d0ugal> lol
14:42:51 <dtantsur> lol +1
14:42:54 <jtomasek> +1
14:42:55 <shardy> having an API in a common library sounds odd
14:43:05 <shardy> but otherwise +1
14:43:23 <shardy> I assumed we'd have a new tripleo-api repo which depended on tripleo-common if we went with a new repo
14:43:23 <rbrady> +1
14:43:27 <d0ugal> That's true, tripleo-common isn't well named for an API location.
14:43:30 <jrist> how much are they paying you?
14:43:32 <jrist> :)
14:43:52 <tzumainn> a new tripleo-api repo sounds reasonable
14:44:06 <dtantsur> maybe it's time for something called just tripleo?
14:44:15 <dtantsur> we already have python-tripleoclient
14:44:22 <tzumainn> yeah, I actually kinda think that renaming tripleo-common to tripleo might be the smoothest thing to do
14:44:24 <dtantsur> like ironic and python-ironicclient
14:44:24 <jtomasek> dtantsur: +1
14:44:27 <rbrady> +1
14:44:28 <d0ugal> dtantsur: That makes sense and matches the other projects.
14:44:57 <tzumainn> so if people agree, maybe adding the api to tripleo-common, and then eventually renaming tripleo-common to tripleo?
14:45:07 <jtomasek> tzumainn: +1
14:45:10 <akrivoka> +1
14:45:17 <dprince> tzumainn: so your preference is that these live in the same repo
14:45:18 <dtantsur> rename as early as possible
14:45:25 <trown> well... I thought there was disagreement about the same repo
14:45:28 <d0ugal> Do we expect tripleo to be used as a library?
14:45:32 <dtantsur> renaming is a disaster, many of you probably already know it :)
14:45:37 <gfidente> no rename please
14:45:47 <trown> yes rename is very hard for packagers
14:46:05 <trown> so doing it early would be appreciated
14:46:13 <shardy> d0ugal: yes, I assumed tripleo-common would support a public python API
14:46:13 <tzumainn> dprince, I admit it makes sense to me, it seems like most projects have the api and the code it touches close together, but if packaging is an issue... ?
14:46:31 <dtantsur> the earlier we do it, the easier it is for everyone
14:46:31 <shardy> and we'd layer a rest API on top of that, called tripleo-api/tuskarv3
14:46:56 <d0ugal> shardy: Right, so the repo would be a mix of a public library and API code that shouldn't be used?
14:47:06 <d0ugal> Could be a bit confusing, but probably not a big deal
14:47:17 <trown> creating more than one package from a single git tarball is what every other openstack service does
14:47:28 <trown> ie ironic-api and ironic-conductor
14:47:37 <d0ugal> oh, cool - that I didn't know
14:47:39 <shardy> d0ugal: Yeah, if we combine them then I think we'd be saying we don't want a public python API
14:47:39 <rbrady> is tripleo-common even being packaged yet?
14:47:52 <shardy> which may be OK, if we want both CLI and UI flows to use the new rest API
14:48:00 <d0ugal> rbrady: I assume so, it is used by the CLI :)
14:48:13 <trown> rbrady: ya it is even in liberty rdo
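To illustrate trown's point about building several packages from one tarball, an abbreviated RPM spec sketch in the style of the ironic example he cites — names, version, and summaries are illustrative and the build/install sections are omitted; this is not the real openstack-ironic spec:

    Name:           openstack-ironic
    Version:        4.2.0
    Summary:        OpenStack bare metal provisioning service (sketch)
    License:        ASL 2.0
    Source0:        ironic-%{version}.tar.gz

    %description
    Files shared by the ironic services, all built from one source tarball.

    %package api
    Summary:        The ironic-api service
    %description api
    The REST API service subpackage.

    %package conductor
    Summary:        The ironic-conductor service
    %description conductor
    The conductor service subpackage.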
14:48:17 <d0ugal> shardy: Yeah, I am fine with that. Having the API as one clear entry point makes things easier.
14:48:28 <d0ugal> (easier and clearer)
14:49:02 <tzumainn> my opinion is not that strong, so if it's better to have tripleo-api be a separate repo, that's fine with me
14:49:21 <d0ugal> I like it all being in the same repo
14:49:44 <jtomasek> tzumainn: since the api itself is really a thin layer, I'd rather see it in the same repo, just to make things smoother
14:49:44 <d0ugal> Otherwise we will forever be doing cross repo Depends-On even more :)
14:49:50 <jtomasek> yes
14:49:58 <d0ugal> +1
14:50:34 <shardy> tzumainn: mine isn't either, other than that *-common isn't a good name for something with a rest api; "tripleo" or "tripleo-api" make more sense to me
14:51:06 <jtomasek> +1
14:51:27 <tzumainn> shardy, agreed, my preference is to add the api to tripleo-common and rename tripleo-common, but I'm not fully aware of the potential difficulties in doing so
14:51:57 <trown> I am ok with doing that if we do it nowish... much less so if we wait a few months
14:52:10 <dprince> tzumainn: lots of opinions on this I think
14:52:42 <dprince> tzumainn: at the risk of drawing this out longer... do we need an email thread or perhaps a quick online vote for this?
14:53:05 <jrist> online vote seems more productive
14:53:08 <dprince> tzumainn: perhaps we can think on this for one more week and have an official vote at the next IRC meeting?
14:53:17 <d0ugal> +1, we can go to email if a vote isn't conclusive?
14:53:42 <tzumainn> dprince, I think there's actually a bit of consensus - if I understand what people are saying correctly, they're +1 to tripleo-api instead of tuskar, and okay with it being in tripleo-common as long as tripleo-common is renamed first?
14:53:48 <jtomasek> I got impression of agreement, but if one more week is necessary, fine:)
14:53:51 <slagle> i dont think we need to think on anything for a week
14:54:05 <tzumainn> I think I'm a bit worried about an email discussion because the opinions that people have come from all different directions
14:54:23 <shardy> I kinda like the openstack/tripleo idea, but it's a little confusing compared to other projects because there are so many additional repos
14:54:27 <jtomasek> the discussion has already happened at spec review
14:54:28 <jrist> tzumainn: thus my comment
14:54:30 <slagle> the api goes into tripleo-common. it's not called tuskar in any way
14:54:31 <tzumainn> so I'm pretty sure that the email thread would explode into a billion different side-discussions
14:54:34 <jrist> +1
14:54:34 <slagle> does anyone disagree with that?
14:54:36 <jrist> no
14:54:38 <shardy> so it's not like openstack/tripleo will contain all of the non-client pieces
14:54:51 <marios> tzumainn: can we just use something like a doodle poll with just the two options
14:55:31 <dprince> tzumainn: lets follow up in #tripleo after this meeting
14:55:32 <tzumainn> marios, which two options - renaming tripleo-common vs not renaming tripleo-common, or separate repo for tripleo-api vs putting it into tripleo-common/tripleo, or... ?
14:55:38 <tzumainn> dprince, fair enough, thanks!
14:55:49 <dprince> tzumainn: I want to leave 5 minutes for dmitry to present too here
14:55:49 <marios> tzumainn: having tripleo-api independent, or as part of tripleo-common
14:56:03 <dprince> okay, dtantsur you are up
14:56:20 <dtantsur> I don't need too much time to present, just making sure you saw my email about reusing OSC namespaces
14:56:22 <dprince> dtantsur: re: Suggestion to stop using ironic and ironic-inspector OSC namespaces
14:56:49 <dprince> besides your email thread anything you want to highlight this week?
14:56:54 <dtantsur> tl;dr is that we're invading the "openstack baremetal" namespace without syncing with what ironic plans and without trying to make our commands generic enough
14:56:59 <dprince> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078859.html
14:57:08 <dtantsur> which is fine, but I think we should use our namespace(s)
14:57:11 <dtantsur> dprince, thanks!
14:57:39 <dtantsur> main reason is confusion; "introspection bulk start" vs "introspection start" is especially hard to follow, but other commands have problems as well
14:58:16 <dtantsur> so let's prefix things with overcloud, e.g. "overcloud nodes configure boot" or "overcloud baremetal configure boot", whatever
14:59:09 * bnemec keeps forgetting that the meeting time changed
14:59:18 <d0ugal> +1, it makes a ton of sense to me.
14:59:26 <marios> dtantsur: to be clear you mean openstack overcloud nodes configure boot
14:59:39 <dtantsur> marios, sure, I'm skipping the common "openstack" thingy
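For reference on dtantsur's namespace point, a sketch of how OSC plugin commands are wired up via entry points — moving from the ironic-owned "baremetal ..." prefix to a tripleo-owned one is a rename of the entry-point key; the module path and class name below are illustrative, not the real python-tripleoclient tree:

    # setup.cfg (sketch)
    [entry_points]
    openstack.cli.extension =
        tripleoclient = tripleoclient.plugin
    openstack.tripleoclient.v1 =
        overcloud_node_configure_boot = tripleoclient.v1.overcloud_node:ConfigureBoot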
15:00:58 <dprince> dtantsur: I'm not sold on using "overcloud" as the new namespace for all these things. But I do generally agree you are onto something with these ideas... we should probably be more careful not to step across project namespaces like this
15:01:30 <dprince> sorry to cut it short this week all but we are out of time
15:01:39 <dtantsur> let's continue in channel
15:01:40 <dprince> lets continue in #tripleo perhaps
15:01:41 <dtantsur> thanks
15:01:49 <dprince> #endmeeting