20:01:19 <ttx> #startmeeting tc
20:01:20 <openstack> Meeting started Tue Dec 13 20:01:19 2016 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:23 <openstack> The meeting name has been set to 'tc'
20:01:26 * edleafe hides in the back
20:01:29 <ttx> Hi everyone!
20:01:31 <stevemar> o/
20:01:32 <mordred> o/
20:01:33 <ttx> Our agenda for today:
20:01:35 <ttx> #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
20:01:36 <flaper87> o/
20:01:41 <ttx> (remember to use #info #idea and #link liberally to make for a more readable summary)
20:01:48 <ttx> dolphm: around ?
20:02:14 * ttx starts with the second topic to give time for dolphm to maybe join us
20:02:20 <ttx> #topic Some early contribution stats for Ocata (ttx)
20:02:30 <ttx> I ran some stats last week to look into early Ocata cycle contributions
20:02:40 <ttx> to try to see if we had projects that were very affected by recent changes at various companies
20:02:53 <ttx> It's still pretty early (not much data), but there are early signs
20:03:01 * stevemar is eager to hear about this data
20:03:04 <dolphm> ttx: yes
20:03:06 <ttx> Figured I should share
20:03:13 <ttx> I compared changes merged during the first 5 weeks of Mitaka (including Thanksgiving) with the first 5 weeks of Ocata
20:03:25 <ttx> to have comparable timeframes in terms of holidays and such
20:03:31 <ttx> And tried to weight the results against the Mitaka -> Newton trends, to isolate what is Ocata-specific
20:03:41 <ttx> I only looked at projects that are used in 10%+ of installs (according to the user survey)
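(For illustration, a minimal sketch of the kind of comparison described above, assuming per-project merge counts have already been gathered for comparable five-week windows; the counts are placeholders picked to reproduce the Designate percentages quoted below, not the real data.)

    # Hedged sketch: compare early-cycle merge counts across releases.
    # The numbers are placeholders; real data would come from Gerrit queries
    # over comparable five-week windows of Mitaka, Newton and Ocata.

    def pct_change(old, new):
        # Percentage change from one cycle window to the next.
        return 100.0 * (new - old) / old

    # {project: (mitaka_merges, newton_merges, ocata_merges)} -- placeholders
    merges = {
        "designate": (200, 270, 74),
    }

    for project, (mitaka, newton, ocata) in merges.items():
        print("%s: Ocata vs Mitaka %+.0f%%, Newton vs Mitaka %+.0f%%"
              % (project, pct_change(mitaka, ocata), pct_change(mitaka, newton)))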
20:03:51 <ttx> Most affected project is clearly Designate, with a -63% drop between first weeks of Ocata and first weeks of Mitaka
20:03:58 * Rockyg slides in next to edleafe and waves hi
20:03:59 <ttx> (while activity in Newton had increased +35% compared to Mitaka !)
20:04:16 <ttx> Other Ocata visibly-affected projects included:
20:04:19 * smcginnis notes the goals of the shorter cycle may also have some impact there.
20:04:33 <ttx> smcginnis: yes, that is a good caveat indeed
20:04:35 <ttx> Nova (-6% while activity in Newton was +30% compared to Mitaka)
20:04:40 <ttx> Cinder (-25% while activity in Newton was +7% compared to Mitaka)
20:04:45 * edleafe waves back
20:04:47 <ttx> Rally (-48% while Newton was only -9% compared to Mitaka)
20:04:54 <ttx> Keystone (-31% while activity in Newton was only -3% compared to Mitaka)
20:05:01 <ttx> Those ^ mostly due to attrition in changes proposed, not really to core review bandwidth
20:05:13 <ttx> so might point to what smcginnis just said
20:05:17 <ttx> Sahara (-44% while Newton was only -7% compared to Mitaka)
20:05:23 <ttx> Infrastructure (-15% while activity in Newton was +20% compared to Mitaka)
20:05:27 <ttx> Telemetry (-41% while Newton was down -12% compared to Mitaka)
20:05:31 <ttx> Those ^ mostly due to attrition in core reviews, since changes were still proposed
20:05:43 <ttx> Some other projects have a ~50% drop but the reduced activity started in Newton, so not exclusively an Ocata artifact:
20:05:48 <ttx> Glance (-58% but Newton was already down -32%)
20:05:53 <ttx> Heat (-52% but Newton was already -20%)
20:05:58 <ttx> Docs (-48% but Newton was already -25%)
20:06:07 <ttx> Other projects are doing relatively well, or are even in good shape (Oslo, Manila)
20:06:15 <johnthetubaguy> the docs one seems a massive concern to me
20:06:26 <ttx> johnthetubaguy: probably part of a larger trend
20:06:27 <flaper87> johnthetubaguy: well, glance too, tbh
20:06:32 <stevemar> johnthetubaguy: they all seem like massive concerns to me :)
20:06:37 <ttx> I'll refresh those stats once we have more data, and keep you posted.
20:06:38 <dims> ++ stevemar
20:06:42 <edleafe> ttx: did you count nova and nova-specs together? Or just nova?
20:06:43 <johnthetubaguy> stevemar ++
20:06:45 <flaper87> the core team keeps shrinking and so do contributions
20:06:49 <ttx> edleafe: just nova
20:06:50 <flaper87> stevemar: my point exactly
20:06:58 <dhellmann> ttx: do you have the latest numbers on core reviewer team attrition?
20:07:07 <johnthetubaguy> ttx: so I forgot the doc split out... ignore me a little
20:07:25 <ttx> dhellmann: I just have our notes
20:07:35 <dhellmann> ok, those should be up to date afaik
20:08:06 <ttx> raw data at https://etherpad.openstack.org/p/iFoG829Xig for those playing at home
20:08:32 <sdague> o/
20:08:46 <dhellmann> it would be interesting to match up the review stats changes with the team stats changes to see if there's any correlation
20:08:56 <dhellmann> as flaper87 said, we keep seeing folks step down from core teams
20:08:59 <ttx> dhellmann: I tried, it's not as clear-cut as I thought it could be
20:09:05 <ildikov> johnthetubaguy: ttx: on the docs one, some parts of the docs are being moved back to project repos, which can also affect the numbers for OS Manuals activity
20:09:09 <dhellmann> though I've also noticed a few teams enrolling new members
20:09:17 <ttx> ildikov: yes, good point
20:09:24 <EmilienM> on the positive side, a lot of projects are still making good progress
20:09:47 <johnthetubaguy> ildikov: yeah, that's what I was attempting to say above after remembering it, it's trickier/better than it looks
20:10:03 <dhellmann> it would also be interesting to see if the individual contributors who are doing less work in project X are doing more work in project Y, so focus is changing (versus leaving entirely)
20:10:23 <ildikov> johnthetubaguy: yeap, it's more complex to follow it this way
20:10:23 <ttx> The only project that is seriously in danger is Designate imho, so maybe we could communicate a bit more about it
20:10:26 <dhellmann> that would be more complicated to produce, though
20:11:06 * flaper87 will look at the data in more detail
20:11:21 * dims spots the drop in "Fuel"
20:11:27 <dhellmann> ttx: I think some of those longer term trends show projects in trouble, too
20:11:27 <ttx> Once we reach the holidays I'll refresh the data
20:11:35 <ttx> dhellmann: yes
20:11:45 <dhellmann> just not as suddenly
20:11:50 <ttx> that should give us a better sample
20:12:05 <ttx> Anyway, I thought you would appreciate a heads-up
20:12:17 <ttx> Next topic ?
20:12:23 <flaper87> yeah
20:12:23 <dims> thanks ttx dhellmann this is definitely helpful
20:12:26 <flaper87> ttx: thanks for the data
20:12:29 <EmilienM> besides the recent layoffs, do we know of other reasons for such changes?
20:12:38 <mordred> pretty much layoffs
20:12:45 <EmilienM> I've also seen people moving their focus to something else
20:12:53 <ttx> EmilienM: for the Ocata oddities, definitely the "staffing changes"
20:13:12 <fungi> from infra's perspective, it's almost all employment-related challenges
20:13:16 <johnthetubaguy> "staffing changes" is probably more accurate
20:13:21 <ttx> for the larger trends, disaffection for boring infrastructure plays a bit of a role
20:13:33 <dhellmann> yes, it's not just folks losing jobs, some companies are keeping their engineers, but putting them on other projects
20:13:45 <dims> right dhellmann
20:13:47 <mordred> ++
20:13:50 <ttx> ok, back to our schedule
20:13:51 <johnthetubaguy> so we are certainly "less cool" in the hype curve sense
20:13:55 <ttx> #topic Do not allow data plane downtime during upgrades
20:14:03 <ttx> I think it was renames since
20:14:09 <ttx> #link https://review.openstack.org/404361
20:14:11 <ttx> renamed*
20:14:19 <ttx> dolphm: o/
20:14:30 <ttx> I think the recent split version is technically fine...
20:14:39 <ttx> I'm just struggling to understand *what* that level of granularity really brings us
20:14:41 <dhellmann> yes, thanks for doing that
20:14:45 <ttx> like the motivation behind adding it
20:14:52 <dolphm> o/
20:14:56 <ttx> Could you give an example of a deployer question that tag would answer ?
20:15:04 <ttx> Or which projects we expect to strive for that specific level (rather than reach directly for zero-downtime or zero-impact upgrades)
20:15:30 <dolphm> i see them as parallel paths for projects to pursue
20:15:44 <dolphm> so, assuming your project has basic upgrade capabilities...
20:15:55 <ttx> ah, classify them in terms of backend effects rather than frontend effects ?
20:16:42 <dolphm> the next two steps you can pursue in parallel are either rolling upgrades (intermixing service versions) OR testing to ensure that your "controlled resources" (VMs, networks, data storage, etc) do not become unavailable at any point during the upgrade process
20:17:04 <ttx> ok, got it
20:17:18 <dims> i like the step-by-step approach dolphm
20:17:19 <ttx> other questions ?
20:17:29 <dhellmann> I got a little confused in the discussion of neutron issues
20:17:40 * flaper87 had the same question as ttx and it's now answered
20:17:41 <dhellmann> it sounds like some backends are always going to be disruptive during upgrades?
20:17:42 <sdague> except, we already test controlled resources work when the control plane is offline
20:17:55 <dhellmann> if that's the case, would neutron ever be able to have this tag?
20:18:26 <dolphm> dhellmann: neutron or not, there are likely pluggable choices in every project that will cause you to sacrifice certain features of the upgrade process (feature support matrix, anyone?)
20:18:34 <mordred> dhellmann: I believe that's backend specific
20:18:42 <sdague> dhellmann: I guess the question is whether that's a product of neutron architecture, or backends
20:18:52 <sdague> ovs has specific issues when restarting iirc
20:18:58 <dhellmann> mordred : right, but the tag only applies to a deliverable, with no metadata to indicate that it may not always work as advertised
20:19:06 <ttx> one could argue that dropping packets is something healthy networks do :)
20:19:10 <mordred> dhellmann: that's an _excellent_ point
20:19:13 <flaper87> and it applies to type:service
20:19:24 <dolphm> dhellmann: my thinking is that if there is *a* well tested, ideally default configuration upstream that can satisfy all these requirements, then the project deserves the tag
20:19:26 <dhellmann> yes, I don't think we guarantee no packet loss even in normal operation without an upgrade in progress, do we?
20:19:46 <johnthetubaguy> yeah, the tag used to be "there is *a* way and it's clear what that is"
20:19:46 <dhellmann> dolphm : ok, I haven't had a chance to review the latest draft, is this issue covered in the text?
20:20:01 <fungi> also depends a lot on how far into the controlled infrastructure those guarantees extend
20:20:03 <dhellmann> maybe by saying that any caveats have to be documented?
20:20:14 <dolphm> dhellmann: the notion of only requiring a single happy path? no
20:20:36 <dhellmann> dolphm : "at least one happy path"
20:20:42 <sdague> I do have concerns about splitting this out - https://review.openstack.org/#/c/404361/3/reference/tags/assert_supports-accessible-upgrade.rst - because we already do that in default testing, unless I'm misunderstanding
20:21:00 <dolphm> dhellmann: (no, but) i think the notion of a happy path applies to all the upgrade tags, not just this one
20:21:09 <johnthetubaguy> sdague: certainly plan A was to keep it together, but there were issues or "redefining" the existing tag
20:21:18 <sdague> http://docs.openstack.org/developer/grenade/readme.html#basic-flow
20:21:23 <johnthetubaguy> s/or/of/
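(As a rough illustration of the kind of check the grenade flow linked above implies — create a guest before the upgrade and make sure it keeps answering while services are restarted — here is a hedged Python sketch; the guest address, the timings, and the use of a plain ping are assumptions, not the actual grenade implementation.)

    # Hedged sketch of a grenade-style "workloads survive the upgrade" check:
    # poll a guest's address while the control plane is being upgraded and
    # fail if it ever stops answering. Address and timings are illustrative.
    import subprocess
    import time

    def ping_ok(ip):
        # True if the guest answers a single ICMP ping within 2 seconds.
        return subprocess.call(["ping", "-c", "1", "-W", "2", ip],
                               stdout=subprocess.DEVNULL) == 0

    def watch_guest(ip, duration=300, interval=5):
        # Poll for `duration` seconds, e.g. spanning the service restarts.
        deadline = time.time() + duration
        while time.time() < deadline:
            if not ping_ok(ip):
                raise RuntimeError("guest %s became unreachable during upgrade" % ip)
            time.sleep(interval)

    if __name__ == "__main__":
        watch_guest("203.0.113.10")  # placeholder IP of a guest created pre-upgrade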
20:21:31 <fungi> while i'm not a fan of the "control plane" and "data plane" terminology, it does seem a bit out of place to make data plane guarantees about projects which are mostly control plane
20:21:39 <dhellmann> dolphm : ok, that's fair. Maybe we can get that clarified in an update
20:22:03 <sdague> johnthetubaguy: it's not really redefining, it is being explicit about a thing that was an implicit piece of this
20:22:07 <dolphm> fungi: perhaps projects need to be able to apply for a tag with a "not applicable to me" status?
20:22:09 <ttx> sdague: there may be a gap between "verify resources are still working after upgrade" and "verify resources were not changed in any way after upgrade"
20:22:30 <dhellmann> and "verify resources are still working *during* the upgrade" which is what this says, right?
20:22:38 <sdague> ttx: there may be, but the intent was that that was the check we've been running for 4 years
20:22:43 <fungi> dolphm: hrm... maybe i guess
20:22:43 <sdague> maybe 3 years
20:23:17 <ttx> sdague: do all projects check that ?
20:24:01 <EmilienM> ttx: not afaik
20:24:09 <sdague> ttx: right now, I doubt it
20:24:14 <ttx> There are two dimensions: availability and integrity
20:24:16 <sdague> but the intent is there
20:24:31 <ttx> We test availability and that is what dolphm baked into the base tag
20:24:34 <dhellmann> intent would be sufficient if we didn't already have a bunch of projects claiming the existing tag
20:24:39 <ttx> The other tag requires integrity
20:24:55 <ttx> (i.e. the resource has not been altered)
20:25:13 <ttx> but fungi has a point
20:25:31 <ttx> it bleeds a bit into data-side implementation
20:25:34 <sdague> fungi: so the reason that we need to make those guarantees is that they are easy to screw up
20:25:53 <fungi> it's sort of like having a tag that says nova upgrades won't cause it to tell libvirt to reboot your instances, i guess?
20:26:04 <sdague> fungi, or delete them
20:26:07 <mordred> yah
20:26:09 <flaper87> fungi: yup
20:26:10 <dhellmann> or pause them
20:26:15 <sdague> which it did back in essex at times
20:26:32 <ttx> (Timeboxing this to 5 more minutes, since it feels like it could use a bit more iterations on the review)
20:26:43 <sdague> because replumbing in existing stuff and not modifying it needs to be front and center
20:27:09 <fungi> so this ends up being more about making sure that control plane services don't tell data plane services to take destructive actions during a control plane upgrade
20:27:32 <sdague> right, that consumer workloads in your cloud will be fine as you upgrade your openstack infrastructure around them
20:27:33 <mordred> fungi: that's my understanding
20:27:44 <dolphm> fungi: sdague: mordred: ++
20:27:54 <fungi> whereas we can't make general guarantees about those data plane services themselves
20:28:02 <dims> "end user created resources still continue to function during and after upgrade"?
20:28:04 <fungi> since we don't produce them
20:28:19 <sdague> which gives you a lot more confidence that you can upgrade your cloud without destroying all your users
20:28:58 <fungi> so it's not "libvirt won't reboot your instances during a nova upgrade" and more "nova won't tell libvirt to reboot your instances during a nova upgrade"
20:28:59 <persia> Although ideally user services cannot tell that an upgrade is happening, which reduces the chance that a crash there is upgrade-related.
20:29:16 <dolphm> overall, the spirit of this whole effort is basically "i should be able to upgrade openstack continuously without impacting my customers / workloads / etc"
20:29:21 <johnthetubaguy> fungi: thats my take
20:29:23 <dhellmann> fungi : right
20:29:25 <dims> right dolphm
20:29:27 <sdague> dolphm: yeh, definitely
20:29:29 <ttx> ok, looks like we can iterate on the review and converge to something around "no destructive actions"
20:29:34 <johnthetubaguy> dolphm: +1
20:29:54 <ttx> dolphm: Maybe iterate on the review and come back next week ?
20:29:55 <dolphm> ttx: ++
20:30:16 <sdague> I just want to make sure that we don't make our taxonomy of this too complex, given that an "upgrade that destroys my workloads" isn't really worth even talking about
20:30:21 <ttx> ok, thanks, I understand it a lot better now
20:30:41 <johnthetubaguy> sdague: +1 thats totally a worry
20:30:49 <sdague> that's my basic objection, I feel like it makes the upgrade tag pointless because it doesn't give the fundamental table stakes we should expect
20:31:21 <johnthetubaguy> might be worth the cost of dropping the tag from everyone and making them re-apply
20:31:30 <dolphm> sdague: i feel that's the slippery slope i'm on with this new tag, especially because it's no longer a linear series of milestones (at least, not necessarily)
20:31:31 <sdague> and I'd rather clarify that the table stakes are real, and prune some projects from it if we have to
20:31:50 <ttx> then you fall in the "redefine existing tag" rathole
20:31:57 <sdague> ttx: sure
20:31:58 <fungi> i could get behind merging this expectation into the normal upgrade tag
20:32:07 <sdague> but since when are all tags idempotent?
20:32:15 <ttx> but yeah, let's continue that discussion on the review
20:32:15 <fungi> "expectation we failed to call out explicitly"
20:32:33 <ttx> Having an idea of which projects would be dropped would help
20:32:36 <sdague> fungi: yep, we made an oversight, lets fix it
20:32:41 <mugsie> well, all of them
20:32:48 <dhellmann> it's not that all tags are immutable, it's that this appeared to add a large new expectation to an existing tag without documentation that all of the projects using the tag met the requirement
20:32:58 <flaper87> dhellmann: ++
20:33:00 <mugsie> as none of the test that everything works *during* the upgrade
20:33:04 <dolphm> dhellmann: ++
20:33:11 <mugsie> none of them*
20:33:18 <sdague> mugsie: it's testing, nothing tests everything
20:33:33 <sdague> testing is about representative use cases and verifying them
20:33:47 <sdague> see: halting problem :)
20:33:48 <mugsie> nova's grenade test does not test that a vm is accessible *during* the upgrade phase
20:33:52 <ttx> we could set them up as separate tags with the goal of removing the simpler version once it's all obsoleted
20:33:56 <sdague> mugsie: yes it does
20:33:57 <fungi> having some tests before making this change might still be in order
20:34:00 <mugsie> during?
20:34:10 <sdague> mugsie: define during
20:34:14 <dhellmann> sdague : while the new version of nova is being installed and started
20:34:23 <dhellmann> including any database changes
20:34:25 <mugsie> while the nova services have the code replaced and restarted
20:34:48 <sdague> dhellmann: so... I don't know that you can build infrastructure that guarantees that, because you are racing
20:35:06 <ttx> ok, I'll have to cut this one short and ask to continue on the review. This is clearly not ready for immediate merging anyway
20:35:06 <dhellmann> sdague : which may not be a big deal for nova, but it appears to be *the* case where neutron would fail to meet these requirements
20:35:08 <sdague> we test pre shutdown, during shutdown, post upgrade
20:35:24 <sdague> dhellmann: yeh... the neutron ovs issue is one that would need thought
20:35:26 <dhellmann> ttx: ack, let's move on
20:35:37 <sdague> part of it is going to require architecture inspection + testing
20:35:48 <ttx> #topic Driver teams: remaining options
20:35:50 <sdague> to know if the tests are an accurate reflection of reality
20:36:06 <ttx> stevemar distilled the discussion from last week down to 4 remaining options:
20:36:10 <ttx> #link https://review.openstack.org/403826 (fallback)
20:36:14 <ttx> #link https://review.openstack.org/403829 (grey, amended)
20:36:16 <ttx> #link https://review.openstack.org/403836 (soft black)
20:36:19 <ttx> #link https://review.openstack.org/403839 (soft white)
20:36:27 <ttx> I did a pass at optimizing "grey" to try to address fungi's concerns with it
20:36:32 <stevemar> (really 3 options, fallback is just that)
20:36:37 <ttx> (basically reducing the risk of it being abused as a registry of drivers where you want to place brands)
20:36:47 <ttx> Not sure what's next step
20:36:48 <fungi> yeah, i'm +1 on it now. while still not my preference, i can see it working out
20:36:49 <ttx> I could quickly set up a Condorcet to help us order those
20:37:00 <ttx> if you feel that's useful
20:37:31 <flaper87> mmh
20:37:35 <ttx> or just vote on them
20:37:40 <flaper87> how about we all vote for our preference first?
20:37:57 <flaper87> and eventually vote for the second prefered option
20:37:58 <ttx> yeah, let's try that -- setting up that Condorcet poll with all your email addresses is not fun
20:37:59 <stevemar> might be worth seeing what neutron/cinder/nova folks feel, or the folks working on the drivers
20:38:01 <flaper87> preferred*
20:38:14 <EmilienM> flaper87: yes
20:38:14 <ttx> stevemar: I'm not saying we'd be approving it
20:38:20 <stevemar> of course not
20:38:22 <ttx> I'm saying we'd be settling on a good candidate
20:38:26 <mtreinish> o/
20:38:26 <fungi> stevemar: can you encourage them to respond to the ml thread or jump in here?
20:38:40 <sdague> stevemar: especially neutron folks, as this is driven a bit from that community
20:38:41 <stevemar> fungi: smcginnis chimed in last week
20:38:44 <fungi> the point of bringing it to the ml first is so they could weigh in more easily
20:39:14 <fungi> yep, smcginnis did. but in general feedback on the ml was pretty minimal
20:39:27 <fungi> most responses were from tc members :/
20:39:33 <stevemar> just re-iterating the fact that if we're coming up with a solution then we should include the people it'll affect. i can certainly try to encourage them
20:39:41 <smcginnis> I think it makes sense for the tc to vet the options here, then have the teams chime in on an ML thread.
20:39:54 <smcginnis> Once things are narrowed down a little, that might help focus the discussion.
20:39:59 <dims> ttx : can we check if anyone still wants to pursue the 2 soft reviews? if not we can prune them
20:39:59 <flaper87> the grey option seems to have 7 votes already
20:40:01 <stevemar> smcginnis: we already did a first pass, there are 3 options now-ish
20:40:05 <johnthetubaguy> so from a Nova view, it talks about using established plugin interfaces; Nova doesn't have one for drivers, which keeps things as they are today (in a good way)
20:40:06 <ttx> grey has a majority of +1 now, so we could refine it over the week and push it back to the thread for further discussion ?
20:40:18 <flaper87> ttx: was writing just that
20:40:32 <fungi> maybe if we go forward with a non-binding vote and ping the ml thread with a "this is what the tc is leaning toward" update...
20:40:37 <ttx> fungi: yes
20:40:40 <smcginnis> stevemar: Yep. Just thinking it might be good to start a thread saying the TC sees options x or y as possible, we'd like team input. Or something like that.
20:40:41 <dims> ttx : ++
20:40:46 <mordred> ttx: sooooo ... combining our thinking about this topic with the previous topic ...
20:40:55 <flaper87> fungi: ttx ++
20:41:04 <stevemar> ttx: yeah, its probably easier to present one choice instead of three
20:41:04 <ttx> re: upgrades ?
20:41:05 <mordred> I could see a point in the future where we might want to be able to give a rolling-upgrade tag to drivers
20:41:07 <mordred> yah
20:41:30 <mordred> like "nova has no-downtime upgrades as a project, and libvirt and xen also do as drivers, but nova-docker doesn't" _for_instance_
20:41:31 <ttx> mordred: the trick is in-tree drivers, would all have to pass
20:42:00 <ttx> (a worthwhile goal, but maybe would hinder in-tree drivers a bit)
20:42:02 <mordred> ttx: no clue - this is purely an inthefuture thought - but I'd think we'd want to be able to enumerate and tag them
20:42:03 <fungi> yeah, tags are per-deliverable or per-team
20:42:09 <mordred> yup
20:42:14 <mordred> we should not block anything on this thought
20:42:20 <mordred> just a thought I had for future mulling
20:42:21 <fungi> we'd need to adjust the tag data model further
20:42:24 <ttx> OK, I'll take that one back to the ML
20:42:36 <mordred> because it might take a while to figure out :)
20:42:38 <sdague> I think that if reality is complicated, and we need to break out descriptions of what things work with what drivers, that's fine. We need to not be too stuck on existing boundaries of tags.
20:42:59 <dhellmann> yeah, I'd rather we just use lots of words to say things where binary flags fall short
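(A purely speculative sketch of what mordred's per-driver idea could require of the tag data model; the real governance data lives in YAML, and this schema and the tag placements are assumptions, not a proposal.)

    # Hedged sketch: a tag assertion that can optionally be scoped to a driver
    # instead of applying to a whole deliverable. Illustrative only.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TagAssertion:
        name: str                     # e.g. "assert:supports-rolling-upgrade"
        driver: Optional[str] = None  # None means the whole deliverable

    @dataclass
    class Deliverable:
        name: str
        tags: List[TagAssertion] = field(default_factory=list)

    nova = Deliverable("nova", tags=[
        TagAssertion("assert:supports-upgrade"),                     # project-wide
        TagAssertion("assert:supports-rolling-upgrade", "libvirt"),  # per-driver
    ])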
20:43:02 <ttx> #action ttx to push "amended grey" back to the ML for final discussion before approval
20:43:31 <ttx> moving on
20:43:35 <dhellmann> there's a bit of feedback from JayF about the API restriction causing issues for ironic, too
20:43:45 <sdague> tags, like anything else, are high level approximations of reality. They are fine as long as you realize they are low fidelity approximations, and dangerous the moment you believe your frictionless spherical elephants are real :)
20:43:52 <ttx> #topic Relaxing IRC meeting policy
20:43:59 <ttx> There was a thread about IRC meeting rooms, which concluded with:
20:44:03 <ttx> #link http://lists.openstack.org/pipermail/openstack-dev/2016-December/108604.html
20:44:11 <ttx> Some of the proposed actions may increase the silo-ing in some teams, so I wanted to run those past you first
20:44:21 <ttx> First action is to remove artificial scarcity by creating more meeting rooms, which will result in more concentration of meetings.
20:44:33 <ttx> Pretty much everyone agreed that's a necessary evil now, so we went ahead and created #openstack-meeting-5 last week
20:44:42 <ttx> Second action is to remove the requirement of holding team meetings in common channels
20:44:48 <ttx> (and allow team meetings in team channels)
20:44:57 <ttx> This one is slightly more questionable, as it reduces external exposure and reinforces silos
20:45:06 <ttx> So I wanted to check that it was OK with you before proceeding (as you were mostly silent on that thread)
20:45:18 <mtreinish> I'm not really a fan of that second thing
20:45:30 <mtreinish> for the reasons you mentioned
20:45:44 <ttx> (alternatively, we could try first part and wait a bit before doing anything on the second part)
20:45:44 <mtreinish> I also don't idle in all the project channels :)
20:45:55 <mordred> I mean ... I'm also not a fan of the second thing - but the Big Tent has brought us enough teams that have effectively no interaction that forcing folks into arbitrary meeting rooms also seems weird
20:46:05 <sdague> mordred: yeh
20:46:09 <johnthetubaguy> given timezone differences, and everything else, as long as its logged and open, we have got the main things
20:46:09 <dims> mtreinish : those of us working across a bunch of things don't like it BUT those who exclusively live in a couple of channels would love it
20:46:12 <mordred> however, I do get pinged in random meetings sometimes
20:46:17 <johnthetubaguy> mordred: +1
20:46:18 <stevemar> mtreinish: it causes a whole lot of scheduling pain for a "regular" meeting channel
20:46:27 <ttx> I'm pinged in random meeting rooms about twice a day
20:46:45 <fungi> at least that many for me as well
20:46:46 <sdague> I also am not sure the expecting people to idle everywhere is the right expectation anyway
20:46:49 <EmilienM> people who want to join a meeting just /join the channel and that's it? how is it a problem and how does it create silos? we should let people work together and avoid private meetings, that's all
20:46:52 <dtroyer> I think if you are having the meeting in a project channel you simply will not be able to assume random folk will see a ping
20:46:52 <stevemar> ttx: mordred wouldn't that mean you get pinged in project channels instead?
20:46:53 <dhellmann> I'm probably pinged once a week or so; sometimes more
20:46:57 <ttx> it feels people will miss my awesome insights by hiding in channels I don't lurk in :)
20:47:03 <mtreinish> dims: right, that's the reinforcing silos thing
20:47:14 <stevemar> ttx: lurk in more channels ;)
20:47:17 <ttx> stevemar: but would miss them
20:47:21 <JayF> The only weirdness about holding a meeting in a project channel is that it makes for a very strange experience should a first time user drop into IRC mid-meeting.
20:47:21 <dhellmann> sdague : ++
20:47:26 <johnthetubaguy> sdague: yeah, if nothing else, it feels very timezone silly to me
20:47:32 <dims> JayF : true
20:47:33 <ttx> stevemar: difficult to keep track
20:47:44 <fungi> i also get pinged many times a day in random project channels too, yes, but only high-profile ones i happen to idle in (probably others too i just don't see them since i'm not there)
20:47:45 <dhellmann> JayF : yes, that's a concern I had, too
20:47:47 <Rockyg> I think new teams or teams applying for big tent should be in a meeting channel until they have some momentum
20:47:57 <EmilienM> dtroyer, dhellmann: imho, if you miss a ping, that's not a big deal. Most of our problems can be solved async. versus during an irc meeting
20:48:02 <smcginnis> If someone needs to jump out of a meeting to ping someone in that person's specific project channel asking them to join - I don't see that as any kind of major burden.
20:48:16 <stevemar> Rockyg: i was thinking the opposite :)
20:48:22 <dhellmann> EmilienM : yeah, I'll start expecting more email on the -dev list :-)
20:48:22 <ttx> smcginnis: feels almost like a PTG
20:48:27 <smcginnis> :)
20:48:41 <fungi> going back to the time when most everything happened in #openstack-dev, #openstack-meeting provided a quiet haven to host an irc meeting without people popping in interrupting with off-topic randomness
20:48:42 <sdague> honestly, it wouldn't be bad if that meant the expectation wasn't that you were in 4 meeting channels, but instead you were in #openstack-dev when active, and that was the common ping bus
20:49:00 <smcginnis> sdague: +1
20:49:02 <ttx> sdague: +1
20:49:05 <dims> agree sdague
20:49:07 <Rockyg> sdague, +1
20:49:09 <johnthetubaguy> sdague: good point
20:49:11 <ttx> maybe we could relax one and reinforce the other
20:49:15 <EmilienM> TC should help projects to work together. If teams want to use their own channels, go ahead. I don't think we should make a rule for that
20:49:29 <mtreinish> EmilienM: it already is a rule
20:49:31 <flaper87> I'd be ok with projects having meetings in their own channels if possible
20:49:33 <ttx> EmilienM: not a rule, just allowing it (currently irc-meetings prevents it)
20:49:35 <sdague> I also kind of wonder if part of the issue is the awkwardness of drilling into archived meeting content
20:49:40 <flaper87> and then for random pings just use -dev
20:50:06 <dims> ++ flaper87
20:50:32 <fungi> i'd strongly discourage teams having meetings in their own channels, but expect that many of them who think it's a cool idea at first will switch to wanting to have them in a separate channel from their general discussion channels eventually
20:50:34 <ttx> ok, I think we can proceed with relaxing the gate check at least and altering MUST -> SHOULD in some literature
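(A hedged sketch of what "relaxing the gate check" could look like in practice: turn the hard failure for meetings scheduled outside the shared rooms into a warning. The channel list, file layout, and "irc" key are assumptions about the irc-meetings tooling, not its actual code.)

    # Hedged sketch: validate meeting definitions but only warn (instead of
    # failing) when a team schedules its meeting outside the shared rooms.
    import glob
    import warnings

    import yaml  # PyYAML

    SHARED_ROOMS = {"openstack-meeting", "openstack-meeting-alt",
                    "openstack-meeting-3", "openstack-meeting-4",
                    "openstack-meeting-5"}

    def check_meeting_channels(meeting_dir="meetings"):
        for path in glob.glob("%s/*.yaml" % meeting_dir):
            with open(path) as f:
                meeting = yaml.safe_load(f)
            channel = meeting.get("irc", "")
            if channel not in SHARED_ROOMS:
                # Previously a hard error; now team channels are allowed.
                warnings.warn("%s meets in #%s, outside the shared meeting rooms"
                              % (path, channel))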
20:50:39 <Rockyg> The reason I'd like new teams to be in a meeting channel is they tend to be more single-vendor.  I've seen good things happen when a veteran openstacker jumps into the middle of one of those meetings because of trigger words
20:50:40 <EmilienM> (I'm personally not in love with IRC meetings, specially in distributed teams. I think most of our problems can be solved by email or gerrit)
20:50:42 <cdent> sdague++
20:50:49 <cdent> irc logs are not great
20:51:05 <fungi> and i don't want to see us adding #openstack-nova-meeting and #openstack-cinder-meeting and so on
20:51:08 <dims> Rockyg : you mean why is this random person messing with us? :)
20:51:26 <Rockyg> Uh, yeah, that's it, dims
20:51:32 <ttx> Anyway, if you care one way or another, please chime in on the thread, otherwise I'll probably go ahead and implement it
20:51:34 <flaper87> EmilienM: yup
20:51:48 <ttx> We need to move to Open discussion, a few things to cover there
20:52:02 <ttx> #topic Open Discussion
20:52:05 <mordred> Rockyg: ++
20:52:12 <ttx> 1/ Joint BoD+TC meeting around PTG
20:52:19 <ttx> The Board would like to have a Board+TC meeting around the PTG
20:52:24 <ttx> They propose to hold the joint meeting on Friday afternoon
20:52:29 <ttx> Since most teams will run for 2 or 2.5 days anyway, I think that's doable
20:52:36 <ttx> We could push back and ask to hold it on Sunday afternoon instead, but I'm not a big fan of 6-day work streaks
20:52:48 <ttx> Also it's not really an issue if some TC members end up being busy elsewhere
20:52:56 <ttx> not as if all board members would be present
20:53:09 <fungi> obviously this impacts attendees who panned to work on vertical team stuff for the full 3 days
20:53:13 <ttx> opinions / strong feelings ?
20:53:13 <EmilienM> Sunday is usually the day spent on travels :-/
20:53:18 <fungi> er, planned
20:53:20 <johnthetubaguy> afternoon means one more hotel day for me, I guess
20:53:28 <EmilienM> fungi: right
20:53:29 <johnthetubaguy> EmilienM: I was thinking that too
20:53:31 <dtroyer> Given that choice, I'd much prefer Friday
20:53:33 <ttx> johnthetubaguy: sunday too probably
20:53:43 <dims> was waiting for a decision on this to book flights...
20:53:48 <ttx> fungi: not so many teams plan to do full 3 days
20:53:53 <mordred> if we're going to have one of these co-located friday afternoon, it would be great if it was treated as important on the board side - it often feels like we lose people for the TC section of board meetings
20:53:56 <EmilienM> it also means most people would need to travel on Saturday.
20:53:56 <fungi> good to know
20:53:59 <sdague> ttx: well, the nova team does
20:54:00 <johnthetubaguy> ttx: true
20:54:06 <sdague> and we
20:54:06 <ttx> but yes, some will, and I guess it's fine to prioritize that over the joint BoD/TC
20:54:11 <dims> ++ mordred
20:54:25 <sdague> we've got a few nova core members that are in this pool :)
20:54:27 <ttx> mordred: I can make that clear
20:54:31 <fungi> mordred: in this case instead of leaving mid-day and skipping the joint meeting, those people will just not come there at all i guess?
20:54:36 <flaper87> if we're having it on the PTG, I guess I'd prefer it on Friday
20:54:43 <EmilienM> +1 for Friday
20:54:46 <flaper87> this is the first PTG and I'd like to first see how it pans out
20:54:48 <stevemar> i'm okay with sunday afternoon only because it's easy to get to ATL; but friday works too
20:54:49 <mordred> I'm going to be tired and would prefer to go home, but will show up at the meeting. I'll be annoyed if I show up at the meeting and it's a ghost town. nobody wants a grumpy monty in a room
20:54:50 <flaper87> instead of adding more days to it
20:54:53 <fungi> or is the board planning to have a board meeting before the joint meeting again too?
20:55:04 <ttx> fungi: they will yes
20:55:13 <sdague> mordred: right, it does feel like a Friday afternoon is going to be sparse
20:55:20 <EmilienM> (can't we make it during an evening in a piano bar?) :-)
20:55:27 <ttx> sdague: they come to town exclusively for that day though
20:55:28 <dims> ++ EmilienM
20:55:30 <dhellmann> from what I saw, most of the board wasn't planning to come to the ptg for any other reason
20:55:33 <flaper87> EmilienM: with some rum
20:55:34 <fungi> oh, so friday attendees get to choose between vertical team ptg stuff and board meeting as well
20:55:53 <ttx> sdague: so I don't think they would come to ATL just for the morning
20:56:03 <ttx> but who knows
20:56:15 <ttx> OK, I'll communicate that back. It's not mandatory anyway
20:56:16 <sdague> so, personally either I guess is fine. The flights from here are pretty direct. As long as we nail it down soon
20:56:19 <EmilienM> fungi: you, stevemar and I are PTLs, it would be hard for us to make a choice :-/
20:56:30 <dhellmann> I'm slightly more in favor of Sunday, but I understand the objections to the long week that would cause.
20:56:47 <ttx> We can sync before so that if you are stuck in a room, your views are represented
20:56:48 <sdague> dhellmann: yeh, I would say I would lean Sunday
20:56:52 <flaper87> ttx: I know it's not mandatory but if it happens, it kinda feels that way
20:56:54 <fungi> EmilienM: at least my team's assigned ptg days are monday/tuesday (yours as well?) harder for stevemar
20:56:55 <flaper87> if you know what I mean
20:56:56 <johnthetubaguy> +1 on knowing soon, I expect most folks are booking pre new year, I believe I am meant to
20:56:58 <mtreinish> dhellmann: I'm leaning the same way
20:57:04 <flaper87> I feel it's part of my job as TC member to attend
20:57:08 <flaper87> and I want to be there
20:57:10 <johnthetubaguy> what about Monday?
20:57:18 <johnthetubaguy> its a cross project thing?
20:57:20 <mordred> monday is bad for horizontal team things
20:57:26 <stevemar> fungi: it's just a bit of time anyway, i can break away from keystone-y stuff for a bit
20:57:29 <ttx> johnthetubaguy: the horizontal stuff only has 2 days, can't burn one
20:57:34 <johnthetubaguy> mordred: true
20:57:40 <fungi> EmilienM: though ptg is also after ptl elections, so maybe none of us will be ptls by then ;)
20:57:41 <dhellmann> johnthetubaguy : a different set of TC members wouldn't be able to attend in that case :-)
20:57:42 <ttx> ok, I'll start a -tc thread on that
20:57:42 <EmilienM> fungi: we are vertical
20:57:44 <ttx> 2/ Progress on Amend reference/PTI with supported distros (https://review.openstack.org/402940)
20:57:48 <ttx> EmilienM wanted to unblock this review
20:57:51 <stevemar> fungi: that's the hope! :P
20:57:52 <EmilienM> fungi: who knows? :-)
20:57:57 <sdague> ttx: I guess the second question is: is this a normal 3-hour cross section? Or are we talking about the whole day?
20:58:01 <EmilienM> stevemar: talk for you :P
20:58:02 <ttx> #action ttx to start a thread on the -tc list to see what is most popular
20:58:12 <fungi> EmilienM: oh, for some reason i thought the deployment teams ended up on monday/tuesday as well
20:58:13 <dhellmann> sdague : half day
20:58:13 <stevemar> :)
20:58:18 <sdague> 2-5 on Friday is one thing, all day friday is different
20:58:18 * dims will be missing kid2's bday 2 years in a row
20:58:19 <ttx> sdague: Friday afternoon
20:58:23 <sdague> ok
20:58:24 <EmilienM> fungi: it was and it changed
20:58:24 <ttx> or Sunday afternoon
20:58:34 <EmilienM> ttx: thx for bringing it up again
20:58:35 <fungi> EmilienM: i should pay closer attention
20:58:47 <EmilienM> ttx: I see zero blocker now for this change to be accepted
20:58:48 <mordred> my main concern is that I'm looking forward to the PTG being super productive - and I don't want to fall into the trap of turning it into a second summit by cramming additional things in if we can help it
20:59:03 <sdague> mordred: yeh...
20:59:09 <ttx> mordred: yes, not looking forward to another 6-day thing
20:59:12 <sdague> we did try to make this different
20:59:17 <ttx> which is why I lean towards Friday
20:59:18 <EmilienM> fungi: no worries, we changed it very recently. Main reason: tripleo is not horizontal and we need to attend horizontal sessions (infra, etc)
20:59:18 <flaper87> mordred: my thoughts exactly
20:59:20 <sdague> ttx: or a 5-day thing of being double booked
20:59:37 <dhellmann> ttx: do we have the option of just saying "no, we'll be busy all week"?
20:59:38 <sdague> honestly, I think my actual preference is not to do it at all there :)
20:59:49 <ttx> EmilienM: difficult to switch focus
20:59:50 <johnthetubaguy> is it crazy to consider a virtual joint meeting instead?
20:59:51 <EmilienM> ttx: sounds like we're running out of time. Maybe next meeting
21:00:01 <EmilienM> ttx: or maybe I'll ping people to review it.
21:00:05 <ttx> EmilienM: yeah, that
21:00:12 <ttx> that way we can finalize it next week quickly
21:00:29 <ttx> flaper87: same for https://review.openstack.org/398875
21:00:47 <flaper87> ttx: sounds good
21:00:52 <ttx> Also if someone could tell me how useful https://review.openstack.org/406696 is, I would appreciate it
21:00:54 <flaper87> ppl, read ^
21:00:54 <dims> dhellmann : ++
21:01:03 <ttx> And we are out of time
21:01:11 <stevemar> o\
21:01:24 <EmilienM> ttx: bonne nuit!
21:01:27 <ttx> #action ttx to finalize the Friday/Sunday decision in a -tc ML thread
21:01:28 <fungi> do we have a tentative agenda for the meeting? wondering if this is to continue the board discussion on accepting other languages, making emerging trendy technologies first class citizens and restructuring project governance, or other stuff
21:01:33 <fungi> oh, out of time
21:01:37 <ttx> #endmeeting