21:02:22 <ttx> #startmeeting project
21:02:23 <openstack> Meeting started Tue Oct 21 21:02:22 2014 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:26 <anteaya> jeblair: yeah I wanted maple too
21:02:27 <openstack> The meeting name has been set to 'project'
21:02:32 <ttx> Our agenda for today:
21:02:34 <SergeyLukjanov> o/
21:02:40 <dims> o/
21:02:50 <ttx> (maple works because it's an element in the state flag)
21:02:57 <ttx> #link http://wiki.openstack.org/Meetings/ProjectMeeting
21:03:10 <ttx> We didn't have 1:1 syncs today (and won't have them next week either)
21:03:18 <ttx> Those will be back after the summit.
21:03:27 <ttx> #topic M Design summit time span
21:03:36 <ttx> First, the same question I just asked the TC
21:03:44 <ttx> The M summit time span
21:03:51 <ttx> That's the one after Vancouver, located in APAC, one year from now
21:03:58 <ttx> The main conference will happen Tuesday - Thursday
21:04:05 <ttx> Monday will have Ops Summit, analyst day, and other pre-summit things
21:04:14 <ttx> Foundation staff needs to close on the contract and asks: when should the design summit be?
21:04:25 <ttx> We can do Tuesday - Friday, but then we fully overlap with the conference, which may impact the PTLs ability to present at the conference
21:04:43 <ttx> We can do Wednesday - Saturday, but then... Saturday. Long week. TC didn't like that option that much
21:04:45 <asalkeld> it's still the best option
21:04:50 <ttx> We can do Monday, Wednesday-Friday (break on Tuesday), too. But feels weird.
21:04:52 <mestery> -1 for Wed-Sat
21:05:06 <morganfainberg> agreed -1 for wed-sat
21:05:08 <ttx> asalkeld: yes, TC concluded Tue-Fri was probably best
21:05:09 <anteaya> ttx ha ha ha (state flag)
21:05:18 <eglynn> bleeding into the Sat will make it a long trip from Europe or the US
21:05:47 <dhellmann> standardizing on tue-fri seems like a good system
21:05:58 <asalkeld> ttx, another option is the dev sessions are m-f but just some mornings
21:05:58 <mtreinish> I think Tues-Fri works fine
21:06:00 <mestery> +1 for tue-fri
21:06:01 <eglynn> so +1 that we just live with the overlap and go with tue-fri
21:06:09 <asalkeld> so have some half days
21:06:20 <dhellmann> asalkeld: interesting
21:06:23 <SergeyLukjanov> -1 for wed-sat
21:06:46 <ttx> asalkeld: I fear we would end up doing a 5-day design summit
21:06:49 <SergeyLukjanov> and IMO the tue-fri works good
21:06:55 <dhellmann> asalkeld: I think the counter arg to that is monday is usually the operators day
21:07:00 <dhellmann> also what ttx said
21:07:09 <ttx> i.e. we'd fill the "holes" with more design summit pods or off-discussions
21:07:17 <ttx> and be dead on friday
21:07:20 <eglynn> agreed on the hole filling
21:07:28 <asalkeld> ok, just an idea
21:07:36 <ttx> nature abhors void :)
21:07:46 <asalkeld> you could lock the pods;)
21:07:51 <SergeyLukjanov> ttx, ++ for hole filling
21:08:06 <ttx> I'll bring the feedback back to Lauren and Claire. not saturday, slight preference for Tue-Fri
21:08:23 <SlickNik> +1 for hole filling as well.
21:08:32 <ttx> ok, moving on
21:08:38 <ttx> #topic Juno release postmortem
21:08:46 <ttx> So... the release went generally well.
21:08:54 <ttx> There were a few more late respins than usual, with 5 projects doing an RC3 in the last days
21:09:07 <ttx> There was also a bit of a f*up in Glance, which started identifying release-critical bugs only after their RC1 drop
21:09:20 <ttx> but this did not seriously affect the release
21:09:33 <ttx> One interesting exercise is to look back at the "critical" bugs which justified the late respins, and ask why those were not detected by testing
21:09:50 <ttx> If the issue is critical enough to justify a late respin, it usually should have been tested in gate in the first place
21:10:07 <ttx> So those may just uncover significant gaps in testing... for example:
21:10:15 <asalkeld> we are moving the functional tests in-tree (in Heat) - I am hopeful this will help our coverage
21:10:16 <ttx> Cinder CHAP Authentication in LVM iSCSI driver: https://review.openstack.org/#/c/128507/
21:10:27 <ttx> Cinder Unexpected cinder volumes export: https://review.openstack.org/#/c/128483/
21:10:37 <ttx> Cinder re-attach a volume in VMWare env: https://review.openstack.org/#/c/128431/
21:10:47 <ttx> Ceilometer recording failure for system pollster: https://review.openstack.org/#/c/128249/
21:10:57 <ttx> Trove restart_required field behavior: https://review.openstack.org/#/c/128352/
21:11:07 <ttx> Trove postgresql missing cluster_config argument: https://review.openstack.org/#/c/128360/
21:11:08 <SlickNik> Yes, this helped uncover some 3rd party testing holes for Trove, and we're looking to address those in Kilo.
21:11:21 <ttx> So in general, please have a look at your late proposed/juno backports and see if anything could have been done to detect that earlier
21:11:31 <eglynn> fair point
21:11:34 <asalkeld> ok
21:11:38 <jogo> how many of those were 3rd party tested?
21:12:01 <mestery> ttx: Sounds good
21:12:04 <SergeyLukjanov> ttx, in sahara we're already working on adding a bunch of tests to cover it (doing it after each release)
21:12:15 <ttx> jogo: maybe the reattach one
21:12:27 <nikhil_k> good point, we've set up a plan for kilo to avoid this
21:12:51 <jogo> ttx: as it sounds like we have a 3rd party CI quality control issue
21:12:51 <ttx> the reattach volume in VMware env would belong in 3rd party, but the others were pretty much mainline tests imho
21:13:03 <jogo> ttx: ahh, reattach
21:13:19 <ttx> Another theme which emerged is issues with default configuration files -- although I'm not sure how we can avoid those:
21:13:28 <ttx> Ceilometer missing oslo.db in config generator: https://review.openstack.org/#/c/127962/
21:13:34 <ttx> Glance not using identity_uri yet: https://review.openstack.org/#/c/127590/
21:13:53 <jogo> ttx: maybe devstack tests? or something
21:14:01 <jogo> testing the config files
21:14:03 <asalkeld> do these projects not have a sample config anymore?
21:14:16 <ttx> jogo: in the glance case they were just using the deprecated options
21:14:18 <eglynn> asalkeld: now generated (as opposed to static)
21:14:24 <asalkeld> i still like the review capablity that gives you
21:14:33 <jogo> ttx: oslo-config has an error on deprecated config options
21:14:34 <asalkeld> (of having it in the tree)
21:14:40 <jogo> but last I checked it was broken
21:14:46 <dhellmann> jogo: ?
21:15:04 <jogo> dhellmann: it should cause the service to halt if a deprecated config option is used
21:15:15 <eglynn> asalkeld: yeah, definitely pros and cons to having it as static content in-tree
21:15:24 <dhellmann> jogo: I don't know about that one, is there a bug?
21:15:32 <jogo> dhellmann: I think so, let me look
21:15:34 <ttx> not sure how much of the "default config" we actually consume in tests
21:16:11 <ttx> anyway, those are the only "themes" I could see in the late RC issues
21:16:29 <jogo> dhellmann: https://bugs.launchpad.net/oslo-incubator/+bug/1218609
21:16:31 <uvirtbot> Launchpad bug 1218609 in oslo-incubator "Although CONF.fatal_deprecations=True raises DeprecatedConfig it is not fatal" [Low,Triaged]
21:16:50 <eglynn> BTW how did the gate hold up during the RC period?
21:16:57 <eglynn> ... flow rate seemed better behaved than during the milestones
21:16:58 <ttx> Is there a significant issue that we just missed completely and is an embarassment in the release ?
21:17:04 <eglynn> ... clearly the patch proposal rate was way down
21:17:30 <dhellmann> jogo: that bug makes it sound like apps are not correctly dying, but your comment earlier made it sound like they were dying when they should not
21:17:40 <dims> right
21:17:41 <ttx> I'm not aware of any really critical issue that we let pass in the release
21:18:08 <ttx> but then, I'm no longer spending my days on Launchpad reports
21:18:29 <ttx> anything you know about ?
21:18:46 <asalkeld> nothing major
21:18:50 <SlickNik> eglynn: The gate seemed to hold up fairly well — didn't notice any significant delays during the RC period.
21:18:59 <jogo> dhellmann: yeah IMHO the fatal_deprecation should make things die
21:19:01 <jogo> and they don't
21:19:03 <SlickNik> nothing that I'm aware of either.
21:19:09 <dhellmann> jogo: is the app catching that exception?
21:19:22 <dhellmann> jogo: or are we just logging and not throwing an error?
21:19:34 <eglynn> ttx: nope ... we have a known issue release noted, but not critical
21:19:43 <eglynn> SlickNik: agreed
21:19:52 <ttx> Anything else you want to report on the release process ? Something we did and we shouldn't have done ? Something we didn't do and should have done ?
21:20:02 <dhellmann> jogo: looks like the traceback in the log in that bug is showing the exception being caught by the app
21:20:17 <dims> dhellmann: jogo: we should probably take that back to oslo channel :)
21:20:23 <dhellmann> dims: yeah
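The failure mode jogo and dhellmann are discussing in bug 1218609 -- fatal_deprecations raises DeprecatedConfig, but the service never actually dies -- matches the classic pattern of an over-broad exception handler swallowing a deliberately fatal error. A minimal sketch of that pattern, with all option and function names hypothetical rather than taken from oslo:

```python
class DeprecatedConfig(Exception):
    """Raised when a deprecated option is used and fatal_deprecations is set."""


def load_option(name, fatal_deprecations=True):
    # Simulate oslo-style handling: touching a deprecated option either
    # logs a warning or, with fatal_deprecations, raises an exception.
    if name == "auth_uri":  # hypothetical deprecated option
        if fatal_deprecations:
            raise DeprecatedConfig("auth_uri is deprecated; use identity_uri")
    return "ok"


def service_startup():
    try:
        load_option("auth_uri")
    except Exception as exc:
        # Over-broad handler: the app logs and keeps going, so the
        # "fatal" deprecation is silently swallowed -- the process
        # never halts, which is the behavior described in the bug.
        print("WARNING: %s" % exc)
    print("service started anyway")


service_startup()
```

The fix direction dhellmann hints at is for the app to let DeprecatedConfig propagate (or re-raise it) instead of catching it in a generic handler.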
21:20:36 <eglynn> a question I've always wanted to ask about the release process
21:20:47 <asalkeld> ttx to my newbie eyes it seemed to work well
21:20:52 * ttx braces for the shock
21:20:53 <eglynn> ... do the milestones have to be synchronized across all projects?
21:21:08 <jogo> dims: yup
21:21:11 * eglynn asks in terms of mitigating the gate rush that kills us at every milestone
21:21:12 <ttx> so that would be a development cycle question
21:21:33 <ttx> The idea behind it is to have a common rhythm
21:21:52 <ttx> but then if we can't handle the load this community management artifact generates...
21:22:06 <ttx> we could certainly get rid of it
21:22:25 <asalkeld> eglynn to spread them you would have to release one every 2 to 3 days
21:22:25 <asalkeld> lots of projects
21:22:35 <ttx> It's important for virtual/global communities to have the same rites and rhythms, it's what makes us one
21:22:50 <SlickNik> Also useful for things like dep freezes, and common freezes in general.
21:22:51 <ttx> but it's always a tradeoff
21:22:51 <eglynn> yeah just probing as to how "embedded" the synchronized milestone concept is
21:22:56 <eglynn> (... in the release management culture)
21:23:01 <ttx> and if the drawbacks outweigh the benefits...
21:23:23 <ttx> frankly, it's only feature freeze which generates load issues
21:23:34 <ttx> the first two milestones are pretty calm
21:23:39 <asalkeld> ttx needs a holiday some time too
21:23:58 <ttx> We could stagger FF, but I'm not sure that would cure the load
21:24:12 <eglynn> yeah, maybe better to mitigate that FF rush by more strictly enforcing the FPF
21:24:19 <ttx> that would certainly increase load on release management :)
21:24:26 <ttx> eglynn: ++
21:24:42 <ttx> anyway, let's move on
21:24:49 <ttx> #topic Design Summit scheduling
21:24:51 <eglynn> cool enough, looks like on balance it best to keep
21:24:57 <ttx> At this point we only have Keystone schedule up on kilodesignsummit.sched.org
21:25:12 <devananda> ironic schedule is basically done, I just haven't had time to post it yet
21:25:18 <mikal> ttx: what's the deadline for that?
21:25:22 <mikal> ttx: nova is basically done too
21:25:26 <asalkeld> we are going through Heat's tomorrow this time
21:25:28 <ttx> The deadline for pushing a near-final agenda is Tuesday next week (Oct 28)
21:25:34 <ttx> So you should abuse your weekly meetings this week to come up with a set of sessions
21:26:01 <nikhil_k> we have a virtual/online mini-summit for glance; summit session finalizing will be done there as well (Thurs 23/Fri 24)
21:26:15 <morganfainberg> i need to check on the Ops summit details one of keystone's sessions might change.
21:26:21 <ttx> As far as the release management track goes, we don't have a specific meeting, so I'll discuss it here and now
21:26:27 <ttx> we only have 2 sessions, and only two themes proposed
21:26:29 <mestery> ttx: Neutron is almost done, we'll finalize tomorrow.
21:26:35 <ttx> So we'll likely have one session on stable branch maintenance, and one on vulnerability management
21:26:44 <ttx> No session on release schedule, since we decided that already on-list
21:26:59 <ttx> (I know, we lose a traditional design summit slot)
21:27:08 <ttx> Everything else will get covered at the Infra/QA/RelMgt meetup on Friday.
21:27:24 <ttx> I'll push the agenda for that tomorrow probably
21:27:36 <ttx> Questions on the scheduling ?
21:28:00 <david-lyle> any idea about per service operator session requests?
21:28:02 <ttx> morganfainberg: any issue wrangling the scheduling website ?
21:28:14 <morganfainberg> ttx, i've had no issues
21:28:28 <morganfainberg> it's "just worked" for the most part
21:28:35 <ttx> david-lyle: they are definitely a good thing to have. Avoid overlap with Ops Summit session to maximize attendance
21:28:50 <david-lyle> I have not seen any requests, Horizon found it valuable last time, so just schedule and hope they come?
21:29:27 <ttx> david-lyle: yes. Maybe brag about that session at the Ops Summit on Monday?
21:30:00 <ttx> there is a "Ops Summit: How to get involved in Kilo " session that sounds appropriate
21:30:15 <asalkeld> when / how will the cross-project sessions be decided
21:30:42 <eglynn> by the TC I think
21:31:01 <ttx> asalkeld: TC members are voting on the etherpad this week, feel free to add your opinion there as well
21:31:18 <ttx> then Russellb and MarkMCClain were mandated to build the final schedule
21:31:19 <asalkeld> yeah, i have added some things there
21:31:30 <dhellmann> #link https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
21:31:51 <ttx> shall be all set when we meet again next week
21:32:03 <ttx> anything else on that topic ?
21:32:11 <asalkeld> all good
21:32:18 <ttx> #topic Global Requirements, a practical case
21:32:33 <ttx> At a previous meeting we had a discussion on global-requirements, and agreed that it was for integrated projects' requirements and for integrated-project wannabes solving chicken-and-egg issues
21:32:44 <ttx> But dims raised the following review: https://review.openstack.org/#/c/128746/
21:32:54 <ttx> Sounds like a good practical example of a corner case
21:33:05 <ttx> nova-docker driver was split out to stackforge but still wants to do nova-like testing to be able to merge back
21:33:08 <dims> here's a long writeup - https://etherpad.openstack.org/p/managing-reqs-for-projects-to-be-integrated
21:33:14 <ttx> Is that a valid requirements update case or not ?
21:33:57 <mikal> So...
21:34:03 <dhellmann> isn't the idea that they would sync on their own, not gate on requirements, and then seek to add any new requirements to the global list when they are re-integrated?
21:34:03 <mikal> ironic has the same problem right?
21:34:05 <dims> essentially we need a way to allow requirements jobs and dsvm jobs to work
21:34:13 <mikal> Or is python-ironicclient in global reqs?
21:34:14 <asalkeld> seems to me that docker is super important to openstack
21:34:15 <devananda> mikal: ?
21:34:19 <dhellmann> dims: no, I think you want to turn off the requirements gating for your repo
21:34:27 <dims> dhellmann: dsvm jobs?
21:34:34 <dhellmann> dims: do those fail, too?
21:34:38 <dims> dhellmann: yep
21:34:52 <dhellmann> dims: what's the failure condition? can't install something?
21:34:55 <mikal> devananda: I'm trying to work out why this didn't come up for ironic
21:35:08 <dims> dhellmann: requirements/update.py just exits
21:35:44 <devananda> mikal: I'm not sure what the problem is (poorly tracing this meeting, sorry)
21:35:49 <dhellmann> dims: does solum run dsvm jobs?
21:35:59 <dhellmann> devananda: non-integrated project with extra requirements
21:36:01 <asalkeld> solum had this issue - solution: remove the project from projects.txt
21:36:05 <devananda> ironic has chosen not to list our 3rd party libs in requirements
21:36:21 <dhellmann> asalkeld: does that fix the dsvm issue?
21:36:25 <devananda> which leaves it up to operators/installers to pull those packages separately from requirements.txt
21:36:41 <asalkeld> dhellmann, it then uses upstream pypi
21:36:48 <devananda> and within each driver's __init__, it checks and gracefully fails if its lib isn't present
21:36:54 <asalkeld> not the openstack restricted one
21:37:08 <mikal> devananda: yeah, this is what drivers in nova do too
21:37:12 <dhellmann> asalkeld: our ci mirror is full, so I think you're actually using the local mirror now
21:37:26 <dhellmann> devananda: yeah, we do something like that in oslo.messaging, too
21:37:33 <devananda> so far it has been fine for us
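The driver pattern devananda describes -- each driver's `__init__` checks for its optional third-party library and fails gracefully if it isn't installed -- is commonly written along these lines. The library and driver names below are purely illustrative, not Ironic's actual code:

```python
# Optional dependency: deliberately not listed in requirements.txt,
# so operators install it only if they enable this driver.
try:
    import docker_client_lib  # hypothetical third-party library
except ImportError:
    docker_client_lib = None


class ContainerDriver(object):
    """A deploy-time-optional driver with an out-of-tree dependency."""

    def __init__(self):
        if docker_client_lib is None:
            # Fail at driver load time with an actionable message,
            # instead of crashing later with a bare ImportError
            # deep inside a request handler.
            raise RuntimeError(
                "ContainerDriver requires the docker_client_lib package; "
                "install it separately to enable this driver.")
        self.client = docker_client_lib
```

The design choice here is that the dependency belongs to one configuration, not to the project as a whole, so the service still starts for operators who never enable that driver.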
21:37:33 * dims notes that docker-py is an essential not optional dependency
21:37:42 <asalkeld> dhellmann, yeah - the logic could have changed
21:37:46 <devananda> dims: essential for that driver. not for the whole project. right?
21:37:52 <dims> devananda: right
21:37:52 <mikal> dims: so is python-ironicclient if you want a working ironic in nova though
21:37:59 <dhellmann> dims: that's fine, if you uncouple your project from the global requirements list fully it ought to work
21:38:00 <devananda> dims: so it falls in the same category
21:38:19 <devananda> dims: that makes it not a requirement, since what driver you use is a deploy-time option
21:38:34 <devananda> it's an external dependency of that particular configuration
21:38:35 <asalkeld> devananda, well if docker-py was in requirements the heat could maybe put the container resource from contrib into heat proper
21:38:35 <dims> dhellmann: unfortunately the tempest-dsvm jobs fail
21:38:53 <dims> devananda: not asking nova's requirement to have docker-py
21:39:01 <dims> devananda: asking global requirements to have docker-py
21:39:07 <dims> which is different
21:39:07 <devananda> dims: ahh ok
21:39:08 <asalkeld> in this particular case I am surprised we don't just add it
21:39:12 <dhellmann> dims: ok, that's disconcerting, they shouldn't care about the requirements now. Do you have a log?
21:39:42 <devananda> dims: i have no objection to projects that want to depend on docker syncing which version of docker-py they depend on
21:39:50 <dims> dhellmann: there's a custom http client in nova-docker, which is not good; it's trying to switch to a well-thought-out library
21:39:51 <devananda> that's the function of global req's -- syncing version deps
21:39:56 <dhellmann> asalkeld: we could, but this is supposed to be working already, I think, and we will have other cases where that's not the right solution
21:40:18 <devananda> I don't see a reason to reject a submission to global req's when projects want to use that to sync the version dep. but maybe I'm missing something?
21:40:18 <asalkeld> sure
21:40:31 <dims> dhellmann: there was an aborted attempt in february to switch to docker-py documented in the url above
21:41:00 <dhellmann> devananda: I may be misrepresenting sdague, but AIUI, he also wants that list to be an actual list of dependencies for openstack
21:41:09 <dhellmann> of course that may change under a big-tent model
21:41:38 <dims> dhellmann: i've also documented a proposal to avoid adding to g-r using a flag sdague introduced
21:41:47 <dims> in update.py
21:42:06 <dhellmann> ok
21:42:34 <dims> lines 43-49
21:42:36 <dhellmann> I'm a little lost because some of those comments seem unrelated and I'm only just now seeing this issue.
21:43:06 <dims> apologies
21:43:31 <dhellmann> where should I start reading to catch up? is there a ML thread, or bug or something?
21:43:45 <dims> dhellmann: https://etherpad.openstack.org/p/managing-reqs-for-projects-to-be-integrated
21:43:54 <asalkeld> can we go off line on this?
21:44:18 * dims nods
21:44:27 <dhellmann> yeah, I think the issue there is the dsvm jobs should not be failing on extra requirements, but let's talk about it on the ML
21:44:45 <dims> dhellmann: ack, will get the ball rolling
21:44:59 <ttx> ok, so the takeaway here is that they shouldn't need the global-requirements update ?
21:45:03 <mtreinish> dhellmann: I think it should only fail if the project is tracked in g-r
21:45:24 <mtreinish> at least that was my understanding
21:45:32 <asalkeld> yes
21:45:53 <dhellmann> mtreinish: yeah, so it may be a configuration issue then
21:45:55 <dims> my only "requirement" is project be allowed to move forward until it graduates :)
21:45:57 <dhellmann> for the job
21:46:07 * dhellmann snorts at dims' pun
21:46:17 <dims> :)
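The mechanism mtreinish and dhellmann converge on above -- requirements jobs should only enforce global requirements for repos tracked in the openstack/requirements projects.txt file -- amounts to a simple membership check. A simplified sketch of that check; the real update.py does considerably more (syncing, flags), and the parsing here is an assumption, not its actual logic:

```python
def is_requirements_gated(repo, projects_txt):
    """Return True if `repo` appears in a projects.txt-style file body.

    Simplified model: only repos listed in projects.txt should have
    the requirements gate enforce the global-requirements list;
    unlisted (e.g. stackforge) repos manage their own deps.
    """
    entries = set()
    for line in projects_txt.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        entries.add(line)
    return repo in entries


sample = """
# projects gated on global requirements
openstack/nova
openstack/cinder
"""

print(is_requirements_gated("openstack/nova", sample))          # True
print(is_requirements_gated("stackforge/nova-docker", sample))  # False
```

Under this model, nova-docker's dsvm jobs failing on an extra requirement would indeed be a job configuration issue, as dhellmann suggests, rather than a reason to add docker-py to the global list.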
21:46:25 <ttx> #topic Open discussion
21:46:28 <ttx> Anything else, anyone ?
21:46:55 <asalkeld> ttx do we have a nice big whiteboard in Paris?
21:47:04 <asalkeld> for the dev sessions
21:47:09 <notmyname> o/
21:47:18 <asalkeld> and esp. the friday session
21:47:28 <eglynn> yeah good point
21:47:41 <ekarlso> a/j openstack-meeting-3
21:47:50 * eglynn hates little flip charts
21:47:51 <notmyname> ttx: I'm not particularly happy with how http://lists.openstack.org/pipermail/openstack/2014-October/010059.html was handled. who do I talk to about it?
21:49:07 <ttx> asalkeld: the whiteboards are not "big" but there are several of them
21:49:17 <asalkeld> ok thanks ttx
21:49:24 <asalkeld> that's something
21:49:26 <ttx> eglynn: the Flip charts double as whiteboards
21:49:32 <ttx> notmyname: reading
21:49:43 * nikhil_k finds Glance in the email topic
21:50:08 <ttx> notmyname: that would be the OSSG group. They are having a PTL election this week
21:50:10 <notmyname> ttx: probably not enough time to read/digest in this meeting. but I'd like to address it
21:50:22 <ttx> notmyname: so whoever wins shall get a nice email from you
21:50:27 <dhellmann> notmyname: was the swift team not involved or something? what's the background?
21:50:33 <ttx> you can cc me. I don't technically oversee that group though
21:51:01 <fungi> notmyname: alternatively you can follow up on the openstack-security ml
21:51:30 <fungi> notmyname: i gather the people who drafted and approved that text all gather there
21:51:41 <notmyname> ok, the -security ML seems like a good starting place (rather than a response to the general ML for now)
21:52:30 <notmyname> dhellmann: only a little. and mostly I don't think the right issue was addressed
21:52:50 * dhellmann nods
21:53:21 <ttx> anything else before we close ?
21:53:25 <notmyname> that, and that a resolution of "just use ceph instead of swift" was given as official openstack recommendation is annoying
21:53:51 * ttx reads again
21:54:16 <dhellmann> that was one of several options, right? and it seemed like the most heavy-weight
21:54:23 <ttx> That was indeed tactful
21:54:45 <ttx> dhellmann: "Implementing an alternative back end (such as Ceph) will also remove the issue" feels a bit loaded
21:54:57 <dhellmann> yeah
21:55:17 <ttx> notmyname: yes, openstack-security sounds like the right avenue to discuss that
21:55:27 <notmyname> especially since I think the issue is simply the different definition of "public" between glance and swift. not a security issue
21:56:01 <notmyname> thanks. I'll follow up on the -security ML
21:56:11 <ttx> ok then, let's close this
21:56:15 <ttx> #endmeeting