15:00:08 #startmeeting ceilometer
15:00:09 Meeting started Thu Oct 9 15:00:08 2014 UTC and is due to finish in 60 minutes. The chair is eglynn. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:11 o/
15:00:14 The meeting name has been set to 'ceilometer'
15:00:19 o/
15:00:22 o/
15:00:34 o/
15:00:36 o/
15:01:02 hey y'all
15:01:10 #topic juno close-out
15:01:22 as discussed last week we ended up needing an RC2
15:01:31 #link https://bugs.launchpad.net/ceilometer/+milestone/juno-rc2
15:01:47 also a python-ceiloclient 1.0.12 release to pick up the fix for that token-expiry issue in the alarm evaluator
15:02:05 ... so getting down to the wire now for juno final
15:02:22 eglynn, cool, that'll solve lots of possible problems with the alarm evaluator :)
15:02:32 o/
15:02:40 so does anyone have any remaining concerns?
15:02:54 in the sense of stuff that might warrant an RC3?
15:03:18 ... the window for that is rapidly closing
15:03:57 ... so if there is anything we need to fix in juno, we'd need to be raising it by EoW, or early next week at the latest
15:04:34 * eglynn hopes that no news is good news on that score :)
15:05:49 slightly related to Juno, the TC didn't meet this week ... so I don't know if the gap analysis coverage outcome will be formally reviewed
15:06:07 or whether the wiki summary will suffice
15:06:13 eglynn, ok, do you know when it'll be done?
15:06:15 a-ha
15:06:15 ok
15:06:30 #topic TSDaaS/gnocchi status
15:06:49 jd__: the floor is yours
15:08:27 eglynn, it looks like jd__ does not want to communicate with us :)
15:08:30 *crickets*
15:08:38 TL;DR: patches in-flight for exposing human-friendly timespans in the archive_policy API
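(As context for the archive_policy patches mentioned above: a minimal sketch, assuming "human-friendly timespans" means strings like "1 day" being parsed into seconds; the function name and accepted units here are illustrative assumptions, not the actual gnocchi change.)

    import re

    # seconds per supported unit; the unit list is an assumption
    _UNITS = {'second': 1, 'minute': 60, 'hour': 3600, 'day': 86400}

    def parse_timespan(value):
        """Convert a span like '2 hours' or '1 day' into seconds."""
        match = re.match(r'^\s*(\d+)\s*(second|minute|hour|day)s?\s*$', value)
        if not match:
            raise ValueError('unrecognized timespan: %r' % value)
        count, unit = match.groups()
        return int(count) * _UNITS[unit]

    # e.g. parse_timespan('1 day') -> 86400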
15:09:03 also sileht continues hacking on the long-running dispatcher review
15:09:16 also some news: ityaptin is working on gnocchi performance testing (lab fighting in progress)
15:09:17 cross-entity aggregation support is also in progress
15:09:28 that one is a bit blocked
15:09:34 due to a bug in webob
15:09:42 "lab fighting"; it's a shame that's a thing
15:09:43 DinaBelova: is Ilya using the WIP dispatcher from https://review.openstack.org/98798 ?
15:09:49 eglynn, yeah
15:09:55 eglynn: yeah
15:10:01 to have a kind of realistic env :)
15:10:38 eglynn, we hope to grab some results next week
15:10:41 a-ha, are we waiting on a fix from the webob folks?
15:10:57 https://github.com/Pylons/webob/issues/164
15:11:07 eglynn, for both Swift and some TSD (my WIP change or yours)
15:11:25 cool
15:11:26 eglynn, probably mine AND yours :)
15:11:34 cooler :)
15:11:36 :D
15:12:28 sileht: could we work around that webob issue by making the py3 tests non-voting?
15:12:42 eglynn, also we've started rc2 testing by vrovachev
15:12:45 (temporarily, pending a webob fix)
15:12:56 DinaBelova: performance profiling again?
15:12:59 eglynn, I'm thinking of changing ^ to + for testing only
15:13:03 that's not the gnocchi thing, but I forgot to say this on the previous topic
15:13:11 nope, just full feature testing
15:13:14 cool
15:13:21 all meters for all services
15:13:29 currently it seems to look nice
15:13:36 excellent
15:13:39 :)
15:13:51 sileht: yeah that could be pragmatic
15:14:52 DinaBelova, jd_: we also have some gnocchi slides to write for the conference track in Paris
15:15:03 maybe chat about that next week?
15:15:10 eglynn, that's why ityaptin is working so hard now :D
15:15:14 eglynn, +1
15:15:19 eglynn, also I'm writing some explanation of the limitations of the current dispatcher implementation
15:15:30 coolness :)
15:15:40 eglynn, we'll get the first performance results by this time as well - will be easier to decide what to talk about
15:15:51 yeap, that makes sense
15:15:56 and what not to talk about :)
15:15:58 ;)
15:16:41 anything else on gnocchi, or shall we move on to our other favourite weekly topic?
15:17:13 #topic Tempest status
15:17:36 +2s in progress :)
15:17:41 for unskipping :)
15:17:50 eglynn, thanks for adding more reviewers there
15:17:51 :)
15:18:20 tempest, most temptation :)
15:18:29 llu-laptop, for sure
15:18:30 :)
15:18:49 (c) you're my dark temptation (c)
15:18:51 sorry
15:18:53 yeap, good news on the nova notification test unskip, thx to cdent for busting out his gentle thumb-screw ;)
15:19:03 cdent, /me bowing
15:19:23 if Sean wasn't away it would probably already be through the gate :)
15:19:35 if so, my dream of all Vadim's changes being merged may come true
15:19:46 cdent, :)
15:19:48 if you guys are that anxious for it, I'll just fast-approve it :)
15:20:00 mtreinish: well that would be nice, thank you sir!
15:20:09 mtreinish, you can't believe how much I wish :)
15:20:13 to see that :)
15:20:14 mtreinish: it's almost (but not really) more fun to speculate on the magic sauce
15:20:38 mtreinish, thanks!
15:20:45 \o/
15:21:28 * cdent notes: "whatever we just did, more of that"
15:21:36 LOL :)
15:21:43 no worries, I just hope it doesn't go flakey again...
15:21:55 mtreinish, me as well :)
15:22:21 if it does, please point me at it, I'm getting pretty adept at untangling these things
15:22:39 cdent: excellent :)
15:23:06 ok, I guess that's the main news on tempest
15:23:12 #topic Win The Enterprise monitoring WG update
15:23:22 just a quick note that I've been in on the weekly WTE calls
15:23:41 #link https://etherpad.openstack.org/p/WTE_Monitoring_Team
15:23:57 TL;DR: based on operator feedback, they want to surface compute/blockstore capacity headroom
15:24:21 I've been looking into how this might be emitted as notifications from the nova & cinder schedulers
15:24:28 (for ceilo to consume)
15:24:50 I've proposed a couple of design session topics on their tracks
15:25:19 (... so we'll see what comes of that, if anything, given that all the tracks are constrained this time round)
15:25:20 eglynn, cool, it'll be interesting
15:25:43 TBH I don't know yet if this "headroom" concept makes any sense for neutron
15:26:07 probably we need some neutron expert here :-\
15:26:25 yeah /me needs to dig a bit more into the neutron plugin contract, and the upward reporting (if any)
15:26:49 see if there's any analogue to the nova resource tracker for example
15:27:11 eglynn: is this for capacity planning or more alarming in case of reaching the limit of resources?
15:27:14 anyhoo that's research WIP, just wanted to give a heads-up on where it's headed
15:28:03 fabiog: the idea is more to provide an answer to the question "how much space do I have left in my datacenter to fit new resources?"
15:28:35 eglynn: got it, thanks
15:28:42 fabiog: e.g. how many new instances of various flavors, how many gigs of block storage etc.
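(To make the "headroom" idea above concrete: one hypothetical shape for a scheduler notification that ceilometer could consume; the event_type and every payload field below are illustrative assumptions, not an agreed format.)

    # hypothetical capacity-headroom notification a scheduler might emit
    # for ceilometer to consume; all names here are assumptions
    headroom_notification = {
        'event_type': 'scheduler.capacity.headroom',
        'payload': {
            'host': 'compute-01',
            'vcpus_free': 12,
            'memory_mb_free': 24576,
            'disk_gb_free': 400,
            # e.g. how many more instances of a given flavor would still fit
            'instances_fit': {'m1.small': 12, 'm1.large': 2},
        },
    }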
15:28:43 * DinaBelova is crying remembering blazar activity...
15:29:42 DinaBelova: interesting, I hadn't thought of that angle
15:29:46 * eglynn peeks at https://wiki.openstack.org/wiki/Blazar
15:30:10 eglynn, heh, Blazar (ex. Climate) was my little child before Ceilo :)
15:30:30 so that's all about pre-claiming resources in advance?
15:30:50 eglynn, yes
15:30:56 but it's a dead project now...
15:31:02 DinaBelova: blazar == previously known as climate?
15:31:07 llu-laptop yep
15:31:27 DinaBelova: still, interesting to have that perspective, I'll re-read the wiki ... thanks!
15:31:32 eglynn, np :)
15:31:46 DinaBelova: yes that is capacity planning in the traditional sense, I guess they really want a dynamic evaluation to decide when to allocate new workloads
15:32:07 fabiog, probably yeah
15:32:10 that's the difference
15:32:38 fabiog: yeah, when the capacity topic was first brought up I thought they'd be more interested in extrapolating/forecasting future usage trends
15:33:12 so this is workload mgmt vs. capacity planning? interesting...
15:33:37 eglynn: I think it's a matter of speed, in the old days they had months to decide, now they have hrs :-)
15:33:46 nealph_, last time we had lots of questionable moments about how to integrate all this into the OS ecosystem
15:34:01 eglynn: so they need a snapshot of the datacenter so they can make an informed decision
15:34:11 fabiog: yep, exactly
15:34:19 fabiog, yeah
15:34:23 * nealph_ gets it.
15:34:26 eglynn: it's a really interesting topic
15:34:42 fabiog, yeah, still :)
15:34:46 a hot one
15:35:07 fabiog: yeap, though unfortunately the required info isn't directly surfaced by many of the services as yet
15:35:52 well, anyway, I'd love to see this thing developing :)
15:35:52 eglynn: actually I am not so sure the services really know how much they've chewed up already
15:36:02 it means that my feeling it's interesting was not wrong
15:36:04 this is one of those areas where I think the services should be the ones thinking hard about what they surface; ceilometer should just "hear" well.
15:36:40 eglynn: I see a reference to "admin read only" in that etherpad as well... anticipate ceilo impacts there?
15:36:51 or is that another topic entirely...
15:36:53 fabiog: the nova scheduler knows from the resource tracker reports
15:37:01 fabiog: ... though apparently the info provided to the cinder scheduler by some drivers is very questionable
15:37:23 fabiog: ... e.g. a continual report of infinite capacity available
15:37:26 nealph: I think we can solve that using the RBAC approach
15:37:50 eglynn: right
15:39:02 nealph_: I'm not directly involved in that read-only notion
15:39:32 eglynn: fair enough. :)
15:39:33 nealph_: ... but the proposal was to add such an out-of-the-box read-only-admin role to keystone, and extend the RBAC config for the services to include a read-only rule
15:40:19 nealph_: proposed to the cross-project track
15:40:26 nealph_: ... line 71 here: https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
15:40:40 (not much detail there TBH)
15:40:44 eglynn: cool, will chase it down offline.
15:40:56 eglynn, nealph_: the current roles in OpenStack are too coarse, you either have admin and can do everything or you are confined to the project as a regular user
15:41:21 I mean the ones available out of the box
15:41:50 fabiog: yeah, either king-of-all-you-survey, or a mere peasant
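(A rough sketch of the read-only-admin idea discussed above: service policy rules shown as a Python dict of rule strings; the role name and the action names are assumptions drawn from the proposal, not a merged implementation.)

    # sketch of a read-only-admin role wired into a service's policy
    # rules; the 'readonly_admin' role and the action names are hypothetical
    policy_rules = {
        'context_is_admin': 'role:admin',
        'context_is_readonly_admin': 'role:readonly_admin',
        # read operations would allow either role ...
        'telemetry:get_samples': 'rule:context_is_admin or '
                                 'rule:context_is_readonly_admin',
        # ... while mutating operations would stay admin-only
        'telemetry:create_alarm': 'rule:context_is_admin',
    }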
15:42:43 ok, guess we can move on
15:42:50 #topic open discussion
15:43:02 may I start here? :)
15:43:22 I wanted to point out one topic you guys might be interested in
15:43:23 https://review.openstack.org/#/q/status:open+branch:master+topic:merge-compute-and-central-agents,n,z
15:43:38 here are some things to be done to merge our central and compute agents
15:43:56 so I'd love to collect your feedback on this POC
15:44:22 DinaBelova: cool, I'll add that to my review list, thanks!
15:44:31 eglynn, thank you sir!
15:44:42 of course, I'd like to have some pre-approval
15:44:45 to create a BP
15:44:48 DinaBelova: I will have a look.
15:44:52 fabiog, thanks!
15:45:08 then I'll create a BP and add it everywhere
15:45:13 to all these commits
15:45:25 DinaBelova, I would like in general to move away from polling from Nova
15:45:47 fabiog, that'll be kind of the next step I believe
15:45:59 I don't like this polling as well...
15:46:00 DinaBelova, because they already complained that we are significantly impacting their performance when we do that
15:46:10 fabiog, I can imagine
15:46:46 it should be the responsibility of the service to create their own notifier or poller, not ceilometer's...
15:46:48 * cdent is a broken record
15:46:49 DinaBelova, so this seems to me to consolidate the polling strategy, but it doesn't remove the problem at the root
15:47:03 fabiog, for now - yep
15:47:22 but currently even the compute agent code seems to have no sense
15:47:28 after nsaje's commits
15:47:44 that make the agents' code more unified
15:48:02 DinaBelova: my first thought on merging the central/compute agents is that it may require we introduce some 'disable-self'
15:48:18 DinaBelova, I think you will be able to run a Central Agent with only a pipeline for Nova on the nova nodes, is this the way you are achieving a similar result to the current behaviour?
15:48:37 fabiog, yeah, I'm operating just pipelines
15:48:38 mechanism in the pollster, in case we have compute-only pollsters enabled on a non-hypervisor machine
15:48:41 to define what to poll
15:48:50 by this ceilometer-polling-agent
15:48:56 yep, seems to be two orthogonal issues here ...
15:49:02 1. remove needless duplication/specialization between agents
15:49:05 2. rationalize polling load on the nova-api etc.
15:49:10 eglynn, yes, and that's why I'm asking your help here :)
15:49:28 not only eglynn, but all of you
15:49:29 :)
15:49:35 DinaBelova: so you're currently concentrating exclusively on issue #1 amiright?
15:49:41 yes
15:49:44 #1 is a great idea
15:50:04 I'll propose to solve #2 in the next changes
15:50:21 to make these steps simpler
15:50:35 they are refactoring... mostly... lots of refactoring
15:50:41 :)
15:50:43 cool, sounds like we should all put our 2 cents in on gerrit
15:50:49 ;)
15:50:52 thanks!
15:51:08 we could also possibly discuss f2f at the contributor meeting in Paris?
15:51:19 speaking of which ...
15:51:28 eglynn, yeah, it'll be cool, but I'd love to start this activity now
15:51:37 cool enough
15:51:41 as some first steps seem to be simple
15:51:46 eglynn: we have a fix for the Dispatcher to retry the db connection once it is lost or late, can you guys please have a look at: https://review.openstack.org/127128
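(The general shape of the retry idea behind that dispatcher fix - a minimal sketch, not the code actually under review in 127128: retry the recording call a bounded number of times, sleeping between attempts, and re-raise if the connection never comes back.)

    import time

    def record_with_retry(record_fn, sample, retries=3, interval=10):
        """Call record_fn(sample), retrying on lost db connections."""
        for attempt in range(retries):
            try:
                return record_fn(sample)
            except ConnectionError:  # stand-in for the driver's real error type
                if attempt == retries - 1:
                    raise  # give up after the final attempt
                time.sleep(interval)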
15:52:05 BTW last week we said we'd punt deciding the design summit topics to next week's meeting
15:52:24 everyone still good with that?
15:52:41 eglynn, works for me
15:52:51 neutron
15:52:52 eglynn, I am good with that
15:53:42 eglynn, are you going to have every proposer discuss their proposal in IRC?
15:54:00 or do you need some prep work
15:54:21 fabiog, eglynn - probably some small meeting is needed?
15:54:25 if there'll be questions
15:54:45 fabiog: yep, I was thinking that each proposer would give a short pitch, handle quick Q&A if necessary
15:54:58 or yeah, something like that ^^
15:55:33 eglynn: an elevator pitch is good
15:55:43 then folks could go away and digest what they've heard
15:55:54 ... before registering their preferences later on the spreadsheet
15:56:06 ... so that we can gauge interest in the various topics
15:56:15 sound like a reasonable approach?
15:56:47 eglynn: it does. But I believe we may need 2 meetings considering the number of proposals
15:56:55 eglynn, +1
15:57:11 fabiog: or a time limit. :)
15:57:27 fabiog: yeah, you could be right
15:57:34 eglynn, or leave the Q&A for the second meeting
15:57:35 fabiog: ... the schedule needs to be finalized at least a week before the summit
15:57:57 fabiog: ... so maybe we'll need to organize an extra meeting soon after if we overrun
15:58:18 e.g. the Friday or the following Monday
15:58:32 eglynn, cool, but we have the 16th and 23rd
15:58:38 could we just extend next week's meeting's overflow into the ceilometer channel?
15:58:56 llu-laptop: yeah, that would work for me
15:59:02 +1
15:59:09 +1
15:59:52 as per usual, we're up against the shotclock
16:00:00 time
16:00:06 yeah, see you folks!
16:00:23 bye
16:00:23 yep, let's call it a wrap ... thanks folks for your time!
16:00:31 #endmeeting ceilometer