15:00:17 #startmeeting monasca
15:00:17 Meeting started Wed Nov 25 15:00:17 2015 UTC and is due to finish in 60 minutes. The chair is rhochmuth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:22 The meeting name has been set to 'monasca'
15:00:22 o/
15:00:27 bye
15:00:33 o/
15:00:35 thanks all
15:00:43 o/
15:00:48 hello
15:00:53 o/
15:00:54 hi
15:01:17 hi everyone
15:01:24 happy turkey day
15:01:36 i'm supposed to be off this week, but i failed in that endeavor again
15:01:44 rats
15:02:15 so, i haven't really been responding to emails, and haven't been doing reviews
15:02:23 or writing code
15:02:33 but i'm here
15:02:46 good to read you again :)
15:02:55 so, we might as well get started
15:03:13 There is a review at https://review.openstack.org/244483
15:03:36 that's mine
15:03:47 i started to take a look at this last week
15:03:51 it looks fine to me
15:03:59 nice
15:04:02 the only reason i haven't merged is testing
15:04:12 just trying to verify all is good
15:04:19 i won't get to it this week
15:04:23 ok
15:04:44 i asked for some help from my team, but obviously no one else looked at it either
15:04:47 so, sorry
15:05:02 the bottleneck is just verification
15:05:09 that everything still works
15:05:36 i wanted to push log management into monasca-vagrant
15:05:44 and it blocks a little
15:05:55 unless someone else looks at it, then we'll just have to wait a little longer
15:06:26 so the log-api in monasca-vagrant would be another area
15:06:43 so, are you blocked on https://review.openstack.org/244483
15:07:01 before you want to proceed with adding the logging support?
15:07:11 yes, but I want to sync our ansible roles
15:07:32 the only other option is to do the merges and hope for the best
15:08:00 i would prefer not doing that
15:08:12 it's fine
15:08:31 so, there are changes to three ansible roles and then the monasca-vagrant changes
15:08:48 that's right
15:08:56 are you ok waiting a little longer
15:09:15 how little is little? :)
15:09:26 epsilon
15:09:34 :)
15:09:39 it's ok
15:09:52 i'll see what i can get done in the background
15:10:04 but early next week is what i would target
15:10:10 great
15:10:35 Then there is https://review.openstack.org/#/c/241626/
15:10:52 that's me
15:11:00 it's beautiful, no?
15:11:04 so, it all looks ready to go
15:11:08 i think it is the same issue
15:11:12 test/verification
15:11:30 yeah, it tests well on my side, but would like others to confirm
15:12:04 i think it will be a similar situation
15:12:14 waiting for more folks to look at it
15:12:15 the good news is, if the new code is broken (don't think it is), it only takes effect if you pass the new params
15:12:33 yes, seems like a low-risk change
15:12:47 as in, it adds new capabilities but doesn't impact existing functionality
15:13:02 yeah
15:13:09 so, i'll probably get to this on monday/tuesday too
15:13:13 good
15:13:23 hoping someone else looks at this too
15:13:37 any volunteers?
15:14:09 for the influxdb side of the house, java/python, devstack works well to test
15:14:18 vertica is a bit more complicated
15:14:51 crickets/tumbleweeds
15:14:58 ok, next topic
15:14:59 Any update on the cache fix?
15:15:03 i'd look at this, but I am pretty much blocked by other stuff, which is my current job exclusively; I reviewed your change by looking at the code only
15:15:19 thanks tomasz
15:15:20 sure, thx tomasztrebski
15:15:20 no chance to do anything more for any other change
15:15:36 understood, we're all in that boat
15:15:59 hopefully I'll see light at the end of the tunnel and get time to finally do my review part properly
15:16:21 So, I don't have any updates on the cache fix
15:16:27 I started writing code last week
15:16:41 that's progress
15:16:48 but then was basically inundated with impromptu meetings for 3 days that pretty much killed me
15:17:13 i'm a little concerned now with the remainder of December
15:17:20 i didn't take any time off all year
15:17:22 yeah, we're doing a datacenter migration, taking 95% of my time instead of wonderful monasca work
15:17:27 and now i'm trying to catch up
15:17:41 better keep that wife happy
15:17:45 so, my development time is low
15:18:02 right now
15:18:29 anyway, i'll probably get to this next week too, but i'm not expecting to complete it next week
15:18:40 ok, thx for setting expectations
15:19:08 Update for pymysql
15:19:21 @topic Update for pymysql
15:19:26 #topic Update for pymysql
15:19:32 we are testing the replacement
15:19:39 awesome
15:19:44 and it seems to be straightforward
15:19:58 is it a drop-in replacement?
15:20:03 yes
15:20:14 implementing it was quite fast; our colleague needed, what, 2 days, I guess
15:20:46 along with some testing he's been doing in parallel with mysql
15:20:57 so, we did all this work a few months ago for postgres support and hibernate
15:21:26 are we going to need to do something similar for the monasca-api
15:21:37 we would like to
15:21:59 that's another brick actually, completely separate... we agreed to provide pymysql as a replacement, so that's the first task to complete
15:22:11 doing hibernate-like stuff and postgres would be the next step
15:22:11 i see
15:22:32 so, possibly adding support for sqlalchemy in the python monasca-api
15:23:03 that's what we have in mind
15:23:13 i think everything else is covered
15:23:22 python monasca-notification was already converted/added
15:23:30 monasca-persister doesn't use mysql
15:23:40 all the java code was converted
15:23:43 monasca-common
15:24:05 oh yeah
15:24:06 that too
15:24:46 so i don't see any problems or objections to pymysql
15:25:05 cool, we'll push the change to review
15:25:23 ok, thanks
15:25:44 #topic How does tempest know URL of monasca api ?
15:25:45 we also started working on sqlalchemy
15:26:10 thanks witek
15:26:21 i changed topics a bit too soon
15:26:50 Not sure who asked the question about the Tempest tests
15:26:54 i'm finished :)
15:27:06 yeah, so that's a question from me... basically I am trying to port the log-api tempest tests into the project, and apart from the normal issues with a first try of a new framework
15:27:40 i am a little bit puzzled: where is the information about where to find the monasca-api server written?
15:27:48 so, the monasca-api registers with keystone
15:28:22 there is a file called ./etc/tempest.conf
15:28:37 that has the endpoint information and credentials
15:28:42 for keystone
15:28:59 yes, I am looking at it right now
15:29:17 Have you seen the directions at https://github.com/openstack/monasca-api/tree/master/monasca_tempest_tests
15:29:23 and there is a services-available configuration property in the file config.py, isn't there?
15:29:43 hmm, i don't know anything about config.py
15:29:44 yeah, I am basically trying to follow your setup, because it works
15:30:17 https://github.com/openstack/monasca-api/blob/master/monasca_tempest_tests/config.py#L21
15:30:22 I am talking about that
15:31:02 that looks familiar now, i think i wrote that
15:31:32 so, for the log api there would be a similar skeleton
15:31:54 one possibility is to create monasca_log_tempest tests in the monasca-log-api repo
15:32:06 and then copy/paste the code and modify
15:32:42 ok, that clears things up a bit; guess I need to understand that to make it work, but I will follow your suggestion, which actually seems so good that I am astonished I didn't think of it before
15:32:48 :/
15:33:14 ok, in case of any problems I will probably ask over mail or something like that
15:33:23 I don't want to spend too much time on this topic
15:33:26 well, i think it makes sense to have the log api with its own tempest tests
15:33:26 thanks for your help
15:33:45 the other option is to add them to monasca-api directly
15:33:47 it makes perfect sense; that's why I am trying to embrace that stuff ;)
15:34:10 but, then we start mixing things together
15:34:20 but that does not seem right; after all, it was decided some time ago to keep those APIs separate
15:34:25 and for me it was a good decision
15:34:30 ok, well, let me know if you run into any problems
15:34:34 i'm not an expert
15:34:40 I hope not, but thx ;)
15:34:49 but i did do the original work in that area so might know a little more
15:35:19 i heavily modeled on manila as they seemed to have a really good model
15:35:58 ok, next topic
15:36:04 #topic Summit videos + slides on Wiki page
15:36:11 that's mine
15:36:20 Sounds like a great idea
15:36:27 someone was asking me about that yesterday
15:36:32 I was just wondering if we should add the video links and pdfs to the wiki page
15:36:45 yes, i agree
15:36:45 do I have permissions to do that?
15:36:50 you should
15:36:59 ok, so I'll put my stuff online
15:37:00 i don't know what the permissions on wiki pages are
15:37:13 i think anyone can modify
15:37:13 I'll try. If I run into problems I'll ask by email
15:37:22 ok, thanks
15:37:44 Fabio is not here today, right? So I'll ping him by email so he also uploads his presentation
15:38:13 correct, no fabio
15:38:22 that's all from me
15:38:31 ok
15:38:36 #topic https://review.openstack.org/#/c/226733/
15:39:29 tomasz i'm guessing you want a +2
15:39:40 i see deklan +2'd yesterday
15:39:44 yeah, me again... I hope that it is finished now and has a higher chance of acceptance
15:40:28 i'll take a quick look and +2 unless i see anything
15:40:35 i'm assuming deklan tested well
15:40:50 well, I'd love that +2, however to be fair, I don't know what to think about the new gate for tempest tests; that's another reason why I wanted to bring this topic up
15:41:06 ok
15:41:10 I assume that gate is experimental, so a failure there should not be a reason to worry?
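[For reference, the tempest discussion above centers on ./etc/tempest.conf, which tells the tests how to reach keystone and which services are enabled. The fragment below is purely illustrative; the exact section and option names come from tempest and from the options registered in monasca_tempest_tests/config.py, and all values here are invented.]

```ini
[auth]
# Credentials tempest uses to authenticate against keystone
# (illustrative values, not real credentials)
admin_username = admin
admin_password = secretadmin
admin_project_name = admin

[identity]
# The keystone endpoint; the monasca-api URL itself is then
# discovered from the keystone service catalog, as noted above
uri = http://127.0.0.1/identity

[service_available]
# A service-available flag along the lines of the one registered
# in config.py; the actual option name may differ
monitoring = True
```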
15:41:32 uhhh, the gates are marked as experimental
15:41:44 but, we are closing in on 100% passing
15:41:56 right now, the last i checked, there were only 5 tests failing
15:41:59 as of last night
15:42:04 this is against the python api
15:42:10 the java api should be completely passing
15:42:20 ok, so please review that and in case of unclear parts just leave a comment
15:42:33 let's hope it does
15:42:40 ok
15:43:08 ahhh, yes i see the failure at the bottom of the review
15:43:14 everyone is getting that right now
15:43:17 so, that isn't a problem
15:43:35 unless all of a sudden something was failing that was passing previously
15:43:43 it would be difficult for you to know that right now
15:43:54 pretty soon, we should be at 100%
15:44:29 then we will change from experimental to check, but not voting i believe
15:44:34 then it will be clearer
15:44:40 ok, I will prepare some fireworks
15:44:41 :)
15:44:45 so, no cause to panic
15:45:20 ok, moving on
15:45:24 #topic Quota status
15:45:33 That's me again. I remember we had some discussions about quotas in Monasca some weeks ago. It was brought up by bklei_ . I was just curious if there have been any actions around this topic during the last weeks
15:45:54 There haven't been any actions
15:46:14 We should possibly get a blueprint to work on this
15:46:51 i'd love to see that topic move fwd
15:47:01 ok thanks. It may be that this will become a topic at Fujitsu as well in the next few months
15:47:16 Yes, it is important for doing true monitoring as a service
15:47:26 on public cloud endpoints
15:47:28 if you start a blueprint mroderus, i'll help add my thoughts
15:47:47 right. I think that's a fundamental requirement for a monitoring cloud service
15:48:06 from our perspective, one quota we'd like to see is data retention period (per project)
15:48:09 ok bklei_ . I'll ping you as soon as we start working on this
15:48:16 great
15:48:45 we might need to get started on that soon too
15:48:53 bklei_: so you mean a maximum time the data is stored, right?
15:48:58 we had some requests around this recently
15:49:07 right -- per project is what we need
15:49:13 (keystone project/tenant)
15:49:30 have you also discussed volume-based quotas?
15:49:44 even if the API just tracked what it is, we could just consume that and do what we will with it for starters
15:49:49 such as number of metrics or megabytes
15:50:00 i'm just thinking time
15:50:13 we'll probably default to 6 weeks or something
15:50:23 could be fancier, but that'd get us started
15:50:34 we also need quotas on number of alarms
15:50:48 yeah, that could spiral
15:50:49 rhochmuth: right
15:51:08 and in addition to time/retention period on metrics, probably the number of metrics too
15:51:45 number of notification methods, …
15:51:51 I'm just worrying that time/project may not be enough. A project may have an infinite number of agents sending at an arbitrarily fine resolution
15:51:54 (theoretically)
15:51:58 custom metrics i'd assume, ignoring libvirt?
15:52:28 we already have quota mgmt for # of instances
15:52:36 ok
15:52:50 is that already in Monasca?
15:52:56 it's in nova
15:53:04 nova quota-show
15:53:10 or something like that
15:53:58 i'm just saying, since openstack already has a mechanism for capping # of instances, we shouldn't cap the default libvirt metrics
15:54:06 ah, ok.. so that quota is for the provisioned VMs. But apart from this, a user can additionally install agents and post metrics to the API. Is that considered as well?
15:54:36 exactly; if we add quotas for # of metrics, it should likely be the 'custom' metrics they POST, not the default metrics
15:54:59 right, makes sense
15:55:22 thx for working on this mroderus
15:55:38 bklei_: thanks back to you
15:55:57 i just wanted to point out that we have a blueprint
15:55:59 https://blueprints.launchpad.net/monasca/+spec/alarm-count-resource
15:56:06 https://wiki.openstack.org/wiki/Monasca/UI_UX_Support#Alarm_Counts_Resource
15:56:16 cool
15:56:18 we had a number of requests from our UI team
15:56:35 cool
15:56:41 to do server-side filtering, sorting/ordering, and return summaries for alarms
15:56:54 this blueprint is for the summary of counts of alarms
15:56:56 nice, would like to see that ui
15:57:11 the ui is in helion
15:57:17 called opsconsole
15:57:20 i know :)
15:57:42 rbak is our UI team :)
15:57:56 thanks
15:58:16 the general idea for the alarm counts resource is to return the total alarms, and alarms in the various states of ALARM, ACKNOWLEDGED, …
15:58:26 so, it would be good to take a look at that
15:58:41 we're also going to have blueprints for better ordering/sorting
15:58:58 gonna be a busy 2016
15:59:07 i believe fujitsu had a blueprint for ordering/sorting too, so we might end up adding to yours
15:59:26 anyway, just wanted to point out that we've started on this
15:59:41 uh.. honestly speaking I'm not aware of anything. But that doesn't mean much :)
15:59:42 rbrandt is the engineer working on that
15:59:57 ok, looks like we are done
16:00:04 yeah, that blueprint you're talking about was done some time ago
16:00:05 ;)
16:00:26 need to end the meeting folks
16:00:29 see you next week
16:00:32 ok.. bye!
16:00:35 bye
16:00:42 thanx, bye
16:00:43 bye
16:00:50 #endmeeting
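[For reference, the per-project quota ideas floated in the quota discussion above (a retention period defaulting to roughly 6 weeks, plus caps on custom metrics and alarms, keyed by keystone project) could be sketched as below. This is purely a toy illustration; none of these names or structures exist in Monasca.]

```python
from dataclasses import dataclass

# Hypothetical per-project quota record along the lines discussed:
# a retention period plus caps on custom metrics and alarms.
@dataclass
class ProjectQuota:
    retention_days: int = 42        # ~6 weeks, the default floated above
    max_custom_metrics: int = 10000
    max_alarms: int = 1000

# Quotas keyed by keystone project/tenant id (invented example data).
quotas: dict[str, ProjectQuota] = {}

def quota_for(project_id: str) -> ProjectQuota:
    # Projects without an explicit entry fall back to the defaults,
    # mirroring "could be fancier, but that'd get us started".
    return quotas.get(project_id, ProjectQuota())

quotas["abc123"] = ProjectQuota(retention_days=90)
print(quota_for("abc123").retention_days)   # prints 90
print(quota_for("unknown").retention_days)  # prints 42
```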