13:02:40 #startmeeting monasca
13:02:40 Meeting started Tue May 5 13:02:40 2020 UTC and is due to finish in 60 minutes. The chair is chaconpiza. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:02:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:02:43 :)
13:02:44 The meeting name has been set to 'monasca'
13:02:50 Thanks for the reminder
13:02:59 let's start!
13:03:13 #topic Summary: Investigation: Alarm Definition update
13:03:46 1.1-1.3: Just a summary, as reported last week already. Any questions on this?
13:04:27 more or less clear to me so far
13:04:39 If not, let me continue with 1.4/1.5: Update of deterministic alarm definitions.
13:04:53 We investigated the topic I mentioned last week further.
13:05:20 Results: Alarms get more or less "destroyed" if the alarm definition is updated.
13:05:39 This happens with any update, e.g. an update of the operator.
13:05:54 It's a different bug than the one we already know.
13:06:27 In the database, there's a table sub_alarm. In this table, the expression is stored as well.
13:06:47 After an update of the alarm definition, "deterministic" is no longer part of this expression.
13:07:21 Thus, a deterministic alarm can reach the status "UNDETERMINED".
13:08:02 Any questions related to this?
13:08:11 I would like to add that we have the list of commands to reproduce it in DevStack.
13:09:01 sounds plausible that this is another bug
13:09:08 Yes
13:09:13 do you think the bug is only in the API?
13:10:18 I am not sure yet
13:10:38 I assume that it's a bug in monasca-thresh, but I need to investigate further. However, this will take some time.
13:11:06 I'm currently working on a different, monasca-thresh-related topic (metrics too old in log)
13:11:18 we can create a Story with the tag `bug` and assign it for now to the monasca-api project
13:11:26 and what's the priority? are you impacted more by `update function, period` or by deterministic alarms?
13:11:42 chaconpiza: +1
13:12:08 In our customer environment, they're not using deterministic alarms.
13:12:31 However, they can live with the work-around for `update function, period`.
13:13:04 Thus, from our side, "metrics too old" has the highest priority
13:13:53 do we have a story for the `update function, period` bug?
13:14:55 No, not yet, I wanted to finish the investigation first - that's done now. If everybody agrees with the proposal in 1.6.2, I can create such a story
13:15:14 +1
13:15:28 sounds good to me, that would be validation in the API, right?
13:16:12 Yes, this will impact the API only
13:16:54 So, I will create 2 stories, ok.
13:17:06 +1
13:17:10 That's it from my side
13:17:18 alright, then let's move to the next topic
13:17:35 thanks Matthias!
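For the record, a minimal sketch of the reproduction flow described in this topic, assuming a DevStack setup; the alarm name, expression syntax, placeholder ID, and database name/credentials are illustrative, not the exact commands from the team's list:

```
# 1. Create a deterministic alarm definition (expression syntax illustrative).
monasca alarm-definition-create det-alarm \
    'count(log.error{hostname=devstack},deterministic) > 0'

# 2. Update any part of it, e.g. the operator in the expression
#    (assumes the --expression flag of alarm-definition-patch).
monasca alarm-definition-patch <ALARM_DEFINITION_ID> \
    --expression 'count(log.error{hostname=devstack},deterministic) >= 1'

# 3. Inspect the stored sub-expression: after the update, "deterministic"
#    is reportedly missing, so the alarm can go UNDETERMINED.
#    DB name and credentials are assumptions for a DevStack MySQL.
mysql -u root mon -e "SELECT expression FROM sub_alarm;"
```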
13:17:40 #topic Ussuri release
13:17:40 thanks
13:18:05 we're approaching the release deadlines
13:18:19 the final RC deadline is on Thursday
13:18:47 I've created a new RC for monasca-persister already
13:19:01 including the simplejson requirement
13:19:32 do we know of any other bugs on stable/ussuri which have to be fixed?
13:20:05 I don't recall any
13:20:39 I also think the other components are good
13:21:34 the official final release will be announced next week
13:22:02 is it announced on the mailing list?
13:22:58 I have a small fix
13:22:59 https://review.opendev.org/#/c/724155/
13:23:15 yes, it will be; for now they're sending release countdown emails
13:23:16 it would be nice to have this in Ussuri
13:23:55 http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014577.html
13:24:17 thanks
13:25:15 I added myself as a reviewer for Adrian's fix
13:25:40 thanks
13:25:45 Ok, let's go to the next topic
13:25:56 #topic Bump librdkafka to 1.4.0
13:25:56 adriancz: I think it's not release-critical, so we can review and merge on master and then cherry-pick to stable/ussuri after the official release
13:26:17 We are now using confluent-kafka-python v1.4.1
13:26:36 yes, we can merge this later
13:26:48 for Docker we need to compile the source code
13:27:22 I started an Alpine container to do this fix https://review.opendev.org/#/c/725258/
13:28:01 because it breaks the Zuul check build-monasca-docker-image
13:29:06 Then we found a new broken Zuul check: monasca-tempest-python3-influxdb
13:29:22 looks good to me, the failing tempest tests are not related
13:29:45 I will paste what I wrote one hour ago, for the record.
13:29:52 We have a small issue in monasca-api getting data from influxdb.
13:29:58 Just after stacking: vagrant@devstack:~$ monasca metric-list --name zookeeper.out_bytes
13:30:09 The repository was unable to process your request (HTTP 500) (Request-ID: req-d6106c5d-3bf9-4adc-af93-cd14efddac13)
13:30:09 adriancz and I found the root cause:
13:30:09 The problem is with the new library: influxdb 5.3.0
13:30:09 Downgrading to the previous version:
13:30:09 vagrant@devstack:~$ pip install influxdb==5.2.3
13:30:10 vagrant@devstack:~$ sudo systemctl restart devstack@monasca-api
13:30:10 vagrant@devstack:~$ monasca metric-list --name zookeeper.out_bytes
13:30:21 +---------------------+----------------------+
13:30:22 | name                | dimensions           |
13:30:22 +---------------------+----------------------+
13:30:22 | zookeeper.out_bytes | component: zookeeper |
13:30:22 |                     | hostname: devstack   |
13:30:22 |                     | mode: standalone     |
13:30:22 |                     | service: zookeeper   |
13:30:23 +---------------------+----------------------+
13:30:23 As witek found, influxdb was upgraded in u-c over the weekend
13:30:37 😉
13:31:33 witek also found an issue related to the new version of influxdb that could be the cause
13:31:56 https://github.com/influxdata/influxdb-python/issues/820
13:33:30 what to do now? 1. try to update our code to be compatible with the new influxdb client
13:33:35 Adrian Czarnecki proposed openstack/monasca-api master: Fix incorrect old log-api tempest test configuration https://review.opendev.org/718512
13:33:54 2. send a change to u-c to avoid the new version
13:33:55 ??
13:34:31 I vote for 2
13:35:13 I vote for 1
13:35:51 I would vote for 2, to keep CI working, and in parallel try to make our code compatible with the new version
13:36:13 because there is a high possibility that the new version of the influxdb client is buggy
13:36:42 I think this is the best option
13:36:46 adriancz, is that ok with you?
13:36:54 ok
13:37:01 well, right now we don't really know if that's a bug in the influxdb client or in our code
13:37:12 :D
13:37:47 but +1 for investigating the problem and trying to solve the root cause
13:38:11 and we also don't know what we would need to do to make our code compatible with the new influxdb
13:39:32 well, then it seems we are choosing option 1
13:39:38 +1 for investigation of the root cause
13:40:08 let's follow up next week then
13:40:08 ok, I will continue digging into it
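A condensed sketch of the version check pasted above, assuming the same DevStack host; the commands and the two client versions are taken from the discussion, only the loop is added for convenience:

```
# Bisect the influxdb client version against monasca-api.
for version in 5.2.3 5.3.0; do
    pip install influxdb==${version}
    sudo systemctl restart devstack@monasca-api
    # 5.2.3 returns the metric list; with 5.3.0 monasca-api answers HTTP 500.
    monasca metric-list --name zookeeper.out_bytes
done
```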
13:40:16 alright
13:40:29 let's jump to the final topic
13:40:47 #topic Docker images
13:41:29 oh, that's mine
13:41:57 as we just discussed, building Python Docker images requires compiling from sources
13:42:11 because installing from wheels is disabled in Alpine
13:42:26 the following blog article describes it
13:42:32 https://pythonspeed.com/articles/alpine-docker-python/
13:42:56 it causes problems for some requirements, for example confluent-kafka
13:43:31 also, the build process gets really long, and the resulting image isn't small
13:43:58 the author suggests other base images for Python
13:44:17 which at first glance makes sense to me
13:44:40 just wanted to share
13:44:43 witek, I vaguely recall that there is another option called eggs, is it possible in Alpine?
13:45:20 I don't understand
13:45:28 instead of wheels
13:45:52 https://packaging.python.org/discussions/wheel-vs-egg/
13:45:55 pip in Alpine installs from the source package
13:46:27 I see
13:47:02 at least for confluent-kafka
13:47:44 Choosing a new base image or continuing with Alpine is an important decision. Would you like to add it to the etherpad for the PTG?
13:48:37 yes, I think we can discuss it during the PTG
13:48:46 but it's not my priority
13:48:46 +1 thanks
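A short sketch of the build difference discussed in this topic: musl-based Alpine cannot use prebuilt manylinux wheels, so pip compiles confluent-kafka's C extension from source. Package names are assumptions based on the Alpine repositories; if the packaged librdkafka is older than 1.4.0, librdkafka itself has to be built from source too, which is what the review above addresses:

```
# Alpine: install a toolchain and headers, then compile the extension.
apk add --no-cache build-base python3-dev librdkafka-dev
pip install confluent-kafka==1.4.1   # builds the C extension, no wheel used

# A glibc-based image (e.g. python:3.8-slim, as the linked article suggests)
# can install the prebuilt manylinux wheel instead - fast, no compiler needed:
# pip install confluent-kafka==1.4.1
```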
13:49:35 #topic AOB
13:50:28 is there any other topic you would like to discuss?
13:50:44 I have one, I could have added it to the agenda actually
13:51:22 #topic Fix monasca-log-api CI
13:51:23 we've got two competing approaches to fixing CI in monasca-log-api
13:51:30 https://review.opendev.org/718512
13:51:36 https://review.opendev.org/704536
13:52:03 I chatted with adriancz yesterday and he convinced me of his approach
13:52:11 it seems cleaner
13:52:51 so, the concept is: we've deprecated monasca-log-api and its DevStack plugin in Ussuri
13:53:35 we still maintain the monasca-log-api code but stop maintaining its DevStack plugin from Ussuri onwards
13:54:16 so for testing, we run the new DevStack plugin in monasca-api for changes on monasca-log-api master and stable/ussuri
13:54:56 for stable/train and older branches we test with the old DevStack plugin in monasca-log-api
13:55:22 I know, it's a little complicated, but I think it makes sense
13:56:01 I would vote for Adrian's way, since we already have the infrastructure to proceed, I mean USE_OLD_LOG_API
13:56:37 oh, I've missed one review for that:
13:56:48 https://review.opendev.org/720240
13:56:58 I created a change for that
13:56:59 https://review.opendev.org/#/c/720240/
13:57:06 oh, sorry witek
13:58:22 adriancz: could you please rebase on top of https://review.opendev.org/724658 ?
13:58:45 also, please add a topic to both your changes, so they're easier to find
13:58:49 yes
13:59:25 for https://review.opendev.org/724658 I need the Docker fix (Bump librdkafka to 1.4.0)
13:59:38 and for that, the fix for the new influxdb version
13:59:58 we have a long chain
14:00:13 yes, the influxdb client seems to be prio 1 today
14:00:25 Ok, let's go with Adrian's way
14:00:30 +1
14:00:34 I will dig into influxdb
14:00:42 we ran out of time
14:00:57 thanks, good meeting
14:01:02 Thanks for your ideas
14:01:08 +1
14:01:18 bye bye
14:01:26 Bye, everybody
14:01:31 bye, see you next week
14:01:37 #endmeeting
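For reference, a minimal DevStack sketch of the testing split agreed in the last topic, assuming the USE_OLD_LOG_API switch keeps its current name and lives in the monasca-api plugin; the exact semantics of the flag are an assumption. These lines go into the [[local|localrc]] section of local.conf, which uses shell syntax:

```
# master / stable/ussuri: test the log API through the new monasca-api plugin
enable_plugin monasca-api https://opendev.org/openstack/monasca-api
USE_OLD_LOG_API=false

# stable/train and older: fall back to the legacy monasca-log-api service
# enable_plugin monasca-log-api https://opendev.org/openstack/monasca-log-api
# USE_OLD_LOG_API=true
```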