15:01:08 <witek> #startmeeting monasca
15:01:09 <openstack> Meeting started Wed Jul  4 15:01:08 2018 UTC and is due to finish in 60 minutes.  The chair is witek. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:12 <openstack> The meeting name has been set to 'monasca'
15:01:24 <jgrassler> Hello
15:01:26 <witek> Hello everyone
15:01:30 <amofakhar> Hello
15:01:34 <koji> hello
15:01:58 <witek> nice to see you
15:02:03 <witek> here the agenda:
15:02:07 <witek> https://etherpad.openstack.org/p/monasca-team-meeting-agenda
15:02:26 <witek> #topic Kafka upgrade
15:02:47 <witek> we've done some testing with upgrading Apache Kafka
15:02:52 <witek> to ver. 1.0.1
15:03:21 <witek> the log message format has changed since Kafka 0.10
15:03:35 <witek> but there is an option to set it to the old format
15:04:05 <witek> we've run the tests for a couple of days with proposed settings
15:04:14 <witek> and haven't observed any issues
15:04:25 <amofakhar> nice
15:04:27 <witek> the throughput is the same as before
15:04:39 <witek> tested on a single-node installation
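For reference: the "option to set it to the old format" is presumably Kafka's broker-side log.message.format.version setting; a minimal server.properties sketch, where the exact version value is an assumption rather than the tested configuration:

    # server.properties on the Kafka 1.0.1 broker:
    # keep messages in the pre-0.10 on-disk format so existing clients keep working
    # (0.9.0 is an illustrative value, not necessarily the one used in these tests)
    log.message.format.version=0.9.0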
15:05:05 <jgrassler> Yay. So I get to build a new Kafka package once we've got the consumers updated... :-)
15:06:05 <witek> it would be nice, if you could also do some testing in your environments
15:06:48 <jgrassler> I'll try to squeeze that in somehow...
15:07:38 <jgrassler> May take a while, though - I'll have to roll a Kafka 1.0.1 package first, and Java being Java that may come with unexpected excitement.
15:08:02 <witek> yes, the packaging adds extra effort
15:09:02 <witek> on a related topic, we have also been comparing the throughput of different Kafka clients
15:09:13 <witek> here is the WIP:
15:09:27 <witek> https://github.com/monasca/monasca-perf/pull/28
15:10:02 <witek> it seems confluent-kafka is the preferred option
15:10:43 <witek> we've managed to get better values for pykafka than those presented in the graphs, but still much lower than confluent
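For context, a minimal confluent-kafka producer sketch of the kind such a throughput comparison exercises; the broker address, topic name and message count are placeholders, not taken from the benchmark:

    from confluent_kafka import Producer

    producer = Producer({'bootstrap.servers': 'localhost:9092'})

    def on_delivery(err, msg):
        # report per-message delivery failures
        if err is not None:
            print('delivery failed: %s' % err)

    for i in range(100000):
        producer.produce('metrics', value=('msg-%d' % i).encode('utf-8'),
                         callback=on_delivery)
        producer.poll(0)   # serve delivery callbacks without blocking

    producer.flush()       # wait until all queued messages are delivered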
15:12:12 <witek> amofakhar: have you been doing some testing with Kafka clients?
15:12:30 <Guest64549> sorry i was disconnected
15:12:39 <Guest64549> yes we did some tests before
15:13:33 <witek> I remember you've been opting for confluent during the PTG
15:13:54 <witek> seems Amir has some connectivity problems
15:14:17 <amofakhar> yes we did some tests before
15:14:41 <amofakhar> I think we can do the same with this new version
15:15:27 <witek> amofakhar: have you tested confluent client?
15:16:43 <amofakhar> I think so, but I'm not sure because some other people did it before
15:17:23 <witek> OK, could you please check on this? I'll ping you in the next few days
15:17:46 <amofakhar> sure
15:18:11 <witek> my current idea is to remove pykafka from global-requirements and replace it with confluent
15:18:27 <witek> but I want to be sure it's the right approach before doing that
15:18:55 <jgrassler> I still need to look into packaging that, too...
15:19:09 <amofakhar> confluent-kafka-python ?
15:19:27 <witek> jgrassler: yes, true
15:19:32 <witek> amofakhar: yes
15:19:35 <jgrassler> Yes. It's a bit more complicated than regular Python modules since it comes with a C component.
15:19:48 <jgrassler> The next point on the agenda prevented me from that so far :-)
15:19:57 <witek> right, let's move on :)
15:20:03 <witek> #topic Alembic support
15:20:21 <jgrassler> Yeah, so we've got a command line DB management tool now.
15:20:53 <jgrassler> https://review.openstack.org/#/c/580174/
15:21:19 <jgrassler> It can deal with a configuration database schema that was created from a legacy SQL script schema.
15:21:26 <witek> so the command is `monasca_db`
15:21:31 <jgrassler> Yes.
15:22:13 <jgrassler> It detects the schema by computing a SHA1 checksum of the active schema in a (hopefully) canonical form.
15:22:14 <witek> should we document the usage somewhere apart from the online help?
15:22:26 <jgrassler> Absolutely.
15:22:39 <jgrassler> Maybe we can add that as an extra task on the story.
15:23:22 <witek> yes, we can do that
15:23:32 <jgrassler> Anyway, the most important thing when reviewing is probably the list of fingerprints: it would be a good idea to compute the fingerprints in as many (and as weird) environments as possible.
15:24:24 <jgrassler> The canonical representation may not quite cut it yet and may still contain a few things that vary across environments.
15:25:06 <jgrassler> If so, we'll need to either add fingerprints or (preferably) fix the canonical representation.
15:25:55 <witek> what do you mean by canonical representation?
15:26:28 <jgrassler> In the review comments I linked to a testing script for generating fingerprints of all legacy revisions, along with the expected output.
15:27:16 <jgrassler> A representation that (ideally) comes out the same whether you generate it in an environment with a default database charset of UTF-8 or in one with a default database charset of koi-8.
15:28:51 <jgrassler> Internally (see fingerprint.py), fingerprinting dumps the database into a sorted list of sqlalchemy.Table objects and serializes that into a textual representation.
15:29:16 <jgrassler> I try to control for environmental factors and irrelevant information by stripping out a few things, but I may not have caught everything.
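A rough sketch of the fingerprinting idea described above (not the actual fingerprint.py code); the connection URL is an assumption, and the raw DDL produced this way still carries environment-dependent details such as charsets, which is exactly the problem being discussed:

    import hashlib

    from sqlalchemy import MetaData, create_engine
    from sqlalchemy.schema import CreateTable

    engine = create_engine('mysql+pymysql://monasca:password@localhost/mon')
    metadata = MetaData()
    metadata.reflect(bind=engine)          # load the live schema

    # sort tables by name so the serialized form is stable across runs
    tables = sorted(metadata.tables.values(), key=lambda t: t.name)
    canonical = '\n'.join(str(CreateTable(t).compile(engine)) for t in tables)

    print(hashlib.sha1(canonical.encode('utf-8')).hexdigest())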
15:30:28 <witek> OK, thanks, I'll try to test it in our env
15:30:55 <jgrassler> Thanks :-)
15:31:29 <witek> I think we can move on
15:31:50 <witek> #topic deployment
15:32:16 <pandiyan> Hi All
15:32:21 <witek> hi pandiyan
15:32:56 <pandiyan> I have been facing an issue for 4 weeks now while trying to deploy in a way that differs from the upstream method.
15:33:39 <pandiyan> To explain briefly: I'm following the upstream docker-compose setup, but I removed Keystone and passed the corresponding Keystone values wherever they are required
15:33:55 <pandiyan> http://paste.openstack.org/show/725024/
15:34:23 <pandiyan> witek: I need guidelines on how to deploy
15:34:33 <pandiyan> here is my grafana docker-compose file http://paste.openstack.org/show/725025/
15:35:07 <pandiyan> here is my kafka_zookeeper compose file, which supports swarm mode, unlike the upstream one http://paste.openstack.org/show/725029/
15:35:49 <pandiyan> Here is the issue: when I run "monasca-setup" on a compute node I get ERRORs like this http://paste.openstack.org/show/725033/
15:36:26 <pandiyan> But on the controller node I am able to see metrics http://paste.openstack.org/show/725031/
15:36:44 <pandiyan> witek: are those monasca-setup errors expected?
15:37:50 <pandiyan> I know this meeting is mostly for development discussion, but as an operator I'm also looking for help from the deployment point of view, please
15:38:05 <witek> probably not all of them are expected, but in general they're not critical
15:38:22 <pandiyan> witek: is it fine to ignore those ?
15:38:23 <witek> monasca-setup tries to run all detection plugins and fails on some
15:38:45 <witek> you can configure the needed plugins manually afterwards
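As an illustration of configuring only the plugins you need, something along these lines; the credentials and plugin choice are placeholders, and the flag names should be checked against `monasca-setup --help` for your version:

    # run detection only for the plugins relevant on a compute node
    monasca-setup --username monasca-agent --password secret \
        --project_name mini-mon --keystone_url http://keystone:35357/v3 \
        --detection_plugins libvirt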
15:39:08 <pandiyan> may I know why the InfluxDB plugin error appears there?
15:39:23 <pandiyan> ERROR: Plugin for influxdb-relay will not be configured.
15:39:37 <pandiyan> ERROR: monasca-persister process has not been found. Plugin for monasca-persister will not be configured. ERROR: Supervisord process exists but configuration file was not found and no arguments were given.
15:40:38 <witek> the reasons can be different from one case to another
15:40:49 <witek> some of the plugins are process checks
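For example, a process check configured by hand instead of by detection would be a monasca-agent conf.d/process.yaml entry roughly like the following (values are illustrative, field names follow the process plugin's documented format):

    # /etc/monasca/agent/conf.d/process.yaml
    init_config: null
    instances:
      - name: monasca-persister
        detailed: true
        exact_match: false
        search_string:
          - monasca-persister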
15:41:09 <koji> let me confirm: do the error messages appear when you run monasca-setup on the server where Monasca is running?
15:41:17 <jgrassler> And there's a different detection plugin for each metrics plugin.
15:41:43 <haruki> the current monasca-setup outputs "ERROR" when the process or service is not running and it fails to detect that process or service
15:41:47 <pandiyan> witek: given my customised docker-compose files, is this expected?
15:41:55 <pandiyan> jgrassler: yes
15:42:03 <jgrassler> Some of them have detection heuristics that do not work everywhere, unfortunately...
15:42:39 <witek> yes, it's expected that some plugins fail, e.g. when a given service is not running
15:42:59 <pandiyan> jgrassler: sorry, the dockerized Monasca is running on a different server; I created the Keystone endpoint and am trying to install & run the monasca-agent on a compute node
15:43:10 <jgrassler> I know roughly what you are dealing with. I fixed some of these. And then I went back and fixed my fixes because I'd overlooked corner cases... :-)
15:43:21 <koji> in my opinion, for error messages which just say that something doesn't exist, the log level should be changed to INFO
15:44:10 <pandiyan> in devstack I get the same kind of errors, even though everything runs on a single node
15:44:14 <witek> yes, we could file bugs for these and try to fix them
15:44:14 <jgrassler> pandiyan: if you are running monasca-setup on a machine where the other monasca services are not running, the detection of Monasca services on that node is absolutely going to fail, yes. They're not there after all.
15:45:15 <jgrassler> If some of these errors are reported on single-node Devstack that is more likely to be an actual problem.
15:45:22 <koji> and I think the current monasca-setup doesn't support monitoring the dockerized Monasca
15:45:24 <pandiyan> jgrassler: may I know what you mean by Monasca services here?
15:45:41 <jgrassler> In that case, please file bug reports with enough information for us to reproduce the problem.
15:45:59 <jgrassler> pandiyan: monasca-persister for instance. That is a Monasca service (part of Monasca).
15:46:48 <jgrassler> pandiyan: if you run monasca-setup on a machine where there _is_ no monasca-persister (because monasca-persister runs on a different machine), it will fail to detect its presence of course.
15:47:09 <witek> InfluxDB also is normally not installed on OpenStack nodes
15:47:23 <jgrassler> That's not really an error (monasca-setup is a bit overzealous about reporting "errors", unfortunately)
15:47:26 <pandiyan> jgrassler: got it, but here all Monasca services are running in Docker, mapped to the existing openstack-keystone, so I don't think those services need to run on every node where monasca-agent is running
15:48:13 <witek> no, it's just monasca-setup complaining
15:48:18 <jgrassler> pandiyan: exactly.
15:48:28 <pandiyan> jgrassler: I know one person from another organisation who deployed it the same way I am doing, but unfortunately I don't have his contact
15:48:55 <jgrassler> pandiyan: monasca-setup tries to detect _all_ services it's got metrics plugins for (well, most). If some of these happen not to be present, that's not a problem.
15:49:07 <pandiyan> I would also like to know how I can check the data populated inside InfluxDB here
15:50:39 <jgrassler> `monasca metric-list` would give you an overview at least.
15:51:01 <pandiyan> jgrassler: say I have 500 compute nodes, and I'm running the Monasca docker-compose on a separate server with the Keystone integration done; in that case, how could I have Monasca services on all 500 nodes?
15:51:04 <witek> yes, you can use monasca client
15:51:10 <witek> or query db directly
15:51:12 <witek> https://docs.influxdata.com/influxdb/v1.3/guides/querying_data/
15:51:19 <jgrassler> You can access InfluxDB directly, too, but I'd have to look that up myself (it's rarely necessary).
15:51:41 <witek> if API is working, InfluxDB is working as well
15:51:54 <pandiyan> witek: I tried from localhost, where the dockerized InfluxDB is running, and it says connection refused
15:52:03 <pandiyan> whereas I am able to connect to MySQL
15:52:55 <witek> check port forwarding
15:53:31 <haruki> you can check the metrics which are collected in InfluxDB as below
15:53:43 <haruki> docker exec -it <influx container ID> /bin/sh
15:53:50 <haruki> influx
15:53:57 <haruki> use mon
15:53:59 <pandiyan> haruki: I did the same thing you said
15:54:04 <haruki> show series
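If the InfluxDB port is published to the host in the compose file (e.g. "8086:8086", which is an assumption about this setup), the same check works from outside the container via InfluxDB's HTTP API:

    curl -G 'http://localhost:8086/query' \
        --data-urlencode 'db=mon' \
        --data-urlencode 'q=SHOW SERIES LIMIT 10'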
15:54:28 <koji> which command failed?
15:54:37 <koji> `influx`?
15:55:29 <pandiyan> haruki: inside the Docker container I am able to connect, but not from outside
15:55:34 <pandiyan> let me ask my final question
15:56:03 <pandiyan> How can I pass Keystone values to Grafana as environment variables?
15:56:22 <pandiyan> here is my grafana docker-compose http://paste.openstack.org/show/725025/
15:56:59 <pandiyan> that is, how can I pass the dependent Keystone values to Grafana via environment variables?
15:57:10 <witek> GRAFANA_USERNAME and GRAFANA_PASSWORD for grafana-init don't work?
15:57:35 <pandiyan> that's working, witek, but I want to log in to Grafana with a Keystone user
15:57:56 <witek> open http://grafana_host:3000
15:58:00 <witek> and log in
15:58:34 <pandiyan> I mean Keystone credentials; we have 1000+ customers, so I want them to log in to Grafana with their Keystone credentials
15:59:10 <witek> yes, they can log in with their Keystone credentials
15:59:43 <witek> these are mapped from Keystone users and projects to Grafana users and organisations
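For illustration only: Grafana maps environment variables of the form GF_<SECTION>_<KEY> onto its configuration file, so enabling the Keystone integration of the monasca/grafana image would look roughly like the snippet below; the Keystone-specific key names are assumptions and need to be checked against the image's configuration defaults:

    # docker-compose environment for the grafana service (hypothetical key names)
    environment:
      GF_AUTH_KEYSTONE_ENABLED: "true"
      GF_AUTH_KEYSTONE_AUTH_URL: "http://keystone:5000"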
15:59:53 <pandiyan> no, even I am not able to log in with Keystone.. I am using whatever I passed in "GRAFANA_USERNAME and GRAFANA_PASSWORD"
16:00:30 <pandiyan> I think, as I said, something is missing because I removed Keystone from the upstream setup for Grafana
16:00:53 <pandiyan> see, here I have removed the Keystone dependency http://paste.openstack.org/show/725025/
16:01:36 <pandiyan> sorry, here: http://paste.openstack.org/show/725037/
16:02:58 <witek> should be working
16:03:14 <witek> you may try to log in with some user with 'admin' role
16:03:31 <pandiyan> i have admin role
16:03:33 <witek> oh, the hour passed
16:03:41 <witek> I have to close the meeting
16:03:43 <witek> sorry
16:03:52 <pandiyan> okay witek understood
16:03:55 <witek> #endmeeting