14:00:52 <witek> #startmeeting Monasca Team Meeting
14:00:54 <openstack> Meeting started Wed Aug 30 14:00:52 2017 UTC and is due to finish in 60 minutes.  The chair is witek. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:58 <openstack> The meeting name has been set to 'monasca_team_meeting'
14:01:12 <rhochmuth> o/
14:01:17 <witek> hello
14:01:19 <cbellucci> o/
14:01:27 <shinya_kwbt> o/
14:01:43 <witek> hi shinya_kwbt
14:02:26 <shinya_kwbt> hi, it's been a long time.
14:02:36 <nseyvet_op5> hello
14:03:10 <witek> I'll try to moderate this time
14:03:23 <witek> the agenda is pretty light
14:03:33 <witek> https://etherpad.openstack.org/p/monasca-team-meeting-agenda
14:03:47 <witek> #topic Specifications repository
14:03:56 <sc> hi
14:04:09 <witek> I would like to propose adding monasca-specs repository
14:04:47 <witek> previously we had blueprints in Launchpad to describe new feature requests
14:05:35 <witek> the problem with blueprints was that they were not easy to review and collaborate on
14:05:49 <rhochmuth> would this follow what has been done on other projects
14:05:52 <rhochmuth> such as https://github.com/openstack/nova-specs
14:05:55 <witek> many projects use specs repositories for that reason
14:06:15 <witek> yes, most projects do that
14:06:34 <witek> the specs are then published to specs.openstack.org
14:07:17 <sc> I like the idea, I never liked launchpad
14:07:20 <witek> in that case http://specs.openstack.org/openstack/nova-specs/
14:07:21 <rhochmuth> sounds like a good improvement
14:08:11 <witek> as you can see it's also a place to hold project priorities for development
14:08:12 <rhochmuth> +1
14:08:28 <shinya_kwbt> +1
14:08:36 <sc> +1
14:08:47 <witek> great
14:09:12 <witek> I'll start and create the repo then
14:09:36 <witek> good to see that you like the idea
14:10:01 <nseyvet_op5> +1
14:10:19 <witek> in the process of preparing for that I have written a Contributor Guide
14:10:35 <witek> https://review.openstack.org/498804
14:11:01 <witek> http://docs-draft.openstack.org/04/498804/4/check/gate-monasca-api-docs-ubuntu-xenial/715a60c//doc/build/html/contributor/index.html
14:12:05 <witek> it briefly describes which tools we use for bug tracking and new features
14:12:39 <witek> it will end up here https://docs.openstack.org/monasca-api/latest/contributor/index.html
14:13:34 <witek> #topic Open Stage
14:13:56 <witek> are there any other topics for today?
14:14:18 <rhochmuth> mid-cycle prep
14:14:27 <nseyvet_op5> Pike release
14:14:31 <rhochmuth> gridDB, cassandra update?
14:14:49 <rhochmuth> grafana discussion
14:14:50 <witek> #topic mid-cycle
14:14:55 <sc> the idea of using gnocchi as TSDB?
14:15:22 <witek> the agenda is here: https://etherpad.openstack.org/p/monasca_queens_midcycle
14:15:53 <witek> there is still a lot of space
14:16:11 <witek> so feel free to add topics
14:17:02 <rhochmuth> thx witek
14:17:02 <witek> Pike has been officially released today
14:17:49 <rhochmuth> cool
14:17:50 <witek> it takes the last tags from the stable/pike branch
14:18:56 <witek> #topic TSDB
14:19:12 <witek> where should we start?
14:19:14 <witek> :)
14:20:13 <witek> GridDB: akiraY has provided new patchsets
14:20:28 <witek> do we want to try this out?
14:20:56 <rhochmuth> It would be nice to know performance on a three node cluster
14:21:05 <rhochmuth> to compare against other DBs
14:21:25 <witek> yes, the resources on the internet come down to one benchmark
14:22:00 <rhochmuth> one and three nodes have been the platforms we've used for influxdb and cassandra comparisons so far
14:23:35 <nseyvet_op5> there is a large patch set to review for GridDB.
14:23:49 <nseyvet_op5> how do you feel about that?
14:23:50 <witek> it would be nice to collect a list of requirements or criteria to judge against
14:23:50 <shinya_kwbt> akiraY said he (NEC) doesn't have a 3-node cluster with 256GB RAM.
14:25:28 <jgu> +1 on "it would be nice to collect a list of requirements or criteria to judge against" and even shared performance testing scripts
14:26:14 <nseyvet_op5> we plan an AWS based test for Influx (both single instance and enterprise), so +1 on list of requirements
14:26:48 <sc> if you have an idea of the hardware requirements and the test process, I can see what's available in our lab; I can't promise anything except to try
14:28:13 <witek> sc: that would be great
14:28:15 <nseyvet_op5> https://etherpad.openstack.org/p/cassandra_monasca under Hardware Requirements
14:28:37 <nseyvet_op5> Entry scale (TBD); Mid Scale (TBD); Carrier Grade: minimum 3 baremetal nodes, each node: 256 GB RAM minimum, two SSDs (commit log disk: 256 GB, data disk: 2 TB), dedicated VLAN for the replication data link, 10 Gb
14:28:45 <nseyvet_op5> Is this correct witek?
14:29:05 <nseyvet_op5> It could be interesting to first define what "Entry scale" and "Mid Scale" are
14:29:19 <witek> I guess jgu has used a smaller system for tests
14:30:01 <witek> and also, I think it's more important to define the test load rather than the hardware
14:30:20 <jgu> nseyvet: I've changed it down to one SSD.
14:30:43 <jgu> yes, I think it's important to specify the test load benchmark first
14:31:06 <nseyvet_op5> Sounds fair
14:32:04 <witek> shinya_kwbt, jgu, nseyvet_op5 is that something you could work on together?
14:33:03 <shinya_kwbt> I will review the patchset, because I have no benchmark resources.
14:33:19 <witek> I mean specifying the test load and judgement criteria
14:33:34 <jgu> sure
14:33:38 <nseyvet_op5> sure
14:34:22 <shinya_kwbt> Oh, sorry, I'm not a benchmarking specialist, so I can't advise.
14:34:49 <witek> and akiraY?
14:36:10 <shinya_kwbt> I guess akiraY isn't here today.
14:37:10 <witek> jgu: do you have any update on Cassandra?
14:37:14 <shinya_kwbt> He and I are in different offices
14:37:46 <witek> shinya_kwbt: oh, I see, I'll contact him
14:38:51 <jgu> we've got some good news... the write performance # is better after we switched the use of the SSD from data to the commit log. The latest # is updated on the etherpad.
14:40:13 <jgu> we will send a patch for the devstack cassandra deployment and Java persister for review soon (hopefully next week)
14:40:39 <nseyvet_op5> So, the Java persister remains the benchmark component. Correct?
14:42:08 <witek> jgu: I know the performance is better for Java, but I think we need a Python implementation as well
14:42:22 <jgu> our plan is to also deliver a python driver for Cassandra.
14:42:42 <witek> nseyvet_op5: I guess it's fair to test Cassandra with Java, although not optimal
14:42:48 <jgu> api will only have the python version
14:43:08 <witek> jgu: thanks
14:44:28 <jgu> The Python version could work for small deployments.
14:45:05 <witek> having two implementations means additional maintenance effort
14:45:21 <witek> but I guess we have to live with it
14:45:58 <jgu> yes I am feeling that pain right now :-=)
14:46:34 <witek> Julien Danjou from Gnocchi project has suggested using their TSDB
14:46:35 <nseyvet_op5> what is the push for having two implementations?
14:47:03 <rhochmuth> might as well throw this one in too, https://www.timescale.com/
14:47:26 <witek> nseyvet_op5: all OpenStack infrastructure is optimized for Python
14:47:37 <witek> and we wanted to deprecate Java
14:47:45 <rhochmuth> so, does anyone have any performance info on gnocchi?
14:48:41 <witek> on the mailing list they have cited two benchmarks
14:49:46 <witek> https://julien.danjou.info/blog/2015/gnocchi-benchmarks
14:49:56 <witek> https://www.slideshare.net/GordonChung/gnocchi-v3/18
14:49:59 <nseyvet_op5> I will throw in RiakTS
14:50:46 <witek> TimeScale is really interesting, based on Postgres
14:51:01 <witek> but they don't have clustering
14:51:07 <witek> correct?
14:51:41 <rhochmuth> not yet, as far as I know
14:52:07 <rhochmuth> reading the gnocchi benchmark, I wonder what a batch is
14:52:14 <rhochmuth> is a batch a batch of the same metric
14:52:25 <rhochmuth> as in cpu utilization for host 1 at various time samples
14:52:48 <rhochmuth> or is a batch a set of metrics for different hosts/resources at a single time sample
14:53:03 <rhochmuth> it has been a while since i've looked into this
14:53:09 <witek> I don't know, only the second case would make sense for us
14:53:18 <rhochmuth> but the way we do benchmarking is with the latter
14:53:40 <rhochmuth> right
14:54:21 <rhochmuth> unless there was an intermediary in-memory representation (clustered and fault-tolerant) that could buffer the data temporarily until it is flushed and persisted to disk
14:54:22 <witek> nseyvet_op5: did you run some tests with RiakTS?
14:55:33 <nseyvet_op5> No, I have not used RiakTS. It is conceptually a solid DB.
14:55:48 <nseyvet_op5> It scales and is open source
14:56:04 <nseyvet_op5> there are benchmarks available online for it
14:56:19 <witek> we will have to wrap up soon
14:56:47 <witek> nseyvet_op5, jgu: would you create the document for the testing reference?
14:57:09 <witek> preferably rst, so we can put it in the new specs repo :)
14:57:38 <nseyvet_op5> jgu could u start and then share?
14:57:47 <jgu> witek: sure
14:57:53 <witek> great thanks
14:58:08 <witek> next week I will be on vacation
14:58:33 <witek> last week of school holidays
14:59:09 <witek> I have to close the meeting
14:59:16 <rhochmuth> thx witek, bye-bye
14:59:16 <witek> thank you everyone
14:59:25 <witek> thanks rhochmuth
14:59:58 <witek> #endmeeting