16:02:11 <cloudnull> #startmeeting OpenStack Ansible Meeting
16:02:11 <openstack> Meeting started Thu Apr  2 16:02:11 2015 UTC and is due to finish in 60 minutes.  The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:14 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:02:28 <cloudnull> #topic Agenda & rollcall
16:02:40 <cloudnull> so who is all here :)
16:02:47 <alextricity> Here
16:03:02 <rromans> o/
16:03:03 <d34dh0r53> pre sent
16:03:06 * cloudnull presente
16:03:11 <rackertom> o/
16:03:24 <Sam-I-Am> hi
16:03:26 <stevelle> o/
16:05:38 <cloudnull> #topic action items from last week
16:05:53 <d34dh0r53> #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
16:06:05 <cloudnull> the one from odyssey4me was dropped
16:06:20 <cloudnull> #item hughsaunders convert that to a spec and resubmit it for approval
16:07:02 <cloudnull> ^ hughsaunders what say you? - I suspect that this was the bp you created. IE https://blueprints.launchpad.net/openstack-ansible/+spec/manage-resolv-conf
16:07:05 <cloudnull> #link https://blueprints.launchpad.net/openstack-ansible/+spec/manage-resolv-conf
16:08:12 <cloudnull> welp, nevermind, it looks like the bp/spec was abandoned.
16:08:14 <cloudnull> #link https://review.openstack.org/#/c/168074/
16:09:02 <cloudnull> i closed that BP and marked it Obsolete.
16:09:21 <cloudnull> #topic Blueprints
16:09:48 <cloudnull> alextricity: being that you're the only one on the agenda, can you start off by talking about the BP/spec
16:09:57 <alextricity> Sure.
16:10:23 <alextricity> The goal is to create discussion around how we can implement ceilo into OSAD
16:10:25 <cloudnull> #link https://review.openstack.org/#/c/169417/
16:10:45 <alextricity> cloudnull and I have already sat down and started talking about the initial setup
16:11:02 <alextricity> e.g. changes to the openstack_environment.yml, new containers, etc
16:11:13 <cloudnull> yes.
16:11:28 <alextricity> The code is up on the whiteboard
16:11:59 <cloudnull> #link https://review.openstack.org/#/c/169417/2/specs/kilo/implement-ceilometer
16:12:08 <cloudnull> #link http://docs-draft.openstack.org/17/169417/3/check/gate-os-ansible-deployment-specs-docs/f5eda8a//doc/build/html/specs/kilo/implement-ceilometer.html
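(For context: the openstack_environment.yml changes alextricity mentions would add ceilometer container definitions alongside the existing services. A minimal sketch of what such an entry could look like, following the container_skel pattern the file already uses for other services; the group and container names here are illustrative, not the agreed design:)

    container_skel:
      ceilometer_api_container:
        belongs_to:
          - metering_containers        # hypothetical group name
        contains:
          - ceilometer_api
      ceilometer_collector_container:
        belongs_to:
          - metering_containers
        contains:
          - ceilometer_collector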
16:12:18 <alextricity> Right now I am brainstorming ways we can efficiently deploy ceilometer with a mongodb database backend.
16:12:25 <cloudnull> so my one question here is how do we test it. ^
16:12:59 <alextricity> I was thinking about creating a small play to deploy a basic mongodb server
16:13:09 <cloudnull> do we develop an in repo method to deploy mongo, similar to how we are doing mariadb/Galera ?
16:13:10 <alextricity> But i'm open to suggestions, as always ;)
16:13:27 <alextricity> Hmmm..I don't know about that cloudnull
16:13:28 <andymccr> maybe there are upstream mongo roles?
16:13:47 <cloudnull> or do we make just enough to get it to work?
16:14:06 <cloudnull> if there are upstream mongo roles we can pull them in using master's structure for pulling in external roles.
16:14:16 <andymccr> yeh that'd be ideal
16:14:21 <cloudnull> #link https://github.com/stackforge/os-ansible-deployment/blob/master/ansible-role-requirements.yml.example
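(The ansible-role-requirements mechanism linked above pins external roles by name, source, and version. A minimal sketch of what an entry for an upstream mongodb role could look like; the role name and URL are hypothetical placeholders, not an endorsement of any particular role:)

    # hypothetical entry in ansible-role-requirements.yml
    - name: mongodb_server                                  # name the plays would reference
      src: https://github.com/example/ansible-role-mongodb  # upstream repo (placeholder URL)
      version: master                                       # pin to a tag/SHA for reproducible deploys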
16:14:49 <d34dh0r53> +1 to that, implementing and managing a mongo role is not something we should be doing IMHO
16:14:58 <alextricity> re: upstream roles: definitely. I don't know if you guys are comfortable with having an in-repo method for deploying mongodb
16:15:29 <cloudnull> im comfortable with it. we're doing it with maria and rabbit
16:15:35 <alextricity> I'm leaning towards that idea as well d34dh0r53
16:15:51 <alextricity> I think we should just do enough to make it work for now
16:15:54 <cloudnull> mongo is used throughout OpenStack. even if i dont like it
16:16:16 <andymccr> yeh if its not there, and we need ceilometer
16:16:21 <sigmavirus24> to me, if the upstream roles will accept patches to make them better fit our needs, then yeah
16:16:22 <andymccr> then we should look into it.
16:16:23 <alextricity> Who knows, ceilometer is expected to up their game in Kilo. With Gnocchi, mysql could be a viable solution.
16:16:32 <sigmavirus24> otherwise, for a short term fix we may have to carry our own
16:16:53 <Sam-I-Am> does ceilometer not work with mysql?
16:17:03 <cloudnull> well other services like zaqar are using mongo too
16:17:11 <cloudnull> Sam-I-Am it works, kinda
16:17:17 <andymccr> it works functionally
16:17:20 <alextricity> lol
16:17:36 <cloudnull> so we will eventually need something that implements mongo.
16:17:45 <stevelle> cloudnull: Sam-I-Am I recall mention of 'don't ever use it in production with SQL backend'
16:17:46 <cloudnull> if we can leverage upstream lets do that
16:18:11 <d34dh0r53> just afraid our mongo role and the project as a whole will take the blame when ceilometer...
16:18:20 <cloudnull> alextricity could you do a bit of research on what upstream roles are available and how we can leverage them?
16:18:33 <alextricity> cloudnull. Of course.
16:18:35 <cloudnull> d34dh0r53 this is fair.
16:18:52 <alextricity> I'll keep adding to the blueprint as I gather more info and get further along in implementing the plays
16:19:20 <cloudnull> but ceilometer is an OpenStack-namespaced service and we should aim to support deploying all the OpenStack services we can.
16:19:34 <d34dh0r53> cloudnull: 100% agree with that
16:19:39 <andymccr> true - and really if support is added and nobody uses it then its likely it wont be amazing, but then nobody is using it
16:19:49 <Sam-I-Am> stevelle: 'dont ever use it in production' ?
16:19:52 <cloudnull> this is true.
16:20:02 <cloudnull> Sam-I-Am: see ceilometer
16:20:10 <cloudnull> :)
16:20:22 <alextricity> What do you guys think about having the separate db backend?
16:20:30 <d34dh0r53> pretty sure that is the 2nd definition of ceilometer
16:20:35 <alextricity> lol
16:20:49 <cloudnull> alextricity "separate db backend?" ?
16:20:58 <andymccr> alextricity: i think if sql is not a viable option we dont have a choice so much :D
16:21:03 <andymccr> unless we want to port everything else to use mongo...
16:21:17 <cloudnull> andymccr for web scale
16:21:20 <cloudnull> :p
16:21:29 <andymccr> how about we just plug it into objectrocket
16:21:32 <andymccr> like cloudfiles for glance :D
16:21:36 <alextricity> cloudnull: If ceilometer is deployed, you'll have the sql db and mongodb for ceilo
16:21:40 <stevelle> community, andymccr :)
16:21:55 <cloudnull> certainly. i like the idea of having ceilometer only need a connection string
16:21:56 <palendae> andymccr: That config will be in rpc-extras ;)
16:22:03 <cloudnull> which is what we are doing in other services.
16:22:09 <palendae> Yeah, if we can get it to just needing a connection string, then awesome
16:22:16 <cloudnull> but we need a way to test it. which leads to having something that deploys mongo
16:22:24 <palendae> Yeah
16:22:27 <alextricity> Right
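(The "connection string" approach being discussed would mean the plays only template ceilometer's database URI rather than managing the datastore itself. A hedged sketch of what the deployer-facing variables might look like; these names and values are hypothetical, not part of the spec:)

    # hypothetical user_variables overrides (names illustrative)
    ceilometer_db_type: mongodb
    # points at a deployer-managed mongo endpoint; host/creds are placeholders
    ceilometer_db_uri: "mongodb://ceilometer:secrete@203.0.113.10:27017/ceilometer"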
16:22:34 <d34dh0r53> unless we test it with objectrocket :p
16:22:51 <cloudnull> we could do that but that would have to be an external ci test
16:23:16 <palendae> Is that necessarily bad? I'm not aware of the implications around that
16:23:16 <alextricity> I'm okay with having something that deploys mongo
16:23:29 <Sam-I-Am> has anyone looked at the improvements for kilo?
16:23:34 <Sam-I-Am> i havent even installed it yet
16:23:38 <cloudnull> me too. and if upstream can do that for us. i think we should look at that.
16:23:39 <palendae> The ceilometer ones? no
16:23:43 <stevelle> from hard experience, tuning mongodb in replication is going to require a strategy for where to put the arbiter
16:23:46 <stevelle> alextricity: ^
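(stevelle's point, for anyone unfamiliar: a mongodb replica set wants an odd number of voting members, so a two-data-node deployment typically adds a vote-only arbiter, and where that arbiter lives has to be a deliberate choice. A minimal sketch of initiating such a set from a play; host names are placeholders:)

    # sketch only: bring up a 2-data-member + arbiter replica set
    - name: Initiate ceilometer replica set with an arbiter
      shell: >
        mongo --eval 'rs.initiate({
          _id: "ceilometer",
          members: [
            {_id: 0, host: "infra1:27017"},
            {_id: 1, host: "infra2:27017"},
            {_id: 2, host: "infra3:27017", arbiterOnly: true}
          ]})'
      run_once: true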
16:23:54 <d34dh0r53> Sam-I-Am: I can't until I have docs
16:24:06 <alextricity> Sam-I-Am: I have. Like I said they are expected to beef up their game. I still have to do more research, however.
16:24:20 <palendae> stevelle: I wonder if our support shouldn't only go so far as using it for testing
16:24:31 <palendae> And deployers are responsible for Mongo
16:24:48 <palendae> That would be in line with our usage of HAProxy/load balancers
16:25:23 <d34dh0r53> palendae: which is slowly becoming a thing, not something I want our mongo play to become
16:25:25 <cloudnull> palendae that might be fine initially, but with everything else we have, we're targeting production (except HAProxy).
16:25:38 <cloudnull> yet
16:25:46 <palendae> cloudnull: yeah, true. I wouldn't fight that super hard, because you're right - we manage everything else now
16:25:48 <stevelle> palendae: with redundant infra nodes, we would by default have too many arbiters
16:26:01 <palendae> stevelle: I was thinking just for AIOs/gating
16:26:14 <stevelle> it is still a design challenge
16:26:15 <palendae> But might as well use reference architecture
16:26:34 <palendae> Fair enough. I don't know enough about Mongo to speak knowledgeably, so I'll pipe down :)
16:26:39 <Sam-I-Am> stevelle: i thought mongo was smart enough not to need redundancy :)
16:26:49 <d34dh0r53> lol
16:26:51 <cloudnull> Sam-I-Am see web scale.
16:26:57 <d34dh0r53> hahaha
16:27:04 <palendae> I thought we were targeting cloud scale
16:27:07 <palendae> Web scale's not good enough
16:27:11 <Sam-I-Am> space, the final scale.
16:27:16 <cloudnull> #link https://www.youtube.com/watch?v=b2F-DItXtZs
16:27:37 <palendae> cloudnull: Hahaha
16:27:47 <stevelle> too soon.
16:27:51 <stevelle> my wounds have not healed
16:28:00 <andymccr> valid question tbh
16:28:03 <Sam-I-Am> they never really heal
16:28:08 <palendae> Right, stevelle is the Mongo SME
16:28:09 <Sam-I-Am> the scabs just keep coming off
16:28:25 <alextricity> lol
16:28:30 <Sam-I-Am> stevelle: sounds like you've volunteered yourself
16:28:40 <sdake> cloudnull epic video have seen it before ;)
16:29:15 <cloudnull> ok so, alextricity: more research on how we deploy mongo. stevelle can you sync up with alextricity on some of your mongo SME-ness?
16:29:25 <cloudnull> sdake ikr?! :)
16:29:46 <stevelle> alextricity: you know where to find me online?
16:29:59 <cloudnull> and with that update the spec for further review
16:30:01 <alextricity> Unfortunately I don't know enough about Mongodb to say how all of that is handled. So if we are going to go the route of deploying mongo as part of the plays...it's going to be challenging
16:30:23 <alextricity> stevelle, no
16:30:30 <sigmavirus24> alextricity: #openstack-meeting-4
16:30:34 <sigmavirus24> * #openstack-ansible
16:30:34 <alextricity> lol
16:30:42 <sigmavirus24> (tab complete fail, I swear)
16:31:01 <palendae> sigmavirus24: that never happens
16:31:03 <alextricity> Sounds good, we'll definitely sync up
16:31:07 <alextricity> Thanks
16:31:30 <Bjoern__> I think mongo is a good choice for ceilometer, especially when it comes down to expiring objects (built in). SQL DBs are usually killed with ceilometer
16:31:37 <Bjoern__> also it's just ceilometer
16:32:28 <sigmavirus24> palendae: I don't know what you mean
16:32:32 <cloudnull> so next BP: https://review.openstack.org/#/c/169189/
16:32:36 <cloudnull> #link https://review.openstack.org/#/c/169189/
16:33:03 <cloudnull> which is related to bp https://blueprints.launchpad.net/openstack-ansible/+spec/dynamically-manage-policy.json
16:33:38 <cloudnull> while one is for policy.json and the other is for config files i think there's a lot of overlap
16:34:13 <palendae> Yeah - I think merging those would be good
16:34:20 <cloudnull> me too.
16:36:02 <stevelle> I was of the opinion that treating json as json would be easier
16:36:17 <cloudnull> im violently opposed to hash merging. but if the concept / idea wins out among cores then i say we execute on it.
16:36:35 <cloudnull> however, the bp that Daniel Curran put through and the pseudo code Sudarshan Acharya is working on create a module which should allow us to add in extra config without having to do hash merging
16:36:39 <andymccr> cloudnull: do we have another solution? cos i kinda agree with you.
16:36:48 <cloudnull> #link https://review.openstack.org/#/c/168104/
16:36:57 <Sam-I-Am> the spec for neutron plays looks... interesting
16:37:01 <Sam-I-Am> "have fun"
16:37:16 <cloudnull> andymccr it presently works only with policy files.
16:37:36 <cloudnull> but extending it to config files using the ConfigParser std lib in Py2.7 should make that go.
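(The module under review isn't reproduced here, but the overlay idea, applying deployer-supplied options on top of a rendered config without hash merging, can be sketched with Ansible's stock ini_file module; the extra_nova_opts variable is a hypothetical example, not part of the proposal:)

    # sketch of the overlay approach using the stock ini_file module;
    # extra_nova_opts is a hypothetical deployer-provided list of
    # {section, option, value} dicts
    - name: Apply config overrides without hash merging
      ini_file:
        dest: /etc/nova/nova.conf
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
      with_items: "{{ extra_nova_opts }}"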
16:37:47 <andymccr> hmm  - i quite like that idea
16:38:01 <cloudnull> alextricity where's suda ?
16:38:11 <alextricity> I don't think he knows about these meetings
16:38:19 <alextricity> want me to get him in here
16:38:20 <alextricity> ?
16:38:26 <cloudnull> throw something at him :)
16:38:54 <cloudnull> andymccr you mind reviewing that module to make sure its not bat shit crazy ?
16:39:19 <cloudnull> i like it, i had some inline comments. but i like the concept.
16:39:34 <cloudnull> and with a bit of clean up i think it could be awesome.
16:39:54 <cloudnull> sacharya: the man the myth the legend.
16:39:59 <sacharya> haha
16:40:07 <cloudnull> oh look b3rnard0 thanks for showing up ....
16:40:18 <b3rnard0> oh hai
16:40:31 <cloudnull> sacharya we made some inline comments on your module.
16:40:54 <sacharya> saw that… I am fixing those… i was out the last couple of days!
16:41:00 <cloudnull> also we've moved all the things to specs. can you re-pull the bp against our specs repo so that we can get some more review on it.
16:41:15 <cloudnull> time off sacharya ? UNpossible !
16:41:28 <cloudnull> no worries. :)
16:42:12 <alextricity> brb
16:42:32 <cloudnull> sacharya: IE https://github.com/stackforge/os-ansible-deployment-specs
16:42:54 <cloudnull> pulling into https://github.com/stackforge/os-ansible-deployment-specs/tree/master/specs/kilo would be ideal .
16:43:40 <cloudnull> Next: bp https://review.openstack.org/#/c/169189/
16:44:32 <Sam-I-Am> yeah, this one :)
16:44:58 <cloudnull> Cores: this is a bp targeted at 10.x and not master at this time. we know that it's a feature add in the rax technical debt branch, but it could/should be extendable to master without a lot of work depending on the implementation.
16:45:47 <cloudnull> i think we can work with Javeria Khan to make that go.
16:46:03 <Sam-I-Am> are we still adding features to 10?
16:46:04 <palendae> Yeah, I think so too
16:46:10 <Sam-I-Am> something that might require architectural changes
16:46:13 <cloudnull> as, by reading the spec, it seems that he has already done most of the work.
16:46:30 <Sam-I-Am> e.g., how we're doing the lxc/bridge stuff
16:47:07 <cloudnull> Sam-I-Am: i dont think so. but we're eventually going to have to re-approach OVS.
16:47:33 <cloudnull> which will require some of those types of changes.
16:47:37 <Sam-I-Am> if any of these plugins/agents use something outside of linuxbridge
16:48:00 <Sam-I-Am> we discussed some of the interesting ovs bits for metal hosts
16:48:08 <palendae> I think Javeria's intent was to bring in plumgrid support
16:48:26 <palendae> But that was split into 2 phases - making ml2 replaceable was the first step
16:48:31 <Sam-I-Am> which i think uses linuxbridge
16:49:34 <cloudnull> palendae i think so. it seems that using a different neutron plugin makes that more approachable.
16:50:00 <palendae> cloudnull: If I remember correctly, Javeria was saying plumgrid doesn't support ml2
16:50:28 <cloudnull> Sam-I-Am with the addition of the provider_networks ansible module the data structures in master should be far more malleable.
16:50:52 <cloudnull> #link https://github.com/stackforge/os-ansible-deployment/blob/master/playbooks/library/provider_networks
16:51:10 <Sam-I-Am> cloudnull: this is true
16:51:30 <cloudnull> Which came about to help with the OVS tragedy you were working on.
16:51:34 <cloudnull> :)
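(The provider_networks module linked above reads the deployer's provider network definitions and expands them into the per-container network facts the plays consume. An illustrative sketch of the kind of input entry it takes; the values are placeholders:)

    # illustrative provider network entry (values are placeholders)
    provider_networks:
      - network:
          container_bridge: "br-mgmt"
          container_interface: "eth1"
          ip_from_q: "container"
          type: "raw"
          group_binds:
            - all_containers
            - hosts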
16:52:00 <stevelle> we are nearly out of time here
16:52:08 <cloudnull> so getting a few more reviews on that spec would be great.
16:52:29 <cloudnull> we are. so lets open up to general discussion.
16:52:41 <cloudnull> #topic Open discussion
16:53:12 <cloudnull> begin the festivus
16:53:25 <d34dh0r53> I've got a lot of problems with you people
16:53:31 <cloudnull> :D
16:53:32 <d34dh0r53> :)
16:53:37 <stevelle> I know there are multiple reviews open for moving master to kilo. I wanted to raise osprofiler as a topic
16:53:55 * cloudnull hands mic to stevelle
16:53:57 <d34dh0r53> go
16:54:02 <stevelle> I feel we should configure it the same way everywhere, but we had three slightly different ways between heat, glance, and cinder
16:54:54 <stevelle> it's pretty clear that profiling should be off, but git-harry rightly raised the point that having the middleware in place would be good
16:55:50 <cloudnull> i think we should set it to off in config, and expose vars to enable it.
16:56:03 <cloudnull> by default i think that it should be functional in paste.
16:56:16 <stevelle> from there it's less clear. Is the middleware always enabled or configurable?  It looks like we all made our own HMAC secret per-service
16:56:26 <sigmavirus24> Point of order: We all recognize that without ceilometer, having osprofiler configurable to be on by default is kind of ... pointless, right?
16:56:57 <stevelle> sigmavirus24: agreed.  The initial glance bp and work all excluded osprofiler.
16:57:01 <cloudnull> sigmavirus24: unless there is something else, outside of our deployment scope, that is consuming those messages.
16:57:26 <andymccr> having it configurable but off by default seems sensible to me.
16:57:41 <sigmavirus24> cloudnull: pretty sure osprofiler only emits things for ceilometer and I've heard 0 about it being used by anything else (I've looked a lot)
16:57:50 <stevelle> so each service has its own hmac, and the middleware is on then?
16:57:59 * sigmavirus24 is just making sure everyone is aware
16:58:07 <cloudnull> sure.
16:58:31 <cloudnull> well that ties back to alextricity  and getting ceilometer as a supportable service .
16:58:41 <alextricity> Yeah :/
16:58:50 <stevelle> lets just try to make sure all the services apply the same style to osprofiler config
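(One way to read "the same style": each service role exposes the same pair of variables, with the middleware wired into paste but profiling disabled by default, and the HMAC secret generated per service like the other service secrets. A hypothetical sketch; the variable names are illustrative, not the agreed convention:)

    # hypothetical per-service pattern (glance shown; names illustrative)
    glance_profiler_enabled: False        # middleware present, profiling off
    glance_profiler_hmac_key: "secrete"   # per-service HMAC secret (placeholder)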
16:59:00 <cloudnull> if he does that, and we have the profiler options, then i think we should be good . right?
16:59:10 <andymccr> consistency would be good.
16:59:17 <cloudnull> +1 for consistency
16:59:22 <d34dh0r53> yes
16:59:41 <cloudnull> ok we're out of time. lets continue this convo in the channel or on the ML.
16:59:59 <d34dh0r53> thanks all
17:00:02 <cloudnull> thanks everyone.
17:00:04 <cloudnull> #endmeeting