13:00:55 <newt_> #startmeeting openstack-salt
13:00:56 <openstack> Meeting started Tue Jun 21 13:00:55 2016 UTC and is due to finish in 60 minutes.  The chair is newt_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:57 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:59 <openstack> The meeting name has been set to 'openstack_salt'
13:01:20 <newt_> #topic roll call
13:01:30 <newt_> hello
13:02:07 <majklk> hi
13:02:29 <newt_> hi majk
13:04:26 <jpavlik> hello all
13:04:32 <epcim> is Ales here?
13:04:35 <jpavlik> what is the agenda?
13:04:44 <jpavlik> newt can you start?
13:05:06 <newt_> yes, I'm waiting for the folks
13:05:22 <newt_> for the Penguin especially
13:05:56 <genunix> o/
13:06:12 <newt_> #topic Introduction
13:06:20 <newt_> This meeting is for the openstack-salt team
13:06:32 <newt_> if you're interested in contributing to the discussion, please join #openstack-salt
13:06:49 <newt_> #link http://eavesdrop.openstack.org/#OpenStack_Salt_Team_Meeting
13:06:57 <newt_> #link https://wiki.openstack.org/wiki/Meetings/openstack-salt
13:07:09 <newt_> #topic Review past action items
13:07:38 <newt_> newt_ to test the new heat/vagrant provisioning
13:08:17 <newt_> we have done this with Tux, I saw you join, buddy, can you tell us more about the status?
13:08:36 <newt_> of the heat/vagrant provisioning, are there any docs?
13:09:08 <Tux_> Heat stacks in two variants, single and cluster, with parametrized bootstrap scripts are already tested for several possible variations of operating system, networking etc. I'm working on documentation now
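(For context, a minimal sketch of what one of the parametrized single-node heat stacks Tux_ describes might look like; the parameter and resource names here are invented for illustration, not the actual openstack-salt templates:

    heat_template_version: 2015-04-30
    description: illustrative single-node openstack-salt lab stack

    parameters:
      instance_image:
        type: string
        description: base OS image to boot
      instance_flavor:
        type: string
        default: m1.medium

    resources:
      config_node:
        type: OS::Nova::Server
        properties:
          image: { get_param: instance_image }
          flavor: { get_param: instance_flavor }
          user_data: |
            #!/bin/bash
            # the parametrized bootstrap script (salt master + minion) would run here
)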
13:09:44 <jpavlik> are there any blueprints for planned features?
13:09:50 <newt_> yes
13:10:23 <jpavlik> https://blueprints.launchpad.net/openstack-salt
13:10:27 <newt_> it's according to the blueprint for orchestration automation
13:11:06 <jpavlik> we should update status and assign specific tasks
13:11:20 <jpavlik> newt as PTL should approve the appropriate blueprints
13:11:20 <newt_> #link https://blueprints.launchpad.net/openstack-salt/+spec/service-orchestration
13:12:19 <newt_> the idea of this blueprint is to allow orchestration of openstack with various configuration options
13:12:37 <newt_> once the docs of possible parameters are up, we're ready to go on and test it
13:12:55 <jpavlik> great
13:13:22 <newt_> then there is: marco to provide support for testing midonet setup on heat stack
13:13:59 <jpavlik> OK, can we move to the topic of getting these formulas official?
13:14:11 <newt_> is marco around
13:14:13 <jpavlik> do we have a list of formulas? midonet, kubernetes, swift
13:14:34 <jpavlik> marco is offline
13:14:49 <newt_> I see
13:14:52 <jpavlik> but he finished salt-formula-midonet
13:15:28 <newt_> I've heard, I was wondering how far he got with the testing suite.
13:16:10 <newt_> How do we set up a process to accept a new formula into the openstack official repos?
13:16:18 <jpavlik> marco can you provide update for midonet?
13:16:24 <newt_> Shall we vote?
13:16:41 <jpavlik> we can vote
13:17:19 <marco_> midonet formula is done for kilo, salt-formula-neutron is under review
13:17:33 <newt_> who is for adding midonet-formula to mainstream?
13:17:50 <newt_> or has some objections?
13:18:25 <jpavlik> +1
13:18:33 <Tux_> +1
13:18:50 <marco_> +1
13:18:59 <newt_> +1
13:19:20 <genunix> +1
13:19:41 <newt_> #action newt add midonet to openstack-salt formulas
13:20:13 <newt_> Now the swift formula, it has been tested and has been running on kilo and liberty
13:20:24 <epcim> regarding the task "epcim to find out if our networking approach is rh7 compatible and suitable - consistent interface naming on ubuntu >= 15.10 and rhel >= 7" - are we about to open this topic?
13:20:39 <marco_> I wrote that it is ready only for kilo
13:20:51 <marco_> and we need to test with rh
13:20:56 <Tux_> We should add midonet formula support to our OS Salt lab stacks after all necessary reviews are closed and merged
13:21:17 <jpavlik> we can get the repository under the official openstack namespace and start the review procedure to be able to add support for redhat
13:21:57 <newt_> Tux_: agreed, but the formula will be under official processes while being worked on
13:22:24 <newt_> we will fix all the remaining issues using reviews and openstack CI
13:22:36 <jpavlik> epcim: hold on. We have to approve all formulas going official.
13:22:39 <newt_> the question is whether to add the swift formula
13:22:47 <newt_> who votes for it?
13:22:56 <epcim> +1
13:23:02 <genunix> +1
13:23:06 <jpavlik> +1
13:23:11 <Tux_> +1
13:23:36 <newt_> ok
13:24:04 <newt_> agreed. and what about kubernetes? this one is the trickiest, not being an openstack service
13:24:49 <jpavlik> it should be there as well
13:24:55 <genunix> -1
13:25:00 <genunix> nothing related to openstack
13:25:08 <jpavlik> but it is community
13:25:19 <jpavlik> kolla-kubernetes is related to openstack?
13:25:51 <newt_> jpavlik: kolla is just about kubernetes, we focus on all services in general
13:26:41 <newt_> I'm hesitant, it does not belong to the openstack services, but it is used to run them and it would be nice if it was managed by OS CI
13:27:01 <marco_> kolla is about running openstack in containers, which is part of salt-formula-kubernetes as well
13:27:17 <jpavlik> I would like to get it out of the tcpcloud namespace to be a more community-open solution. An alternative to kolla
13:28:03 <jpavlik> it is the same as https://github.com/openstack/fuel-plugin-saltstack
13:28:08 <jpavlik> nothing related to openstack
13:28:10 <newt_> does anyone else want to throw in a vote?
13:28:16 <genunix> +1 for the move, but we should not mix the openstack community with non-openstack things. Otherwise we could also move formulas like postfix, freeipa, jenkins, etc.
13:28:40 <genunix> I would rather try to push the discussion with SaltStack again about making our formulas more official
13:28:49 <epcim> what repo?
13:29:16 <jpavlik> genunix: I think the kubernetes formula is not the same as linux or postfix
13:29:30 <Tux_> I think I agree with jpavlik on this; if there are other barely related projects already, this one could really benefit the community
13:29:48 <jpavlik> I see huge benefit in open CI
13:30:00 <genunix> jpavlik: but you could say freeipa has some relation because you can use the LDAP backend it provides for keystone auth.
13:30:05 <jpavlik> kubernetes should become the new mainstream way of running openstack-salt
13:30:17 <epcim> +1 me too
13:30:19 <genunix> we can make a SaltStack community CI for all of these non-openstack formulas
13:30:42 <jpavlik> genunix: this is the best way, however we do not have the power to get there.
13:31:21 <epcim> jpavlik: but both solutions, deployment to virtuals and kubernetes, should coexist, as both will be used for production.
13:31:40 <newt_> well, we agreed on midonet and swift, kubernetes is a little different, we should take care of the non-openstack formulas in consistent way
13:31:45 <epcim> containerised services for OS might attract new users, for sure
13:32:19 <Tux_> I also think Kubernetes should be an alternative, I wouldn't throw away legacy solutions just yet
13:33:23 <newt_> I will talk to the infra guys about the number of repos we can have
13:33:29 <jpavlik> I am not talking about throwing anything away. The discussion is about getting kubernetes under the openstack namespace, because it is related and it can be part of a future simple installer for the community
13:34:11 <newt_> If the limits are fine I'd consider moving all things related to running openstack with salt there.
13:34:53 <newt_> this leads us to: Tux to add SPM support to formulas and register them in the inventory, if there is any
13:35:06 <newt_> epcim: we'll get to your issue shortly
13:35:07 <genunix> including linux and other formulas?
13:35:34 <jpavlik> they will not approve it, because it is a duplication of the official saltstack formulas
13:35:35 <newt_> if you look at ansible repositories under openstack-salt
13:35:46 <newt_> openstack-ansible I meant
13:36:07 <newt_> you see many non-openstack-related repos
13:36:16 <epcim> jpavlik: no, there may be as many SPM sites as there are individual solutions
13:36:22 <Tux_> newt_: I'm currently facing some difficulties with metadata; the default metadata in the formula root cannot be cleanly included in an SPM package. I'll look into this more, otherwise SPM packaging metadata is already prepared for all formulas
13:36:29 <epcim> tcpcloud may provide an SPM repo for the openstack-salt formulas
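(For reference, SPM builds a package from a FORMULA file placed in the formula root; a minimal sketch for a hypothetical keystone formula follows, with all field values invented:

    name: keystone
    os: Ubuntu, RedHat
    os_family: Debian, RedHat
    version: 201606
    release: 1
    summary: Salt formula for the OpenStack identity service
    description: Installs and configures OpenStack Keystone

The metadata problem Tux_ mentions is that reclass service classes sitting next to this file are not picked up by the SPM build.)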
13:36:56 <newt_> the metadata issue, I'll update the blueprint to handle this
13:37:08 <newt_> +1 to spm repo
13:37:25 <epcim> +1 to spm repo
13:37:29 <jpavlik> +1 spm
13:37:33 <Tux_> I was thinking the same thing +1 for spm repo
13:38:00 <newt_> Tux_: can you note in the salt issue
13:39:05 <newt_> that tcpcloud can host the SPM repository and set up jobs to deliver new versions there
13:39:46 <newt_> so the salt community may start using the openstack-salt formulas right away
13:40:08 <epcim> fine!
13:40:17 <newt_> the metadata issue means the reclass classes are not available, but it is solvable
13:40:32 <newt_> now: epcim to find out if our networking approach is rh7 compatible and suitable
13:40:38 <epcim> continuing "networking approach is rh7 compatible": basically you should read http://askubuntu.com/a/689143 (udev/systemd assigns names based on multiple attributes: mac, firmware, etc.), so on virtualized platforms the names get inconsistent, as a side effect, for automation and cfg. mgmt tools. The solutions are: (A) remove the persistent rules from the
13:40:38 <epcim> configuration on each platform (i.e. ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules; pass `net.ifnames=0` to the kernel); (B) write custom naming rules (internet0, public0) and still do (A); (C) avoid using interface names (as they are not important) and introduce new attributes (private_ip, public_ip, ...) that will be shared in grains etc.
13:40:38 <epcim> (for example, as OHAI does for cloud ISVs: https://github.com/chef/ohai/blob/master/lib/ohai/plugins/cloud.rb)
13:40:50 <epcim> please have a quick look at the links
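(A minimal sketch of option (A) expressed as a salt state, assuming Debian/Ubuntu grub paths; the state IDs are hypothetical and this is untested, not part of the existing formulas:

    # mask the persistent-naming udev rule so interfaces fall back to ethX
    disable_predictable_ifnames:
      file.symlink:
        - name: /etc/udev/rules.d/80-net-name-slot.rules
        - target: /dev/null

    # pass net.ifnames=0 on the kernel command line (rhel 7 would edit the
    # same variable but regenerate the config via grub2-mkconfig instead)
    grub_ifnames:
      file.replace:
        - name: /etc/default/grub
        - pattern: '^GRUB_CMDLINE_LINUX="(?!.*net\.ifnames)'
        - repl: 'GRUB_CMDLINE_LINUX="net.ifnames=0 '
      cmd.run:
        - name: update-grub
        - onchanges:
          - file: grub_ifnames
)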
13:40:59 <Tux_> newt_: Yeah, I would like your support with this issue, I didn't come up with any good solution yet
13:41:22 <Tux_> newt_: At the moment I'm unable to deliver the service class with the SPM package
13:42:48 <genunix> which is expected
13:42:50 <newt_> epcim: we can set kernel params via states, and the default interface names can be map.jinja based?
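(To illustrate the map.jinja idea newt_ raises, a sketch with hypothetical keys; per-deployment pillar data would override the os_family defaults:

    {# map.jinja: hypothetical per-os_family interface-name defaults #}
    {% set network = salt['grains.filter_by']({
        'Debian': {'primary_interface': 'eth0'},
        'RedHat': {'primary_interface': 'eno1'},
    }, merge=salt['pillar.get']('linux:network:lookup')) %}
)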
13:42:58 <epcim> I am for (C) as it's pluggable per environment
13:43:57 <epcim> I see an issue with (B): it may not fit other customers
13:44:06 <newt_> but I think we'll need to go into more detail, it is too much info for me to make a decision :)
13:44:17 <jpavlik> I do not understand what the problem is. You tested ubuntu 16.04 and hit an issue with networking?
13:44:39 <epcim> also, on some platforms (not sure, but ec2, docker) you may not be given the option to modify interface names
13:45:27 <epcim> the issue is that since 16.04 and rhel 7 the interfaces have names like ens6s0
13:45:40 <jpavlik> can we discuss this later on irc channel? because this is very deep dive
13:45:52 <jpavlik> and bring resolution on next irc meeting?
13:46:00 <newt_> this is about the past meeting's action items
13:46:00 <epcim> re the first link: based on the naming scheme used in the rules, the name may contain part of the mac/firmware info etc.
13:46:01 <jpavlik> I need to understand it in more detail
13:46:19 <jpavlik> We are not targeting interfaces by specific names
13:46:24 <jpavlik> it is different per deployment
13:46:31 <epcim> The following naming schemes for network interfaces are now supported by udev natively:
13:46:32 <epcim> 1) Names incorporating Firmware/BIOS provided index numbers for on-board devices (example: eno1)
13:46:32 <epcim> 2) Names incorporating Firmware/BIOS provided PCI Express hotplug slot index numbers (example: ens1)
13:46:32 <epcim> 3) Names incorporating the physical/geographical location of the connector of the hardware (example: enp2s0)
13:46:32 <epcim> 4) Names incorporating the interface's MAC address (example: enx78e7d1ea46da)
13:46:32 <epcim> 5) Classic, unpredictable kernel-native ethX naming (example: eth0) - deprecated
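(And a sketch of option (B) from above, pinning a stable name by MAC address with a custom udev rule managed from salt; the MAC, the name, and the rule file are placeholders:

    custom_ifname_rule:
      file.managed:
        - name: /etc/udev/rules.d/70-custom-net-names.rules
        - contents: |
            SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="internet0"
)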
13:46:47 <jpavlik> but this is not a problem
13:46:55 <epcim> the reclass salt models do, though...
13:47:07 <jpavlik> reclass must be fitted to every physical server
13:47:20 <jpavlik> you cannot predict anything; even if you use the mac address, someone has to set it manually
13:47:33 <epcim> but that is changing with containers, for example
13:47:47 <jpavlik> we do not manage interfaces inside of a container
13:47:52 <jpavlik> you do not care about the ip in a container
13:47:55 <newt_> yes
13:48:04 <jpavlik> there is no interface management
13:48:08 <newt_> no network management if the model is not set
13:48:20 <epcim> in formulas you want to bind services to particular subnets (so you acquire them by interface name "today")
13:48:26 <genunix> also, for these purposes (determining the interface, mac addr, whatever), there are grains
13:48:40 <jpavlik> we bind on the ip address, not the interface
13:48:45 <jpavlik> 0.0.0.0 or vip address
13:48:49 <epcim> (example, as OHAI does for cloud ISVs: https://github.com/chef/ohai/blob/master/lib/ohai/plugins/cloud.rb)
13:48:51 <jpavlik> or single local address
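(A sketch of the bind-by-address pattern jpavlik describes; the pillar keys are hypothetical, the point being that states consume an address, never an interface name:

    # pillar
    keystone:
      server:
        bind:
          address: 0.0.0.0   # or a vip, or a single local address
          port: 5000

    # in the service config template:
    # bind_host = {{ salt['pillar.get']('keystone:server:bind:address', '0.0.0.0') }}
)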
13:49:45 <jpavlik> I suggest discussing this individually
13:49:56 <Tux_> epcim: can you provide us with a specific example of a state that uses interface names directly? I don't know whether we use this at the moment or not
13:49:59 <epcim> grains should return the 'private_ipv4' of the node (whatever the interface). Ohai, for example, checks which interface has the default route to return these entries; it's more complex than just mapping interfaces.
13:50:22 <genunix> we can introduce a grain that will determine and provide that information
13:50:29 <genunix> if needed
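(A sketch of what genunix suggests, staying in jinja and using the stock ip4_interfaces grain; the network_role_map pillar key is invented here, and a real custom grain would do this lookup in the grain itself:

    {# resolve a 'private' address from grains instead of a hard-coded name;
       ip4_interfaces is a standard salt grain (interface -> list of IPv4s) #}
    {% set iface = salt['pillar.get']('network_role_map:private', 'eth0') %}
    {% set private_ipv4 = (grains['ip4_interfaces'].get(iface) or [''])[0] %}
)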
13:50:37 <jpavlik> but there is no need to do that
13:50:41 <jpavlik> I need to know the use case
13:50:53 <jpavlik> because until now I have not cared about interface naming
13:50:54 <epcim> genunix: can we make that pluggable somehow (so that others using the formulas may modify it)?
13:50:58 <jpavlik> only in case of vrouter
13:51:20 <newt_> ok, let's move to today's workload, this discussion is getting some friction :)
13:51:57 <epcim> Tux_: grep for them (it's a general issue for future compatibility).
13:52:10 <newt_> #topic Today's Agenda
13:52:30 <newt_> today's agenda mostly revisits the past one, but I'll try to summarise:
13:52:46 <newt_> get the midonet and swift formulas into the infra repo
13:53:33 <newt_> get the test suite documented and used for midonet and dvr in the 1st wave
13:54:01 <newt_> get SPM into the formulas and a repo ready for all formulas
13:54:21 <newt_> that's it for the recapitulation of our tasks
13:54:22 <newt_> now
13:54:23 <newt_> #topic Open Discussion
13:55:10 <jpavlik> OK, lets discuss this on irc
13:55:26 <jpavlik> thanks everybody
13:55:55 <newt_> does anyone else have something on their mind?
13:57:41 <newt_> well, that's all gentlemen
13:57:55 <newt_> have a nice rest of the day
13:57:58 <newt_> #endmeeting