16:02:33 <m3m0__> #startmeeting openstack-freezer 12-11-2015
16:02:34 <openstack> Meeting started Thu Nov 12 16:02:33 2015 UTC and is due to finish in 60 minutes.  The chair is m3m0__. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:38 <openstack> The meeting name has been set to 'openstack_freezer_12_11_2015'
16:02:40 <m3m0__> All: meetings notes available in real time at: https://etherpad.openstack.org/p/freezer_meetings
16:03:04 <m3m0__> hello all, the first topic of the meeting will be parallel backups
16:03:08 <m3m0__> #topic parallel backups
16:03:16 <m3m0__> reldan any update?
16:03:36 <daemontool_> Hi m3m0__
16:03:41 <reldan> Hi m3m0
16:04:24 <reldan> I’m working on parallel backups. Everything seems to be nice, I have a MultipleStorage now, so it seems a big part of the code will be the same as before
16:04:37 <reldan> I’m going to show results next week
16:04:56 <m3m0__> cool, any blockers that you have so far?
16:05:15 <reldan> Nope, have no blockers. Everything is clear
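For illustration, a minimal sketch of the MultipleStorage idea reldan describes: one object exposing the storage interface and fanning each write out to several concrete backends in parallel. The class and method names here are assumptions for the sketch, not Freezer's actual code.

```python
from concurrent.futures import ThreadPoolExecutor


class MultipleStorage(object):
    """Hypothetical wrapper that delegates to a list of storage backends."""

    def __init__(self, storages):
        self.storages = storages

    def upload_backup(self, backup):
        # Run the same upload against every backend concurrently and
        # re-raise the first failure, if any.
        with ThreadPoolExecutor(max_workers=len(self.storages)) as pool:
            futures = [pool.submit(s.upload_backup, backup)
                       for s in self.storages]
            for future in futures:
                future.result()
```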
16:05:39 <m3m0__> this is going to be released for liberty, right?
16:05:48 <m3m0__> what will be next?
16:06:28 <reldan> I don’t know, Fausto should know better. We decided that the deadline will be 25 November.
16:06:45 <m3m0__> we need to define that on the freezer roadmap
16:07:00 <daemontool_> yep
16:07:01 <daemontool_> ==
16:07:02 <daemontool_> ++
16:07:22 <m3m0__> https://wiki.openstack.org/wiki/FreezerRoadmap
16:07:24 <reldan> What will be next. I would like to make some changes in swift storage.
16:07:39 <m3m0__> anything critical?
16:07:53 <reldan> Nope, just code optimization and cleaning
16:08:33 <reldan> I also would like to refactor 1) lvm and shadow (separate database locking from snapshotting)
16:09:04 <reldan> 2) mysql, mssql etc - I would like to have classes with some structure instead of functions
16:09:11 <daemontool_> yes
16:09:15 <m3m0__> #agree on that
16:09:15 <daemontool_> reldan,  that sounds reasonable
16:09:31 <daemontool_> #agreed
16:09:36 <m3m0__> and I would like to add something to that
16:09:51 <m3m0__> the sql server behaviour should be similar to the one in mysql
16:09:57 <daemontool_> yep
16:09:59 <vannif> reldan, code for snapshots and mysql needs to be intermixed. are we going to use some kind of "hooks" ?
16:10:08 <m3m0__> where the database only gets locked at the snapshot creation
16:10:56 <reldan> I don’t believe it should be intermixed. We should have separate objects for the application and for lvm/shadow. Like:
16:11:05 <reldan> Application: stop(); run()
16:11:07 <vannif> yes. separated objects
16:11:16 <reldan> Snapshot: create(); delete()
16:11:29 <reldan> And probably a third class that will actually execute it
16:11:33 <vannif> but, the procedure to backup calls the various methods in sequence
16:11:47 <vannif> and has to roll back in case of errors
16:11:48 <reldan> application.stop(); snapshot.create(); application.run(); snapshot.delete()
16:12:00 <reldan> Ok then
16:12:09 <reldan> application.stop();
16:12:15 <reldan> try snapshot.create()
16:12:21 <reldan> finally application.run()
16:12:32 <daemontool_> the ; at the end came from Perl? lol :)
16:13:06 <reldan> Because right now I cannot execute a mysql backup on Windows
16:13:12 <reldan> And it’s really wrong
16:14:22 <m3m0__> yes, the behaviour right now is very *unix
16:14:56 <reldan> Either we can provide the application to the snapshot, or the snapshot to the application. But I absolutely believe the lvm/shadow code shouldn’t know about the concrete mysql, mssql, mongodb stop/run implementations
16:15:36 <daemontool_> #agreed
16:15:41 <m3m0__> the less dependencies between each other the better
16:15:45 <daemontool_> yep
16:16:15 <reldan> http://pastebin.com/abeDAQ0h
16:16:18 <m3m0__> that will allow us to create a snapshot at the beginning of the job and not in the action itself
16:16:33 <reldan> line 70
16:16:46 <reldan> line 85
16:17:26 <reldan> Otherwise it will be a lot of checks if mode is mysql … elif mode is mongodb … elif msserver ...
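A sketch of the separation being proposed, with hypothetical names: applications only know how to stop and run, snapshots only know how to create and delete, and a small driver runs the sequence with rollback, so the lvm/shadow code never branches on mysql vs mongodb vs msserver.

```python
import abc


class Application(abc.ABC):
    """One subclass per application: mysql, mssql, mongodb, ..."""

    @abc.abstractmethod
    def stop(self):
        """Quiesce the application, e.g. flush and lock tables."""

    @abc.abstractmethod
    def run(self):
        """Resume normal operation."""


class Snapshot(abc.ABC):
    """One subclass per snapshot technology: lvm, shadow copy, ..."""

    @abc.abstractmethod
    def create(self):
        """Take the snapshot."""

    @abc.abstractmethod
    def delete(self):
        """Release the snapshot."""


def consistent_snapshot(application, snapshot):
    # The "third class" reduced to a function: the application is always
    # restarted, even if creating the snapshot fails.
    application.stop()
    try:
        snapshot.create()
    finally:
        application.run()
```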
16:18:00 <reldan> Next thing
16:18:18 <reldan> I know that Zaher wants to provide the possibility to write plugins for freezer
16:18:59 <reldan> Like I need to create an s3 storage, but don’t want to share it. Probably it is not good enough for merging to master, but good enough for me
16:19:28 <reldan> So if I understand it correctly, we should have some dir in freezer, scan this directory and load modules from it
16:19:53 <m3m0__> we need to explore how horizon handles this
16:19:55 <reldan> Sounds good, but it requires us to think very carefully about internal interfaces
16:20:29 <m3m0__> that way we can import/extend/override freezer functions and classes
16:20:35 <reldan> We should say, if you want to add a new storage: 1) create a class 2) implement methods a, b, c 3) add metainformation
16:20:50 <reldan> yes, agree
16:21:03 <reldan> So we need to decide which components we would like to have pluggable
16:21:18 <reldan> Decide and fix interfaces
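A rough sketch of the plugin loading being discussed, not an agreed design: scan a directory, import every module in it, and register whatever declares itself as a storage. The FREEZER_PLUGIN marker and its keys are assumptions for illustration.

```python
import importlib.util
import os


def load_storage_plugins(plugin_dir):
    """Import every .py file in plugin_dir and collect declared storages."""
    storages = {}
    for filename in os.listdir(plugin_dir):
        if not filename.endswith('.py'):
            continue
        path = os.path.join(plugin_dir, filename)
        spec = importlib.util.spec_from_file_location(filename[:-3], path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # 3) metainformation: the module says what it provides
        meta = getattr(module, 'FREEZER_PLUGIN', None)
        if meta is not None:
            storages[meta['name']] = getattr(module, meta['class'])
    return storages
```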
16:21:34 <m3m0__> I would like to be involved in that :)
16:21:55 <daemontool_> m3m0__,  you are involved already :)
16:22:09 <m3m0__> that's true :)
16:22:18 <reldan> Great ) So probably when we have time, we should review the components and the interfaces between them
16:22:22 <reldan> That is all from my side
16:22:25 <m3m0__> yep #agree
16:22:33 <daemontool_> #agreed
16:22:42 <m3m0__> does anyone have anything more to say on this topic?
16:23:05 <openstackgerrit> Memo Garcia proposed openstack/freezer-web-ui: Improved horizon dashboard for freezer  https://review.openstack.org/236175
16:23:07 <daemontool_> (let's use a subsection in the wiki)
16:24:02 <m3m0__> ok, next topic
16:24:13 <m3m0__> #topic freezer included in the big tent
16:24:19 <daemontool_> http://governance.openstack.org/reference/projects/freezer.html
16:24:34 <daemontool_> congratulations to everybody :)
16:24:46 <m3m0__> I would like to take a minute and congratulate everyone for this amazing achievement
16:24:58 <daemontool_> ++
16:25:01 <daemontool_> #agreed
16:25:07 <m3m0__> we need some more beers to celebrate :)
16:25:24 <daemontool_> tonight 6:30 pm GMT :)
16:25:38 <daemontool_> McSwiggans of Galway if anyone is interested :)
16:25:49 <m3m0__> +2
16:26:20 <Slashme_> +1 congrats everyone
16:26:30 <m3m0__> well moving to the next topic :) we are short on time
16:26:40 <m3m0__> #topic freezer api
16:26:46 <m3m0__> vannif, any news?
16:27:17 <vannif> yep. sent the code for review, with unit tests and coverage
16:27:41 <m3m0__> which commit is it?
16:27:49 <m3m0__> is it the one related to action_id?
16:28:06 <vannif> now, when creating a job in the api, the api can understand the content of the job, extract the actions and put them in the actions db with a proper action_id
16:28:33 <vannif> https://review.openstack.org/#/c/244245
16:28:58 <m3m0__> so guys, please review ^^
16:29:02 <vannif> then, solved some minor bugs with the db-init
16:29:10 <m3m0__> question
16:29:23 <vannif> https://review.openstack.org/#/c/244072/
16:29:23 <vannif> https://review.openstack.org/#/c/243679/
16:29:24 <m3m0__> when freezer is installed it creates a freezer-db-init executable
16:29:26 <vannif> shoot
16:29:43 <vannif> yes
16:29:47 <m3m0__> should it be there? I don't think it makes sense from an end user perspective
16:29:51 <m3m0__> but I could be wrong
16:30:19 <fabriziof> hello all, sorry to be late
16:30:28 <vannif> to be more precise, it's in the freezer-api repo
16:30:35 <vannif> not freezer (agent)
16:30:38 <m3m0__> oooooo
16:30:46 <m3m0__> well makes more sense now
16:31:51 <vannif> then. you suggested the addition of a backup_uuid to support the web-ui
16:32:04 <vannif> and there is the issue of the client_id
16:32:05 <m3m0__> there is also this commit related to the api: https://review.openstack.org/#/c/244563/
16:32:23 <vannif> exactly
16:32:55 <vannif> regarding the client_id, we talked a little about that
16:32:58 <vannif> to summarize:
16:33:22 <openstackgerrit> Merged openstack/freezer-api: fix freezer-db-init modifies replicas in test mode  https://review.openstack.org/244072
16:33:42 <vannif> having it default to project-id_hostname can result in conflicts
16:34:03 <vannif> but having it created as a uuid every time can be a problem as well
16:34:29 <m3m0__> yes, for example while retrieving data from the api when you don't know the id
16:34:47 <vannif> yes. so we are thinking of having it stored in a configuration file.
16:35:04 <m3m0__> what happens if that configuration gets deleted?
16:35:16 <fabriziof> vannif: a file where ?
16:35:34 <vannif> when not defined it gets created, but then stored. so subsequent invocations of the freezer-scheduler (and the future python-freezerclient) will result in using the same client_id
16:35:53 <vannif> we were thinking of ~/.freezer/scheduler.conf
16:36:11 <vannif> or (if not present) /etc/freezer/scheduler.conf
16:36:11 <m3m0__> why not /etc?
16:36:37 <m3m0__> etc should be first IMO
16:36:57 <fabriziof> in /etc you need to be root
16:37:49 <vannif> to modify it, yes. anyway, we'll stick to the *nix standard way of looking for config files, that's a side problem, I think.
16:38:03 <vannif> the important concept is that of storing the client_id locally
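A minimal sketch of the load-or-create behaviour under discussion, assuming the id is kept as a single line in a file; the real scheduler would presumably read and write its scheduler.conf instead, and the path here is an assumption.

```python
import os
import uuid

# Assumed location; the fallback to /etc/freezer/scheduler.conf is omitted.
CLIENT_ID_FILE = os.path.expanduser('~/.freezer/client_id')


def get_client_id(path=CLIENT_ID_FILE):
    """Return the stored client_id, creating and persisting one if absent."""
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    client_id = uuid.uuid4().hex
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, 'w') as f:
        f.write(client_id)
    return client_id
```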
16:38:31 <fabriziof> exactly vannif, I don't like the idea
16:39:08 <m3m0__> why not? fabriziof ?
16:39:54 <vannif> I'm not that sure either, but I can see it has some good points
16:40:35 <fabriziof> if you lose the file?
16:40:50 <m3m0__> yes I agree with that point
16:41:06 <m3m0__> is this a blocker vannif?
16:41:24 <Slashme_> If you lose the file, you can recover the id from freezer-scheduler client-list
16:41:58 <m3m0__> there is a bp on this:
16:42:27 <Slashme_> About the api, I'm working on a patch that will allow defining properties that span across multiple actions of a job.
16:42:28 <Slashme_> When the api receives a job that contains actions_default, it will expand those properties to all actions in that job.
16:42:35 <vannif> https://blueprints.launchpad.net/freezer/+spec/better-client-id
16:42:46 <Slashme_> Oops, wrong copy / paste
16:42:47 <Slashme_> https://blueprints.launchpad.net/freezer/+spec/better-client-id
16:43:40 <m3m0__> please all review this bp and send feedback
16:43:51 <fabriziof> if you can retrieve the client_id then the only problem I see is security
16:44:23 <vannif> the client_id is not a secret even now
16:45:04 <vannif> even if you provide the client_id of another user, you need to provide valid OS credentials to access that information
16:45:17 <vannif> or you mean something else ?
16:47:26 <fabriziof> I'm just worried about saving sensitive things locally on disk
16:47:42 <emildi> https://docs.python.org/2/library/uuid.html
16:48:19 <fabriziof> since most of the customers I have been in touch with are concerned
16:48:56 <Slashme_> What sensitive info?
16:49:14 <fabriziof> users / pwds / ids etc...
16:49:52 <fabriziof> sometimes users of vm's are not allowed to access the openstack part
16:50:15 <Slashme_> Yes, but the scheduler client_id is not sensitive at all
16:50:28 <fabriziof> but let's skip this for now and have a security specific discussion in the near future
16:50:39 <daemontool_> ++
16:50:40 <m3m0__> and the users should have permissions at least to store data on swift and get a keystone token
16:50:49 <m3m0__> #agree on that
16:51:02 <m3m0__> we need to define that on the roadmap as well
16:51:14 <daemontool_> I think we can move forward
16:51:17 <m3m0__> anything more to say related to the api?
16:51:18 <daemontool_> yes
16:51:23 <m3m0__> #topic free4all
16:51:30 <daemontool_> branching
16:51:47 <daemontool_> so currently our code is aligned with kilo requirements
16:51:51 <m3m0__> yes
16:52:01 <daemontool_> as soon as https://review.openstack.org/#/c/244245/
16:52:12 <daemontool_> is in we can create the stable/kilo branch
16:52:19 <m3m0__> and as soon as we merge the new changes on the ui and api we have to create that branch
16:52:25 <daemontool_> yes
16:52:32 <m3m0__> this: https://review.openstack.org/#/c/236175/
16:52:40 <m3m0__> and this: https://review.openstack.org/#/c/244563/
16:52:41 <daemontool_> exactly, thanks
16:52:51 <daemontool_> after that
16:52:57 <Slashme_> So, stable/kilo tomorrow? And we need to define the date when we will branch liberty, so I can put it on the roadmap
16:53:20 <m3m0__> do we need to agree on that this very moment Slashme_ ?
16:53:22 <daemontool_> after stable/kilo, master will be liberty
16:53:45 <daemontool_> Slashme_, I think as soon as the parallel backup change is in
16:53:53 <daemontool_> we can branch liberty
16:54:19 <daemontool_> and in master align the requirements with mitaka
16:54:37 <daemontool_> so I think, no later than tomorrow we need to create stable/kilo
16:54:51 <m3m0__> so, as Slashme_ said, after kilo, liberty should be a very quick release
16:55:04 <m3m0__> only containing a few new features and bug fixes
16:55:14 <m3m0__> because we need to catch up with mitaka
16:55:27 <daemontool_> and also see if anything breaks after we change the requirements
16:55:29 <daemontool_> yes
16:55:47 <m3m0__> for liberty we need to send the windows scheduler
16:55:52 <m3m0__> I'll work on that
16:55:56 <daemontool_> we need to make sure freezer works with kilo onwards..
16:55:59 <daemontool_> ok
16:56:04 <daemontool_> #agreed
16:56:08 <m3m0__> and for mitaka, test coverage for the web-ui
16:56:15 <daemontool_> ++
16:56:22 <daemontool_> let's write that in the wiki please
16:56:29 <daemontool_> so we can send an email to the OS ml
16:56:45 <m3m0__> yes, and send an email to the openstack-dev
16:57:10 <daemontool_> ++
16:57:23 <daemontool_> and release also a new pypi version
16:57:28 <daemontool_> so looks like
16:57:30 <m3m0__> so can we agree that we will release liberty by the end of December?
16:57:32 <daemontool_> openstack is moving away
16:57:38 <daemontool_> from releases with dates
16:57:47 <m3m0__> to releases with ...?
16:57:51 <daemontool_> m3m0__, before Christmas
16:57:57 <daemontool_> like
16:58:02 <daemontool_> release 2015.12.10
16:58:06 <m3m0__> liberty by 24 of December?
16:58:09 <daemontool_> that is not used anymore
16:58:12 <daemontool_> liberty
16:58:29 <daemontool_> I'd say by 18th of December
16:58:40 <daemontool_> no later than that
16:58:54 <daemontool_> it would be more than 1 month from now
16:59:22 <m3m0__> 20 is a Friday
16:59:27 <Slashme_> +1 for 20
16:59:44 <m3m0__> ok guys we are running out of time
16:59:59 <daemontool_> isn't the 18th of December a Friday?
16:59:59 <m3m0__> are we clear on the next steps?
17:00:10 <daemontool_> ok
17:00:23 <daemontool_> also
17:00:31 <daemontool_> I'd like to propose the following
17:00:50 <daemontool_> Slashme_, as a core reviewer, all in favor?
17:00:54 <m3m0__> +2
17:01:04 <daemontool_> vannif, fabriziof ?
17:01:08 <daemontool_> reldan,  ?
17:01:19 <daemontool_> sounds good?
17:01:33 <vannif> #agree
17:01:52 <daemontool_> and reldan also as a core reviewer, all in favor?
17:01:57 <vannif> and reldan too
17:02:01 <daemontool_> yes
17:02:03 <daemontool_> :)
17:02:05 <vannif> yes
17:02:07 <reldan> :)
17:02:10 <m3m0__> +2 for reldan as well
17:02:15 <Slashme_> +1 for @reldan
17:02:24 <daemontool_> we have to send an email to the openstack-dev ml
17:02:28 <daemontool_> but before
17:02:44 <daemontool_> I'd like to send the roadmap changes that were requested by the TC
17:02:51 <daemontool_> so please let's do that fast :)
17:03:35 <daemontool_> so please when I send the email for Slashme_  and reldan, the current cores send a reply with +1
17:03:44 <daemontool_> as that is how it works in openstack
17:03:48 <m3m0__> yes of course
17:04:05 <daemontool_> I'm also revisiting the mission
17:04:08 <daemontool_> with dhellmann
17:04:26 <daemontool_> that's all from me
17:04:34 <m3m0__> thanks a lot daemontool_
17:04:36 <daemontool_> I'd like to add block based incrementals for Liberty
17:04:41 <daemontool_> let's see if I can make it
17:05:22 <m3m0__> guys thanks for your time
17:05:26 <m3m0__> and please review
17:05:27 <m3m0__> #endmeeting