Thursday, 2016-06-16

05:58 *** changes topic to "OpenStack watcher Project"
07:48 <jed56> hello edleafe
07:48 <jed56> when you have some time, I would like to discuss the nova-policies spec
07:48 <jed56> I think I was not clear enough about the scope of this specification.
07:49 <jed56> I agree with you that since the implementation of the 'check-destination-on-migrations-newton' blueprint, the nova scheduler is able to check whether a live migration would break the scheduler rules.
07:49 <jed56> However, you have to consider this specification from the point of view of Watcher.
07:49 <jed56> Do you think we can optimize a system without knowing its constraints?
<openstackgerrit> Merged openstack/watcher: Fix StrategyContext to use the strategy_id in the Audit Template
<openstackgerrit> Tomasz Kaczynski proposed openstack/watcher: Add scoring engines to database and API layers
<openstackgerrit> Tomasz Kaczynski proposed openstack/python-watcherclient: Add scoring engine commands
09:24 <alexchadin> Good day!
09:31 <jed56> You too :)
09:57 <jed56> tkaczynski: you should add "Partially-Implements: blueprint scoring-module" to your commit message
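The footer jed56 is asking for is a standard Gerrit commit-message tag that links the change to its Launchpad blueprint. A minimal sketch of recording it (the commit title is taken from the review above; the throwaway repository and author details are illustrative stand-ins for a real checkout of openstack/watcher):

```shell
# Illustrative only: a throwaway repo standing in for a checkout of
# openstack/watcher; the footer line is what links the change to the
# blueprint in Gerrit/Launchpad.
repo="$(mktemp -d)"
cd "$repo" && git init -q .
git config "Tomasz Kaczynski"          # placeholder identity
git config "tkaczynski@example.invalid"
echo demo > scoring.txt && git add scoring.txt
git commit -q -m "Add scoring engines to database and API layers

Partially-Implements: blueprint scoring-module"
git log -1 --format=%B    # shows the footer as the last body line
```

The footer must sit in the commit message body (separated from the title by a blank line) for Gerrit to pick it up.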
10:16 <tkaczynski> jed56: thanks, will do. also, I've noticed that some tempest tests failed. do you know how I can run them locally and debug?
10:17 <tkaczynski> apparently adding a new endpoint to the API breaks some tests
<openstackgerrit> Tomasz Kaczynski proposed openstack/python-watcherclient: Add scoring engine commands
<openstackgerrit> Tomasz Kaczynski proposed openstack/watcher: Add scoring engines to database and API layers
11:30 <jed56> tkaczynski: yes, we are aware of the tempest problem
11:31 <tkaczynski> is this something I can fix, or is it an infrastructure problem?
11:33 <jed56> tkaczynski: yes, you can follow
11:33 <tkaczynski> jed56: I've followed these steps and there is a problem: the directory <TEMPEST_DIR>/watcher-cloud/etc doesn't exist
11:33 <tkaczynski> after the tempest init command
11:34 <jed56> $ cd <TEMPEST_DIR>
11:34 <jed56> $ tempest init --config-dir ./etc watcher-cloud
11:34 <tkaczynski> cd <TEMPEST_DIR>/watcher-cloud/etc
11:34 <tkaczynski> but I'm doing it within a virtualenv, so should a different command be used?
11:37 <jed56> I'll have to try; it's been a while since I last ran tempest.
11:38 <tkaczynski> jed56: thanks. I'm trying as well in the meantime. will let you know if I manage to do this
11:42 <tkaczynski> ok, got the etc folder. now I'm editing the tempest.conf file. how do I know which keystone I have (v3 or v2)?
11:45 <jed56> tkaczynski: it depends on your OpenStack deployment
11:46 <jed56> you can take a look at the keystone endpoints
11:50 <alexchadin> tkaczynski: as I remember, there is a special field in the openrc file which forces the use of keystone v3
11:54 <tkaczynski> alexchadin: OS_AUTH_URL has this value: cd <TEMPEST_DIR>/watcher-cloud/etc. so I guess I'm on v2?
11:55 <tkaczynski> alexchadin: OS_AUTH_URL has this value: . so I guess I'm on v2?
11:55 <alexchadin> for v3 we need
11:56 <tkaczynski> ok, thanks
11:56 <alexchadin> I ran the watcher tempest tests about 2 months ago
11:56 <alexchadin> they worked fine
11:58 <tkaczynski> we'll see. for me it's another hurdle to pass
12:17 <edleafe> jed56: good UGT morning
12:17 <jed56> I'm currently in a meeting
12:19 <edleafe> jed56: ok, I'm just starting my day
12:19 <edleafe> ping me when you're free
12:19 <jed56> edleafe: okay, thanks a lot
<openstackgerrit> Jinquan Ni proposed openstack/watcher: Use disabled/enabled to change service state
<openstackgerrit> Jinquan Ni proposed openstack/watcher: Use disabled/enabled to change service state
12:34 <tkaczynski> jed56, alexchadin: any idea where I can get the public_network_id value (in tempest.conf) from?
12:35 <alexchadin> tkaczynski: good question :)
12:35 <alexchadin> tkaczynski: don't you have one?
12:35 <alexchadin> tkaczynski: public_net
12:36 <tkaczynski> you mean, an environment value?
12:36 <alexchadin> a neutron net
12:37 <tkaczynski> I'm bad with openstack configuration :) I guess there is a neutron command which should list my networks?
12:39 <alexchadin> neutron net-list
12:40 <alexchadin> or you can use openstack network list if you have openstackclient installed
12:42 <tkaczynski> "public endpoint for network service in RegionOne region not found"
12:43 <tkaczynski> the openstack client returned only one row: "f14c0655-5a02-47c7-89b9-09b6a14c101e | private |"
12:45 <alexchadin> you can try to use the uuid of the private net
<alexchadin> or create a public net
12:47 <alexchadin> you also need to check whether you have a public neutron endpoint or not
12:48 <alexchadin> feel free to ask :)
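For the public_network_id question: tempest.conf wants a bare UUID, which can be cut out of the client's table output. A sketch fed with the single row tkaczynski pasted above (field positions are taken from that pasted fragment; a live cloud would pipe the output of the commented command instead):

```shell
# On a live cloud the row would come from the client, e.g.:
#   rows="$(openstack network list)"
# Here we reuse the single row pasted above.
rows='f14c0655-5a02-47c7-89b9-09b6a14c101e | private |'
printf '%s\n' "$rows" | awk -F'|' '/private/ { gsub(/ /, "", $1); print $1 }'
# prints: f14c0655-5a02-47c7-89b9-09b6a14c101e
```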
12:48 <tkaczynski> I don't want to mess with my devstack. it took me a few weeks to set it up! trying to run the tests now, but no success. I suspect that testr doesn't know about the tempest.conf file in the watcher-cloud/etc folder
12:51 <alexchadin> but you use the --config parameter, right?
12:52 <tkaczynski> this command: ./ --config watcher-cloud/etc/tempest.conf -N -- watcher
12:52 <tkaczynski> getting: tempest.exceptions.InvalidConfiguration: Invalid Configuration
12:52 <tkaczynski> Details: Identity v3 API enabled, but no identity.uri_v3 set
12:53 <tkaczynski> but I do have v2 enabled (as in the instructions)
12:54 <alexchadin> could you please give me your conf?
12:55 <alexchadin> I don't know why, but irccloud isn't working for me
12:56 <alexchadin> I just can't open the link
12:59 <alexchadin> you need to disable api_v3
12:59 <alexchadin> it is currently set to true
13:02 <tkaczynski> which section is this?
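For reference, the v3 toggle lives in the identity-feature-enabled section. A hypothetical tempest.conf fragment for a v2-only devstack (section and option names as used by Tempest in that era; the URL is a placeholder for the deployment's own keystone endpoint):

```ini
# tempest.conf fragment (sketch): run against Identity v2 only, matching the
# "no identity.uri_v3 set" error above.
[identity]
uri =

[identity-feature-enabled]
api_v2 = true
api_v3 = false
```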
13:05 <tkaczynski> something has started, tons of GET queries
13:06 <tkaczynski> alexchadin: thank you for your help, really appreciate it!
13:07 <alexchadin> tkaczynski: you are welcome :)
13:09 <tkaczynski> 30 tests passed, 8 failed. now I'm wondering what my change has to do with these tests :) I don't see any connection...
<openstackgerrit> Daniel Pawlik proposed openstack/puppet-watcher: Implement api.pp
<openstackgerrit> Jinquan Ni proposed openstack/watcher: Use disabled/enabled to change service state
15:10 <jed56> edleafe: yep
15:10 <edleafe> jed56: so what am I missing?
15:10 <jed56> so, I think I was not clear enough about the scope of these nova policies.
15:11 <jed56> Watcher doesn't just want to pick up a VM and ask nova to find a new placement
15:12 <jed56> we want to build an Audit which can contain several actions
15:13 <edleafe> Yes, I understand that
15:13 <jed56> So, how do we develop a strategy capable of finding a new placement without knowing the constraints of the system?
15:15 <edleafe> It sounds like you want to check any/every potential placement with the nova scheduler before making a choice. Is that correct?
15:15 <jed56> For example, we have an instance with an affinity constraint
15:16 <jed56> the idea is not to try every potential placement, because that would consume too much time
15:17 <edleafe> My view is that Watcher cannot possibly know these things, and shouldn't
15:17 <jed56> However, we could for example use constraint programming or meta-heuristics, which can take the constraints as an input
15:17 <edleafe> It should make movement decisions based on the state of the data center and the established Watcher rules
15:18 <jed56> what do you mean?
15:18 <edleafe> Once it decides to move, it is up to the Nova scheduler to check/enforce any nova-specific constraints
<openstackgerrit> Michael Gugino proposed openstack/watcher: WIP: Implement goal_id and watcher_id into Audit api
15:18 <jed56> yes, but in that case you limit watcher to being an instance selector
15:19 <edleafe> That's the point of the spec I have: to allow Watcher to send multiple potential hosts to a live migrate call
15:19 <edleafe> Nova will then select one that meets its constraints
15:20 <jed56> yes, but then watcher may never propose moving a group of instances with an affinity constraint
15:21 <jed56> so the proposed solutions will never be accepted
15:21 <jed56> by the scheduler
15:21 <jed56> moreover, we will only be able to propose one action at a time
15:21 <edleafe> jed56: of course. Watcher tries to optimize, but the constraints don't allow optimization
15:22 <edleafe> Why only one at a time?
15:23 <jed56> example: if I move this instance from this host to another, I will then be able to move these 5 VMs together
15:23 <jed56> to that host afterwards
15:23 <jed56> we need a more holistic approach
15:24 <edleafe> That's an even bigger change to live migration
15:24 <jed56> moreover, with watcher we will be able to schedule these actions
15:24 <jed56> in time
15:25 <edleafe> You will have race conditions if you go through all the pre-checking to make sure that a host is OK, and then some other process builds a big VM on your selected host
15:26 <edleafe> It's much better to use an optimistic design, and plan for the occasional failure
15:26 <jed56> that's the reason why we need to have alternative paths
15:26 <jed56> and when the audit goes stale, we have to build a new one
15:28 <edleafe> There are no plans now to expose select_destinations as a public API
15:28 <jed56> IMHO, we could find a new cluster organization in the watcher strategies, then propose the ActionPlan to nova only to check that the audit is still valid: if it is, we apply it, otherwise we create a new one
15:29 <jed56> I agree this is maybe not for now, but this is something we could maybe help with
15:30 <jed56> select_destinations would only be used to validate the proposed actions
15:30 <edleafe> that is a long-term goal of the separation of the scheduler from nova
15:30 <edleafe> it is several years off, at the earliest
15:30 <jed56> I agree; I forked nova to do the modifications
15:31 <edleafe> I don't think that relying on that being implemented is a good strategy for Watcher
15:31 <jed56> edleafe: I agree, we don't want to rely on select_destinations in the strategies
15:31 <jed56> we want to retrieve the nova policies
15:32 <jed56> find a new organization of the cluster
15:32 <jed56> then, in the watcher applier, we want to verify that the choices we made still conform
15:32 <edleafe> What exactly do you mean by "nova policies"?
15:32 <jed56> by nova policies I mean the scheduler hints or the constraints
15:32 <jed56> affinity, anti-affinity, etc
15:33 <jed56> we can retrieve these constraints internally like that
15:34 <edleafe> those are a) just a small subset of the decision-making criteria, and b) not a contract that can be relied on
15:34 <edleafe> So at best you might get a small increase in host selection
15:35 <edleafe> At worst you will be just as bad, but running slower
15:35 <jed56> what are the other criteria?
15:35 <jed56> "At worst you will be just as bad, but running slower" => that is the strategy's problem
15:36 <jed56> there are many solutions for finding a pseudo-optimal solution in a short period of time
15:36 <edleafe> There are flavor extra_specs, aggregate/availability zones, PCI device mapping...
15:37 <edleafe> You will be essentially duplicating the scheduler's functioning, or relying on an API that won't exist for some time
15:37 <jed56> and they are not available in the InstanceGroup?
15:38 <jed56> edleafe: "or relying on an API that won't exist for some time" => I agree, but what if we propose a better solution :)
15:40 <edleafe> I don't think that you can get the host information you need from InstanceGroups. That data is also in a Nova database, and it should not be accessed from outside of Nova.
15:41 <jed56> edleafe: by default nova doesn't expose this information. that's the reason why we should propose modifying nova to expose it
15:42 <jed56> I have a version which is capable of doing that locally
15:42 <edleafe> jed56: I will guarantee that proposals like that will never be accepted
15:42 <jed56> it's a PoC which allows access to this information through a RESTful API
15:43 <jed56> edleafe: why? :)
15:43 <edleafe> Several reasons
15:43 <jed56> That is not the idea behind it
15:44 <edleafe> This is internal nova data, and is not guaranteed to remain the same
15:44 <edleafe> There are more important things to work on
15:44 <edleafe> And yes, the separate-scheduler end goal is another reason
15:44 <jed56> I agree that the data structure can evolve
15:45 <edleafe> we have to get the scheduler cleaned up in order to make it stand on its own
15:45 <edleafe> Once that happens, then maybe what you want will be possible.
15:45 <edleafe> But until then, trying to add more stuff will almost certainly be rejected
15:46 <edleafe> I'm not trying to be negative; I'm trying to keep things on the most realistic path forward
15:46 <jed56> "I'm not trying to be negative" => don't worry
15:46 <jed56> Yes, I totally agree that it is not easy
15:46 <edleafe> If we can get the change to allow specifying multiple hosts in a live migration request, that will be a huge win for Watcher
15:47 <edleafe> And even a change like that, proposed in Mitaka, is disruptive enough that it will land in Ocata at the earliest
15:47 <jed56> the multiple-hosts blueprint is one step for us
15:48 <jed56> So, what do you propose? do we have to add several plugins to watcher capable of using the internal nova rpc api?
15:49 <edleafe> no! that would be terrible
15:49 <jed56> edleafe: I agree
15:49 <edleafe> It would mark Watcher as a rogue project that doesn't respect boundaries
15:50 <edleafe> I suggest going for an optimistic design
15:50 <jed56> but the problem still exists: we will never be able to propose good optimizations without this information
15:50 <edleafe> IOW, assume that one of the proposed hosts will pass, and handle the few cases where they don't
15:50 <edleafe> jed56: I don't believe that's true
15:51 <edleafe> In the case where there are lots of affinity rules, well, they are attempting their own optimization
15:51 <edleafe> They probably won't be using Watcher
15:51 <edleafe> Most deployments that would be interested in Watcher will be more cloudy than that
15:52 <jed56> they will not use watcher because with the current version we are not aware of these affinities
15:52 <edleafe> IOW, where all hosts are roughly equivalent in suitability for a VM
15:52 <jed56> what do you mean by more cloudy?
15:52 <edleafe> No, if they have affinity rules, you can't do very much re-arranging, can you?
15:53 <edleafe> see my next line
15:53 <jed56> "if they have affinity rules, you can't do very much re-arranging, can you?" => IMHO, we can
15:54 <jed56> we may violate these constraints, but only for a short period of time
15:54 <edleafe> Well, you may be right, but that hasn't been my experience
15:55 <jed56> Do you mean that if you have an affinity constraint, it is the responsibility of the administrator?
15:56 <jed56> IMHO, watcher is designed to be an automatic tool
15:56 <edleafe> No, it's the user who requests the affinity
15:56 <edleafe> so those VMs aren't very movable
15:56 <edleafe> Many deployments don't use the affinity filters
15:57 <edleafe> Users get VMs wherever they fit
15:57 <jed56> are you sure about that?
15:57 <edleafe> many use anti-affinity filters
15:57 <edleafe> to keep VMs from the same project/user on different hosts
15:58 <edleafe> I have a meeting in 2 minutes
15:58 <jed56> the problem stays the same for watcher
15:58 <edleafe> I'd like to continue this at the Watcher meeting
15:59 <jed56> edleafe: thanks a lot for your time
15:59 <edleafe> except that next week it's the early one and I'll be asleep
15:59 <jed56> edleafe: I would enjoy continuing the discussion
15:59 <jed56> have a nice day
15:59 <edleafe> you too!
16:01 <dtardivel> on a Telco cloud, with complex applications, I think (anti-)affinity rules are used a lot
<openstackgerrit> Michael Gugino proposed openstack/watcher: WIP: Implement goal_id and watcher_id into Audit api
<openstackgerrit> OpenStack Proposal Bot proposed openstack/watcher: Updated from global requirements
<openstackgerrit> Michael Gugino proposed openstack/watcher: WIP: Implement goal_id and watcher_id into Audit api
<openstackgerrit> Michael Gugino proposed openstack/watcher: WIP: Implement goal_id and watcher_id into Audit api

Generated by 2.14.0 by Marius Gedminas - find it at!