Monday, 2016-03-07

*** Qiming has quit IRC 00:02
*** shu-mutou-AFK is now known as shu-mutou 00:12
*** dixiaoli_ has quit IRC 01:00
*** zzxwill has joined #senlin 01:04
*** Qiming has joined #senlin 01:07
*** dixiaoli_ has joined #senlin 01:10
*** dixiaoli_ has quit IRC 01:14
*** Yanyanhu has joined #senlin 02:09
openstackgerrit: Merged openstack/python-senlinclient: Updated from global requirements
openstackgerrit: Merged openstack/senlin: Improve the text in install-guide
openstackgerrit: Yanyan Hu proposed openstack/senlin: Move an item back to TODO list
lixinhui: Yanyan Hu, it seems that the latest neutron/neutron-lbaas cannot bring up lbaas successfully... 02:26
lixinhui: could you let me know which version you are using? 02:26
Yanyanhu: ok, let me check it 02:33
Yanyanhu: hi, lixinhui, I think it is
Yanyanhu: I installed it using devstack a long while ago 02:35
Yanyanhu: several months ago 02:36
lixinhui: got it, thanks, Yanyanhu 02:37
Yanyanhu: you're welcome 02:38
*** elynn has joined #senlin 02:39
*** dixiaoli has joined #senlin 02:46
openstackgerrit: Yanyan Hu proposed openstack/senlin: Minor revise os.nova.server profile
*** xuhaiwei has quit IRC 02:54
openstackgerrit: Merged openstack/senlin: Move an item back to TODO list
*** dixiaoli has left #senlin 03:04
openstackgerrit: Yanyan Hu proposed openstack/senlin: Minor revise os.nova.server profile
*** yuanying has quit IRC 03:22
*** idonotknow_ has joined #senlin 03:31
*** idonotknow__ has joined #senlin 03:39
*** yuanying has joined #senlin 03:40
*** idonotknow_ has quit IRC 03:41
*** yuanying has quit IRC 03:44
*** zzxwill has quit IRC 03:50
*** elynn has quit IRC 03:58
openstackgerrit: Merged openstack/senlin: Minor revise os.nova.server profile
*** zzxwill has joined #senlin 04:04
*** yuanying has joined #senlin 04:08
*** zzxwill has quit IRC 04:10
*** Yanyanhu has quit IRC 04:13
*** Yanyanhu has joined #senlin 04:13
*** elynn has joined #senlin 04:19
*** elynn has quit IRC 04:23
*** elynn has joined #senlin 04:24
*** zzxwill has joined #senlin 04:54
*** dixiaoli_ has joined #senlin 05:30
*** dixiaoli_ has left #senlin 05:30
lixinhui: Qiming, Yanyanhu, what threshold for the incoming rate is sensible in your mind to use as an alarm? 05:51
lixinhui: idonotknow_ what is your idea here 05:52
Yanyanhu: hi, lixinhui, actually that depends on the application type 05:52
lixinhui: all suggestions are welcome 05:52
lixinhui: when referring to an application type 05:52
Yanyanhu: we just used a very small value, e.g. 100 bytes per second, to make the test 05:52
Yanyanhu: if you just want to test the workflow, this is ok I think 05:54
lixinhui: Yanyanhu, when referring to the application type, are you mentioning the average rate for one application or the workload it can afford? 05:54
Yanyanhu: but in real cases, this threshold could vary across different applications, like an http server or an ftp server 05:54
lixinhui: that is why I ask this question 05:54
Yanyanhu: e.g. for an http server, I think this value won't be that large 05:54
Yanyanhu: I also have no experience with this issue :( 05:55
Qiming: it is always a co-design factor regarding your benchmark 05:55
lixinhui: I see, Qiming. 05:55
lixinhui: so I am asking what you do, once a workload is chosen, to derive this threshold 05:55
lixinhui: get the average rate 05:56
lixinhui: half of the max 05:56
Qiming: say your benchmark sends 10 requests per second; it makes no sense to do autoscaling above 20 requests per second 05:56
lixinhui: or some best practice here 05:56
lixinhui: once you mentioned 05:56
Qiming: I heard some opinions about 70% 05:57
lixinhui: your IBM presents some rate-triggered alarms to customers 05:57
lixinhui: could you find more details? 05:57
Qiming: which is already a signal of urgency 05:57
lixinhui: 70% of what? 05:57
lixinhui: the peak bandwidth? 05:57
Yanyanhu: yea, I guess 70% is a common choice 05:57
Qiming: 70% of the total capacity 05:57
Qiming: some users would prefer a lower threshold 05:58
Qiming: so ... you will need to know your capacity limit 05:58
Qiming: which is a per-application metric 05:59
Qiming: you just push, push, ... stress test it, until it cannot produce more output 05:59
lixinhui: so maybe we can get the max rate range per member, then use 70% of the max as the threshold 06:00
Qiming: or the service becomes not so responsive 06:00
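The rule of thumb above (stress-test a node to find its capacity limit, then alarm at roughly 70% of it) can be sketched as follows; the function name, headroom default, and numbers are illustrative, not from the discussion:

```python
# Hypothetical sketch of the 70%-of-capacity rule discussed above.

def scale_out_threshold(peak_rate_bps, headroom=0.70):
    """Return an alarm threshold as a fraction of measured peak capacity.

    peak_rate_bps: highest sustained incoming rate (bytes/s) a single node
    handled during a stress test before becoming unresponsive.
    headroom: fraction of capacity at which the alarm should fire.
    """
    if peak_rate_bps <= 0:
        raise ValueError("peak rate must be positive")
    return peak_rate_bps * headroom

# e.g. a node that topped out at 100 kB/s would alarm at 70 kB/s
print(scale_out_threshold(100_000))  # 70000.0
```

As noted in the discussion, the right headroom is per-application: latency-sensitive services may want a lower value than 0.70.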
Qiming: if you have a LB, you can actually stress the cluster as a whole 06:00
lixinhui: that is another problem 06:00
Qiming: but it would be good to get a per-node metric first 06:00
lixinhui: ceilometer seems to only get incoming/outgoing bytes in total, no 06:01
Qiming: the per-node metric can serve as a baseline 06:01
lixinhui: calculation per member 06:01
lixinhui: we can only set a pre-set threshold 06:01
Qiming: so the metric is already aggregated at the LB side 06:01
lixinhui: based on planning on the side of the pool 06:01
Yanyanhu: hi, lixinhui, you can calculate the rate per member 06:02
Qiming: that is reasonable 06:02
Yanyanhu: just need to extend related support in lbaas 06:02
Yanyanhu: and also configure the pipeline 06:02
lixinhui: okay, Yanyanhu 06:02
Yanyanhu: the doc I gave you before about our extension of lbaas support in ceilometer includes related code and description :) 06:02
lixinhui: that is a good sharing 06:02
Yanyanhu: not very complicated in lbaas v1 06:02
Yanyanhu: but not sure about lbaas v2 06:03
lixinhui: that part included in the doc has been implemented by default ceilometer 06:03
lixinhui: just the per-member part 06:03
lixinhui: I think it is about to use len(pool['members']) 06:04
Yanyanhu: oh, about per member, you need first to confirm the metadata is set correctly 06:04
Yanyanhu: if so, you just need to configure the pipeline 06:04
Yanyanhu:     @staticmethod 06:05
Yanyanhu:     def _get_sample(pool, data): 06:05
Yanyanhu:         res_metadata = { 06:05
Yanyanhu:             'metering': { 06:05
Yanyanhu:                 'pool_id': pool['id'], 06:05
Yanyanhu:                 'member_count': len(pool['members']) 06:05
Yanyanhu:             } 06:05
Yanyanhu:         } 06:05
Yanyanhu:         return make_sample_from_pool( 06:05
Yanyanhu:             pool, 06:05
Yanyanhu:             name='', 06:05
Yanyanhu:             type=sample.TYPE_CUMULATIVE, 06:05
Yanyanhu:             unit='B', 06:05
Yanyanhu:             volume=data.bytes_in, 06:05
Yanyanhu:             resource_metadata=res_metadata 06:05
Yanyanhu:         ) 06:05
Yanyanhu: some code like this 06:05
Yanyanhu: please notice the line "'member_count': len(pool['members'])" 06:05
idonotknow__: I am testing this 06:05
lixinhui: Thanks, Yanyanhu 06:06
lixinhui: Hope we can get them out 06:06
idonotknow__: I can get the member_count through pdb... 06:06
Yanyanhu: then add something like this in the pipeline 06:06
Yanyanhu: idonotknow__, nice :) 06:06
Yanyanhu:     - name: lb_sink_byte 06:06
Yanyanhu:       transformers: 06:06
Yanyanhu:           - name: "rate_of_change" 06:06
Yanyanhu:             parameters: 06:06
Yanyanhu:                 source: 06:06
Yanyanhu:                     map_from: 06:06
Yanyanhu:                         name: "\\.(incoming|outgoing).bytes" 06:06
Yanyanhu:                         unit: "B" 06:06
Yanyanhu:                 target: 06:06
Yanyanhu:                     map_to: 06:06
Yanyanhu:                         name: "\\1.bytes.rate" 06:06
Yanyanhu:                         unit: "B/s" 06:06
Yanyanhu:                     type: "gauge" 06:06
Yanyanhu:                     scale: "1.0 / (resource_metadata.metering.member_count or 1)" 06:07
Yanyanhu:       publishers: 06:07
Yanyanhu:           - notifier:// 06:07
Yanyanhu: this is an example for bytes; for connections, I think it's the same logic 06:07
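What that pipeline does can be sketched in a few lines: the `rate_of_change` transformer turns the cumulative pool byte counter into a rate over the sampling interval, and the `scale` expression divides it by `member_count` to get a per-member rate. This is illustrative logic only, not ceilometer's actual implementation:

```python
# Illustrative sketch (not ceilometer's code) of the rate_of_change
# transformer combined with the per-member scale expression above.

def per_member_rate(prev_bytes, prev_ts, cur_bytes, cur_ts, member_count):
    """Per-member byte rate from two cumulative pool-level samples."""
    # delta of the cumulative counter over the interval -> pool-wide B/s
    pool_rate = (cur_bytes - prev_bytes) / (cur_ts - prev_ts)
    # scale: "1.0 / (member_count or 1)" -> B/s per member
    return pool_rate / (member_count or 1)

# pool counter grew 12000 B over 60 s, spread over 2 members -> 100 B/s each
print(per_member_rate(50_000, 0, 62_000, 60, 2))  # 100.0
```

The `or 1` mirrors the guard in the pipeline's scale expression, so an empty pool does not divide by zero.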
idonotknow__: so we just do not consider inactive ones in the bytes case? 06:08
Yanyanhu: I guess so 06:08
Yanyanhu: but it really depends on your strategy, since inactive ones actually make no contribution to the entire capacity of your system 06:09
idonotknow__: it will trigger another alarm to recover or just delete 06:10
idonotknow__: they should be separate from each other, because I cannot get their number as well as their status at the same time now 06:12
lixinhui: I think for the scaling scenario, we do not consider the inactive member case 06:12
lixinhui: the recovering case is different from it 06:13
idonotknow__: OK, I will just assume they are all active 06:13
lixinhui: that needs more consideration and additional work 06:13
idonotknow__: it works using that code... but there is a problem... when I delete a member from a pool, the incoming rate will be a negative number at one moment 06:27
idonotknow__: {"counter_name": "", "user_id": null, "message_signature": "16b49dc9dbc331985242de02071881c7fd8ca30ba935323eb2bab60979d03ea2", "timestamp": "2016-03-07T06:24:41.493311", "resource_id": "b2c8c946-f67c-4607-b1aa-131225b8ed22", "message_id": "46ac7ac6-e42d-11e5-a0e2-f80f41fd1e94", "source": "openstack", "counter_unit": "B/s", "counter_volume": -47499.39104240597, "project_id": "1b9b9920a8284f 06:27
idonotknow__: just like this 06:27
Yanyanhu: hmm, didn't notice this problem before... 06:29
Yanyanhu: maybe some bugs in ceilometer's lbaas support 06:30
Yanyanhu: I remember all these metrics are read from haproxy's stats socket 06:30
idonotknow__: it happens as soon as the pool's members change 06:31
*** idonotknow_ has joined #senlin 06:33
Yanyanhu: but I think that will be ok, since once you remove a member from the pool, it has no relationship with the loadbalancer you're evaluating 06:35
*** idonotknow__ has quit IRC 06:35
Yanyanhu: it won't influence the bytes_in/out of the lb pool any more 06:36
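The one-off negative sample reported above can be reproduced with a toy calculation: the meter is cumulative (`TYPE_CUMULATIVE` in the snippet earlier), so when a member is removed the pool's total byte counter can drop, and a delta-based rate goes negative for exactly one interval. The values below are made up for illustration:

```python
# Toy reproduction of the negative-rate glitch: a cumulative counter that
# drops (e.g. after a member is deleted) yields one negative rate sample.

def rate(prev_bytes, cur_bytes, interval_s):
    """Naive delta-based rate, like a rate_of_change over one interval."""
    return (cur_bytes - prev_bytes) / interval_s

# cumulative pool byte counter, sampled every 60 s;
# the drop from 12000 to 9000 models a member being removed
samples = [0, 6_000, 12_000, 9_000, 15_000]
rates = [rate(a, b, 60) for a, b in zip(samples, samples[1:])]
print(rates)  # [100.0, 100.0, -50.0, 100.0]
```

A consumer that wants to mask the glitch could clamp negative rates to zero, at the cost of under-reporting for that interval.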
idonotknow_: if it's just once every time, it may be OK 06:39
Qiming: one question 06:45
Qiming: the LBaaS (v1 or v2) only cares about requests, right? 06:45
Qiming: it is not able to do load balancing based on CPU consumption on backend nodes? 06:45
Yanyanhu: yes, I think so 06:46
idonotknow_: I only see it is based on network 06:47
Qiming: because the LB is based on technologies like HAProxy 06:51
Qiming: they won't have any idea how many resources are actually consumed at the backend 06:51
Qiming: it only cares about redirecting different requests to different machines 06:51
Qiming: so ... there is a huge gap here 06:52
Qiming: it only makes sense when you are running a single-purposed application which exposes a single API 06:52
Qiming: but that will never be the case 06:53
Qiming: if you have requests like an 80-20 mix of browsing and buying, the workload is never "evenly" distributed onto the backend nodes, so ... it is not strict "load-balancing" 06:54
idonotknow_: hmmm, I have no idea now.. 06:56
Qiming: it is perfectly okay to start from the common practices 06:56
Qiming: I'm talking about a much broader (more unrealistic) load-balancing 06:57
*** zzxwill has quit IRC 07:01
-openstackstatus- NOTICE: gerrit is going to be restarted due to bad performance 07:25
*** ChanServ changes topic to "gerrit is going to be restarted due to bad performance" 07:25
*** ChanServ changes topic to "IRCLog: | Bugs: | Review:,n,z" 07:28
*** idonotknow__ has joined #senlin 07:57
*** idonotknow_ has quit IRC 07:59
*** elynn has quit IRC 08:34
openstackgerrit: Yanyan Hu proposed openstack/senlin: Add property updatable check in Senlin profile
*** elynn has joined #senlin 08:50
*** elynn has quit IRC 08:54
*** elynn has joined #senlin 08:55
Qiming: guys, there is a meeting on openstack-meeting, about HA 09:06
Yanyanhu: openstack HA? 09:07
Qiming: you can sneak in if you are interested 09:07
Qiming: no, HA in general 09:07
Qiming: current topic is about compute node HA 09:07
Yanyanhu: have joined that channel 09:07
*** elynn has quit IRC 09:08
*** elynn has joined #senlin 09:12
openstackgerrit: Yanyan Hu proposed openstack/senlin: Add property updatable check in Senlin profile
*** elynn has quit IRC 09:17
*** elynn has joined #senlin 09:18
*** shu-mutou is now known as shu-mutou-OFF 09:21
*** elynn has quit IRC 09:24
*** elynn has joined #senlin 09:38
*** elynn has quit IRC 09:42
*** Yanyanhu has quit IRC 09:42
*** elynn has joined #senlin 09:42
*** elynn has quit IRC 09:51
Qiming: lixinhui, there is a channel welcoming you -- openstack-ha 09:57
*** Qiming has quit IRC 10:16
*** zzxwill has joined #senlin 10:19
*** elynn has joined #senlin 10:21
*** elynn has quit IRC 10:26
*** elynn has joined #senlin 10:26
*** elynn_ has joined #senlin 10:38
*** elynn has quit IRC 10:39
*** zzxwill has quit IRC 10:56
*** elynn_ has quit IRC 10:59
*** idonotknow__ has quit IRC 10:59
*** Qiming has joined #senlin 11:05
lixinhui: Okay, Qiming, I will turn on that channel. Thanks. 12:04
*** zzxwill has joined #senlin 12:52
*** zzxwill has quit IRC 14:16
*** zzxwill has joined #senlin 14:24
*** Qiming has quit IRC 15:36
*** zzxwill has quit IRC 17:15
*** xuhaiwei has joined #senlin 23:29
*** Qiming has joined #senlin 23:34

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt/irclog2html/!