20:00:05 #startmeeting Octavia
20:00:05 Meeting started Wed Feb 17 20:00:05 2016 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 Howdy, howdy!
20:00:08 The meeting name has been set to 'octavia'
20:00:12 * mhayden stumbles in
20:00:13 Hi folks
20:00:20 hi
20:00:22 hello
20:00:26 \o/
20:00:27 o/
20:00:29 o/
20:00:38 Let's roll as we have a long agenda today
20:00:39 o/
20:00:40 #topic Announcements
20:00:55 L7 stuff is under review
20:00:57 #link https://etherpad.openstack.org/p/lbaas-l7-todo-list
20:01:05 and under assault
20:01:09 Please try it out and review the patches.
20:01:10 Haha
20:01:18 \o/
20:01:23 o/
20:01:24 Also, we need more attention on the Neutron-LBaaS side of things.
20:01:26 Well, yes, but that has happened to all of us, so I wasn't going to rub more salt.
20:01:38 We have a couple reviews there for shared pools that have sat without reviews all last week.
20:01:43 i like salt
20:01:48 Today is the last day to vote for summit sessions
20:01:58 #link https://etherpad.openstack.org/p/Austin-LBaaS-talks
20:02:12 o/
20:02:13 o/
20:02:16 This link has the LBaaS-related talks that I am aware of; add as needed
20:02:36 hi
20:02:40 #topic Mitaka blueprints/rfes/m-3 bugs for neutron-lbaas and octavia
20:02:58 dougwig Any items you want to hit before I bring up Octavia bugs?
20:03:09 johnsom: no, go for it
20:03:44 Ok, I went through the Octavia bugs yesterday and tagged the ones I think we should try to get done for Mitaka
20:03:46 #link https://bugs.launchpad.net/octavia/+bugs?field.tag=target-mitaka
20:03:52 Nice!
20:04:24 I'm open for comment on the list. I'm extra open to people assigning these to themselves and working on them!
20:04:59 Ok, I'll start chewing through that list as I wait for feedback on the L7 stuff.
20:05:04 I have a review out for 1489963 that I probably won't get to any time soon
20:05:22 I'll let folks have time to look at these and we can review next week what is open, comments, etc. Or bring them up in the channel
20:05:23 It requires restarting the service, which doesn't seem to work at all
20:05:54 johnsom: Sounds good.
20:06:08 Yeah, that service restart issue is on the list below as well. xgerman is the current owner.
20:06:13 Would love an update
20:06:19 :)
20:06:23 :-)
20:06:53 #link https://bugs.launchpad.net/octavia/+bug/1496628
20:06:53 Launchpad bug 1496628 in octavia "Amphora agent reload fails with socket in use" [High,In progress] - Assigned to German Eichberger (german-eichberger)
20:07:03 yeah, need to find time to look/fix
20:07:25 Ok. If you think you can work on it great, otherwise we should take you off as owner
20:07:34 #action xgerman to work on https://bugs.launchpad.net/octavia/+bug/1496628
20:08:01 I wouldn’t put myself into a critical path
20:08:07 #topic Tempest-Plugin for Octavia tempest tests
20:08:15 Madhu you have the floor
20:08:19 if we figure that out then my review should start working. I added the config and code to bind to proper ip
20:08:25 would like to see tempest integration with octavia tempest tests. Currently in lbaas, it is hardcoding the test discovery for tempest tests, which I feel is not the right thing to do. It can be implemented easily as per: http://docs.openstack.org/developer/tempest/plugin.html
20:08:44 I can push a dummy patch in octavia for tempest plugin. Depending on the patch, I can create a job for the same without touching gate hooks.
20:09:12 madhu_ak have you talked with fnaval about this at all?
20:09:18 fnaval: Does that sound good to you?
20:09:48 nope TrevorV maybe we need to talk about this and keep going ?
20:10:03 using the plugin sounds like a good idea overall
20:10:05 i've barely seen how the plugin works - do other projects use this as well
20:10:22 i'm worried that we may be the early adopters of it, and thus have issues with it
20:10:23 i don't know if there are any issues with it though
20:10:36 We can spend a little time here if everyone is present. I know we talked about this at the mid-cycle and we need to keep moving on tests for Octavia
20:10:48 yes. For VPN, it is in review. For FW, it is already implemented.
20:10:59 did the lbaas tests ever get converted to being a tempest plugin?
20:11:06 Agreed-- I am looking forward to having real scenario tests for Octavia. :)
20:11:12 sadly no
20:11:52 the neutron-lbaas tests have not become a plugin; I was unaware that it was a requirement
20:12:36 dougwig Is this a requirement or nice to have? Is it something that is high priority?
20:12:46 madhu_ak I think I'm missing something. fnaval has a review with octavia tempest tests already, is there some reason those are insufficient?
20:12:48 we have a scenario test in review that does not implement the plugin interface; but we can definitely look into that as a later refactor?
20:13:11 yep. the current tempest tests for octavia will not be disturbed at all.
20:13:31 Would using the plugin mean we have to copy less of the tempest tree into Octavia?
20:13:45 (ie. helps us avoid code that will surely become crufty in Octavia?)
20:13:51 it is just for running the tests as a part of the gate job
20:14:46 johnsom: tempest team is waiting on us to undo the mess of cloned tempest, but that mostly affects us being broken.
20:14:50 Okay, what's the idea here. We don't want tempest testing in tree? Is that what this is supposed to fix?
20:15:11 the tests should still be in-tree; what i'm missing is what does the plugin accomplish? i guess if we were testing internal implementations, it would be easy to plug in different test cases
20:15:20 since upstream would differ from down
20:15:34 it's for test discovery
20:15:38 nope. We can have tempest tests in our tree. Rather than hardcoding the test path in gate hooks, it's best to have a tempest plugin that will discover the test path automatically
20:15:38 i mentioned this at the midcycle
20:15:43 fnaval: we currently have half of tempest cloned into neutron-lbaas, interfacing with tempest-lib. it's very brittle.
20:16:15 dougwig +1
20:16:36 so, the octavia tests are using tempest_lib; as I understand it now, there is a request to also use tempest plugins
20:16:41 the current octavia tempest test is more like a combination of tempest and tempest-lib
20:16:52 +1
20:17:04 having the plugin stuff done is a good idea IMO
20:17:14 fnaval: has put quite a bit of work in from where min left off
20:17:21 im not sure thats the case anymore minwang2
20:17:23 Yeah.
20:17:24 i just wasn't sure of the work effort involved -- if madhu_ak knows how to do it, and can easily do so, i say go for it
20:17:40 my latest understanding is that tempest-lib is going away in favor of 'tempest' being in pypi, btw. so plugins are the future.
20:17:43 we'll merge what makes sense
20:17:55 yep. lets go for it.
20:17:58 +1
20:18:02 rm_work I'm not sure what we're talking about changing... Is it like adding a line above each test or a header in each file that allows tempest to magically pick up what tests it's supposed to run?
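[Context: the plugin interface linked above works through a setuptools entry point rather than a marker on each test, so tempest discovers the tests itself instead of gate hooks hardcoding paths. A minimal sketch of what an Octavia plugin might look like; the module path, class name, and entry point here are illustrative, not something decided in this meeting:]

    # octavia/tests/tempest/plugin.py (hypothetical location)
    #
    # Registered in setup.cfg via a hypothetical entry point:
    #   tempest.test_plugins =
    #       octavia = octavia.tests.tempest.plugin:OctaviaTempestPlugin
    import os

    from tempest.test_discover import plugins


    class OctaviaTempestPlugin(plugins.TempestPlugin):
        """Tell tempest where the in-tree Octavia tests live."""

        def load_tests(self):
            # Directory holding the test modules.
            full_test_dir = os.path.dirname(os.path.abspath(__file__))
            # Repo root (the directory containing the 'octavia' package),
            # which tempest needs on sys.path to import the tests.
            base_path = os.path.abspath(
                os.path.join(full_test_dir, '..', '..', '..'))
            return full_test_dir, base_path

        def register_opts(self, conf):
            # Octavia-specific tempest config options would be registered
            # here; none are needed for bare discovery.
            pass

        def get_opt_lists(self):
            return []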
20:18:08 if you can do it with neutron-lbaas or direct me to an example implementation using Tempest plugins with FWaaS, I can research it
20:18:08 dougwig: but tempest-lib was the future
20:18:11 madhu_ak:
20:18:15 it's kinda like how devstack-plugin works
20:18:21 dougwig: who's to say tempest in pypi isn't an alternate future as well
20:18:25 it just changes how the gate scripts are written to simplify stuff
20:18:28 it's openstack, the future changes before it arrives.
20:18:37 yeah, there is a reason I was advocating rally
20:18:44 lol
20:18:48 >_<
20:18:52 sure, fnaval I shall post some links to keep moving forward
20:19:06 I am just looking for the alternate future where we have good scenario test coverage and it's clear for people how to write new tests.
20:19:14 madhu_ak: thanks. any help is appreciated
20:19:15 +1
20:19:26 johnsom: that is the future that shall not exist
20:19:26 * johnsom throws rally eraser at xgerman
20:19:28 tempest-lib mostly overrides a lot of the methods in tempest; in octavia some of the methods cannot be called directly from tempest-lib, that’s why we can see the combination of tempest and tempest-lib
20:19:49 haha
20:19:53 what is up for review totally works now, but if we have the time to refactor it again, I can definitely do that. but I strongly think it should be a future task.
20:20:05 +1 fnaval
20:20:19 That sounds reasonable to me.
20:20:30 yeah
20:20:37 however the tempest_plugin implementation will in no way affect fnaval's patch
20:20:48 Ok, so the plan for Mitaka is move forward with the work fnaval has been doing (Thank you!) and for Newton look at the tempest plugin?
20:20:57 cool madhu_ak
20:21:05 this is openstack, we should -2 it because someone might someday do it differently.
20:21:14 +1
20:21:28 Haha!
20:22:00 dougwig Let's vote on the -2
20:22:01 johnsom: it's not really a big deal, we can get it in now probably if it's ready
20:22:09 it doesn't affect the actual test code
20:22:17 +1 rm_work
20:22:23 they're ... really fairly unrelated as much as that's possible
20:22:25 Ok, just trying to understand what was decided. So, parallel development?
20:22:34 Ok, got it.
20:22:53 who has the action?
20:22:55 how would it be better for us to review, how much workload for this plugin madhu_ak
20:23:16 #agreed Move forward with current scenario tests, in parallel madhu_ak will implement tempest plugin for Octavia
20:23:18 plugin implementation can take a day or two to push a patch
20:23:27 agreed
20:23:38 cool
20:23:38 #action made_ak add plugin code
20:23:41 Nice.
20:23:50 cool - please involve me with that madhu_ak as I want to know more about how that works
20:23:56 xgerman mispelling names since 2014
20:24:03 #topic Octavia models
20:24:03 sure fnaval
20:24:07 Haha!
20:24:09 thanks madhu_ak
20:24:11 lol
20:24:11 heh
20:24:11 Ok, I will start, RED
20:24:39 No, just kidding. So we had a loooooonnnnggg discussion about the models Friday. Monday I think they got fixed.
20:24:48 Is there more we need to discuss here?
20:24:59 Not until after Mitaka, I think.
20:25:07 assuming the fix works :)
20:25:12 So long as L7 doesn't get blocked because of it. ;)
20:25:12 maybe discuss the possibility of moving away from the static data models, to just using the SA models throughout
20:25:19 can we get some ERD diagram
20:25:20 ?
20:25:23 Ok, yeah, there was some talk of moving to sqlalchemy models. Definitely post Mitaka
20:25:28 yep
20:25:43 we should vote on when to discuss voting.
20:25:50 Haha
20:25:57 dougwig don't tempt me
20:26:09 well, I'd like to have some more architecture docs —
20:26:30 Ok, so we are cool on the models and order for the Mitaka release. My testing so far today looks good.
20:26:58 Probably a few more little bugs (need to check the peer ports bana_k mentioned), but we should be able to fix those.
20:26:59 Yay!
20:27:24 #topic Ideas for other options for the single management network (blogan)
20:27:35 blogan You have the floor
20:27:54 ssh driver!
20:28:03 dougwig: Almost dead!
20:28:07 is dougwig a bot?
20:28:10 dougwig It's dead!!!! sbalukoff is an over-achiever
20:28:24 dougwig got up early today, so he closely approximates a bot, but less useful.
20:28:34 * mhayden finds this topic intriguing ;)
20:28:35 Well, I haven't reviewed yet, but it warmed my heart to see the patch
20:28:51 oh god sorry
20:28:53 im back
20:28:56 johnsom: I'm probably missing something important, because it was way too easy to kill. :/
20:29:13 * dougwig mutters about over-complicated over-engineering, and picks up his red stapler.
20:29:18 blogan You have the floor for your mgmt-net topic
20:29:21 No more ssh driver makes me happy after seeing how it did its business...
20:29:32 so originally the single mgmt net was a solution that either meant deployers use a provider network for it, or we come up with something better in the future
20:29:43 sbalukoff: I commented on your ssh-driver removal patch
20:29:45 well we should start coming up with an optional better way to do it
20:29:56 blogan: Agreed!
20:30:45 * rm_work still likes the ssh-driver <_<
20:30:53 one way is to just create a mgmt net for every lb and connect all controllers to this mgmt net; that has scaling issues but can be solved with more complicated clustering
20:30:56 * sbalukoff beats rm_work with a stick.
20:31:21 blogan: What's the problem we're trying to solve by making that change?
20:31:28 The idea I have had is to have one or more controller networks that are routable with many amphora mgmt networks, likely managed by housekeeping.
20:31:34 blogan: makes me wonder if we could use IPv6 to help with the scalability somehow
20:31:44 another is what johnsom suggested yesterday, in that we have a single controller network that every controller is plugged into, a router is also plugged into it, and each lb has its own network plugged into the router
20:31:51 mhayden Yes, IPv6 is good
20:31:52 mhayden: +1
20:31:58 +1
20:32:02 a /64 subnet would serve a LOT of LB's :)
20:32:19 * mhayden skips the math
20:32:19 well, I think we just support a list of management networks and leave it to the operator to set that up
20:32:23 blogan LB, tenant, or some number of amphora
20:32:32 mhayden: the scale problem for option 1 is the number of ports is multiplied by the number of controllers, give or take a few
20:32:37 I like johnsom's idea. Means the controller doesn't have to re-bind when a new loadbalancer is deployed.
20:32:53 blogan: ah, do we have a neutron limitation when we have too many ports on a particular subnet/network?
20:33:07 mhayden: in our public cloud yes :)
20:33:08 I think we should leave it to the operator and just support a way for them to tell us where to plug
20:33:09 * johnsom notes sbalukoff likes my idea in the history book
20:33:15 we have a limit per network, and a global limit
20:33:15 Haha!
20:33:31 what if an LB only had connectivity in the tenant's network?
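[Context: a rough sketch of the port-count concern just raised; all numbers are invented for illustration and are not from any real deployment:]

    # Back-of-the-envelope comparison of the two ideas above,
    # with made-up figures.
    num_lbs = 10000          # load balancers, one amphora each
    num_controllers = 10     # octavia controllers

    # Idea 1: a mgmt net per LB with every controller plugged into each.
    # Each LB network carries the amphora port plus one port per
    # controller, so total ports scale as LBs x controllers.
    idea1_ports = num_lbs * (1 + num_controllers)           # 110,000

    # Idea 2 (johnsom's): one shared controller network behind a router;
    # each LB network carries only its amphora port plus one router
    # interface, so controllers stop multiplying the port count.
    idea2_ports = num_controllers + 1 + num_lbs * (1 + 1)   # 20,011

    print(idea1_ports, idea2_ports)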
20:33:37 Seems like I'll be in the minority, but I'd prefer to not have a management "network" at all. Management in Neutron across basically every other service is out-of-band, on purpose.
20:33:40 we'd have issues reaching out to it from wherever octavia is running (possibly)
20:33:48 Some folks have mentioned numbers as low as ~250 ports per network.
20:34:19 Apsu how would you connect to a LB for maintenance tasks or something similar? Over the customer network?
20:34:20 Apsu: yeah another option is to realize a way to meet our requirements without a mgmt network, if it's possible
20:34:51 well, can we agree that having only one management net is limiting?
20:34:54 perhaps octavia could have an agent/worker of some sort that sits on the tenant network? as routers do today
20:35:06 xgerman: a pool of mgmt networks, how would that be different than a mgmt network for each lb
20:35:06 TrevorV: Well there's a few options for getting data in/out without having to couple the data and control plane networks. Metadata, mountpoints, agent/worker on tenant (per mhayden), etc.
20:35:07 xgerman: +1
20:35:17 xgerman: yes we can if it's not a provider network
20:35:57 if we put the LB *only* on the tenant network, we would be eating additional IP addresses in the tenant network to support the LB, which is annoying :/
20:36:15 So, this is a Newton project. How do you all want to start working on it? An etherpad for ideas?
20:36:23 mhayden isn't that also the case with an agent/worker on their network?
20:36:29 mhayden: LB in the tenant network probably wouldn't happen with active-active.
20:36:30 mhayden: Yep. Another reason I don't like having a network for management whatsoever, insofar as "network" means "something neutron/nova configure out of the cloud's resources"
20:36:35 TrevorV: indeed it is :/
20:36:37 johnsom sounds good
20:37:03 we had discussed putting an agent on the *VM host*, don't know where that went or if it's feasible in real deployments (might not be)
20:37:04 Apsu: i don't see that as a show-stopper. We're consuming resources on the cloud by launching amphorae.
20:37:09 or another idea is to build a couple of LB VMs per tenant and put containers/namespaces in that VM
20:37:17 but we could be putting a lot of eggs into one basket
20:37:26 yeah, not a huge fan of that
20:37:29 Same
20:37:40 Yep.
20:37:42 sbalukoff: Sure. Personally I'd prefer to see a namespace driver, but that's probably also a minority opinion.
20:37:55 On the plus side, it gives you the OOB control for free, given shared process namespace.
20:37:55 Apsu: Namespace driver doesn't scale.
20:38:05 rm_work the agent per VM host still communicates over a network we built for the Octavia service, which still counts as a "management network" of sorts
20:38:08 The whole *point* of Octavia is scale.
20:38:45 sbalukoff: Well, it's not as automatic to scale if you equate namespaces with "don't run on compute nodes", sure
20:38:52 i don't think we should spend any more time on the namespace driver. it's not the ref. if someone wants to take it on, fine.
20:38:59 But nothing prevents you from using network namespaces on compute nodes, without actually running in VMs.
20:39:03 dougwig +1
20:39:18 Ok, let's collect ideas in an etherpad
20:39:22 #link https://etherpad.openstack.org/p/Octavia_Mgmt_Net_Ideas
20:39:45 Apsu: I think that's really far from the direction we were planning on taking with Octavia. Far enough to be its own load balancer project, perhaps.
20:39:55 could we formally deprecate the namespace driver in mitaka/newton for lbaasv2?
20:40:14 that has been controversial since it has its fans
20:40:27 mhayden It would probably need to be O since it is used
20:40:28 sbalukoff: Fair enough. I'll leave that alone. I still think OOB comms is a worthwhile pursuit
20:40:35 mhayden: I would like to; I think we need to make sure that Octavia deployment is simpler and performance is definitely at or better than namespace to do that.
20:40:53 i'd vote to deprecate and let someone maintain in a separate repo if they want.
20:41:02 dougwig +1
20:41:05 dougwig: +1
20:41:20 so #startvote
20:41:28 i still kind of like having it as a simpler driver
20:41:31 i'm happy to help with some octavia docs, but i'll need some help on that to get the concepts right
20:41:42 Sounds like another agenda topic to cover at the next meeting? I would like to advertise that one before a vote.
20:41:49 mhayden: I'll do my best to help you there.
20:41:50 ha
20:41:51 Please ping me.
20:41:52 with new features, right now we have to implement it in octavia and neutron-lbaas, which sucks
20:42:04 #topic Do we need an admin tenant? (blogan)
20:42:35 Please update the etherpad with mgmt-net ideas. Moving on to blogan's next grenade
20:42:49 Haha! Ok.
20:42:51 yeah so neutron has rbac now, which allows a user to say this tenant can create ports on my network
20:42:53 basically
20:42:57 to be fair, i pushed blogan to throw these grenades :P
20:43:02 * blogan throws grenade at johnsom
20:43:38 mhayden: well they're good topics anyway, the mgmt net needs improving, and if people are averse to having an admin account then it should be noted at least
20:43:49 Sounds like the ACL fun we are having with barbican
20:44:08 Yep.
20:44:16 johnsom: yep, however, we could just use an admin account to only set the rbac policy to allow a normal octavia user to do it
20:44:21 Yeah, BBQ is a time sink for me
20:44:27 yeah I don't think ACLs are working properly in consumers still
20:44:28 so it lessens the admin calls, but still would require an admin account
20:44:38 blogan: we don't gate on it or test it *at all*. that sounds deprecated to me.
20:44:48 someone needs to poke at some barbican devs, i haven't had time/priority to look at it
20:44:54 i had a patch up a while back
20:45:08 * blogan hops in his time machine to reply to dougwig's comment
20:45:28 then there are nova quotas which require an admin account to be changed
20:45:58 xgerman: true, so there's probably a lot of little gotchas like that
20:46:05 yep
20:46:11 but in that case, that should be the deployer who ups those quotas for the octavia user
20:46:21 and this is where I have spent some of my time recently ;-)
20:46:32 but octavia will automatically do security group rules, which may require admin access, i honestly can't remember
20:46:43 yep, we do
20:46:52 blogan If we still need an admin account, what is it really buying us?
20:46:57 Okay so the answer is, "yes we need an admin tenant"
20:46:59 Is that it?
20:47:17 well, you could make roles in keystone
20:47:25 there is admin stuff we don’t need
20:47:33 johnsom: the point of this discussion is if there's a way to not have an admin account we should use it, and rbac seemed like a possibility but sounds like there's a lot of gotchas on it
20:47:42 We can have a different tenant.. but admin role to do all the above
20:48:03 Role membership seems like a reasonable middle ground. Least privilege required and such
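[Context: a minimal sketch of the neutron RBAC mechanism blogan mentions above, using python-neutronclient; the credentials, network UUID, and tenant ID are placeholders, and this is an illustration of the idea rather than anything adopted in the meeting:]

    # Sketch: an admin account grants a normal (non-admin) octavia
    # service user access to the management network, so day-to-day
    # port creation no longer needs admin credentials.
    from neutronclient.v2_0 import client

    MGMT_NET_ID = 'REPLACE-WITH-NETWORK-UUID'   # placeholder
    OCTAVIA_TENANT = 'REPLACE-WITH-TENANT-ID'   # placeholder

    neutron = client.Client(username='admin', password='...',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    # Share the mgmt network with just the octavia tenant, instead of
    # making it globally shared or requiring admin for every port call.
    neutron.create_rbac_policy({'rbac_policy': {
        'object_type': 'network',
        'object_id': MGMT_NET_ID,
        'action': 'access_as_shared',
        'target_tenant': OCTAVIA_TENANT,
    }})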
20:48:11 Is this because permissions on "admin" stuff aren't granular enough?
20:48:54 could be, and if that's the case then the roles and policy can be done by deployers
20:49:05 and this is a moot subject
20:49:15 if that's the solution
20:49:15 A better question for me is, why don't we want an admin tenant if we can use it?
20:49:32 if octavia gets compromised...
20:49:48 the more admin accounts, the greater the attack vectors
20:49:50 TrevorV: "Least privilege" is a basic security requirement in most places.
20:49:52 disable octavia service account?
20:49:52 ^
20:50:30 blogan Newton time frame?
20:50:48 johnsom: +1
20:50:51 johnsom: or it's a deployer problem
20:50:53 or is this something that is more urgent for folks?
20:50:58 johnsom: but yeah never intended to be for Mitaka
20:51:04 Ok
20:51:09 would require too much work
20:51:42 deployer problem
20:51:46 I think the unique tenant/account with roles might be a good way to go. I'm not sure about the RBAC stuff as I haven't really looked at it.
20:52:15 #topic LBaaS tests with OVN in the gate?
20:52:27 Someone added this one to the agenda
20:53:11 Anyone claim it? dougwig? You seem to like more gates....
20:53:36 any context on this topic?
20:53:54 Just the line someone added to the agenda. First I have heard of it
20:53:55 i have no experience with OVN mitts
20:54:14 Huh.
20:54:26 * mhayden gets it
20:54:29 lol
20:54:31 ovn = oven
20:54:35 Ok, if nobody claims it, I will declare it a drive-by OVN
20:54:40 Haha!
20:54:41 * mhayden resists posting ascii-art
20:54:48 Drive-by baking
20:54:49 #topic Open Discussion
20:55:15 xgerman: What was the name of that distro to look into to try to cut down on amphora image size?
20:55:17 Alpine?
20:55:24 What's its license?
20:55:26 yep, alpine
20:55:41 could always go with something like RancherOS and containerize ;)
20:55:42 sbalukoff: What distro is it based on currently?
20:55:46 #link http://www.alpinelinux.org
20:55:48 Apsu: Ubuntu.
20:55:51 Ok.
20:55:54 I'll have a look.
20:55:55 sbalukoff: Have you seen Ubuntu Core?
20:56:03 yeah, it’s on my list
20:56:07 or CoreOS? :)
20:56:07 alpine
20:56:17 mhayden: Eventually we do want containers. But it sounds like that's going to take some work.
20:56:22 could go with red star os too
20:56:30 Or clearOS
20:56:45 I'd just like to get to a VM that boots a lot quicker even with vmx.
20:56:55 well, one negative from the last lab was memory consumption — I hope alpine can help
20:56:59 sbalukoff: Would probably be the least movement from the current image, and their goal is tiny footprint. It's the basis for the new transactional pkg mgmt they're working on, too
20:57:00 Yep.
20:57:08 But seems like it might be a good candidate
20:57:23 Apsu: Thanks for the recommendation, eh!
20:57:29 Oh, I guess it's clear linux, not clearos. The distro Intel was pitching in Vancouver
20:57:32 https://wiki.ubuntu.com/Core yep
20:57:45 yeah, I wish that diskimage-builder had some good tiny ones built in
20:58:03 Also: does anyone here know enough about virtualization tech to know whether using a 32-bit image would actually gain us anything?
20:58:15 (As far as resource footprint)
20:58:23 I tried the ubuntu core DIB element about six months ago. It was broken.
20:58:43 (since I think it's unlikely we'll need to support >4GB haproxy processes for the foreseeable future...)
20:59:02 johnsom: aah, good to know.
20:59:03 sbalukoff 32bit would get us a smaller on-disk footprint.
20:59:18 johnsom: But on a 64-bit host, doesn't save us ram?
20:59:29 It wouldn't on modern processors, tmk. The virt engines actually work harder to run 32-bit instructions because (afaik) they use 64-bit for all memory addressing regardless of instruction size and class.
20:59:39 Ok.
20:59:45 So, don't bother with 32-bit. :/
20:59:48 So it's a conversion on 32-bit instructions. I could be wrong, but that's how I understand it.
20:59:50 sbalukoff probably.
21:00:10 I'll ping Dustin, as he keeps up on this far better than I...
21:00:14 Ubuntu Core could be worth looking at again. It just wasn't working last time I tried it.
21:00:33 johnsom: Ok!
21:00:56 I suspect talking with Canonical and mentioning the intended use of Core would probably get some hands to help make it right from their side, if it's broken
21:01:02 * Apsu shrugs
21:01:10 I don't see a DIB element for alpine yet, so might be some work.
21:01:15 Ok, that is time.
21:01:18 Yeah.
21:01:19 #endmeeting