Tuesday, 2015-01-06

*** ametts has quit IRC00:03
*** echevemaster has quit IRC00:16
*** amitgandhinz has joined #openstack-zaqar00:43
*** amitgandhinz has quit IRC00:48
*** achanda has joined #openstack-zaqar00:50
*** achanda has quit IRC00:55
*** achanda has joined #openstack-zaqar00:56
*** mpanetta has joined #openstack-zaqar00:57
*** mpanetta has joined #openstack-zaqar00:58
*** achanda has quit IRC01:01
*** kgriffs is now known as kgriffs|afk01:04
*** kgriffs|afk is now known as kgriffs01:05
*** JAHoagie has quit IRC01:07
*** kgriffs is now known as kgriffs|afk01:15
*** amalagon has quit IRC01:21
*** amalagon has joined #openstack-zaqar01:22
*** amalagon has quit IRC01:26
*** cpallares has quit IRC01:53
*** amitgandhinz has joined #openstack-zaqar01:53
*** amitgandhinz has quit IRC01:54
*** amitgandhinz has joined #openstack-zaqar01:54
*** exploreshaifali has quit IRC02:06
*** amitgandhinz has quit IRC02:13
*** mpanetta has quit IRC02:28
*** amitgandhinz has joined #openstack-zaqar02:31
*** vkmc has quit IRC03:04
*** amitgandhinz has quit IRC03:07
*** amalagon has joined #openstack-zaqar04:27
*** flwang1 has quit IRC04:35
*** JAHoagie has joined #openstack-zaqar04:58
*** JAHoagie has quit IRC05:51
*** reed has quit IRC06:54
<openstackgerrit> Zhi Yan Liu proposed openstack/zaqar: Integrate OSprofiler with Zaqar  https://review.openstack.org/141356  08:34
*** bradjones has quit IRC11:28
*** bradjones_ has joined #openstack-zaqar11:29
*** bradjones_ is now known as bradjones11:29
*** bradjones has joined #openstack-zaqar11:29
*** exploreshaifali has joined #openstack-zaqar11:29
*** vkmc has joined #openstack-zaqar11:46
*** vkmc has quit IRC11:46
*** vkmc has joined #openstack-zaqar11:46
<vkmc> morning o/  11:48
<exploreshaifali> vkmc: good morning \o  11:51
<exploreshaifali> what's going on?  11:51
<vkmc> exploreshaifali, hey there!  11:52
<vkmc> all good and you?  11:52
<exploreshaifali> yeah good, me too :)  11:53
<vkmc> could you make the server run?  11:54
<exploreshaifali> no.... tbh I don't know what should be done next  11:54
<exploreshaifali> I tried a few guesses but all were worthless  11:55
<exploreshaifali> vkmc: should I try to change the method name and replace it everywhere  11:55
<exploreshaifali> and then try to run the server  11:56
<exploreshaifali> method name -- something other than queue_controller  11:56
<exploreshaifali> but I think there is some other problem, since I'm getting the error "QueueController object is not callable"  11:57
<exploreshaifali> flaper87: around?  12:12
<vkmc> it's always a good idea to do something else when you are stuck on something  12:20
<vkmc> also, we have to fix the debugger  12:21
<vkmc> so you have an extra tool  12:21
<vkmc> right now I have to do some stuff for work, otherwise I'd help you debug  12:22
<exploreshaifali> yeah okay!!  12:22
<exploreshaifali> no worries..... I will keep trying :)  12:23
<vkmc> would you like to tackle the problem with testtools?  12:23
<vkmc> that requires researching a bit  12:23
<exploreshaifali> yea sure  12:23
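The "QueueController object is not callable" error mentioned above usually means an instance is being invoked as if it were a function or class. A minimal, self-contained sketch of that failure mode follows; the class names mirror the discussion, but the code is illustrative and not Zaqar's actual implementation:

    class QueueController(object):
        def create(self, name, project=None):
            print("create %s/%s" % (project, name))


    class DataDriver(object):
        @property
        def queue_controller(self):
            # Already returns a controller *instance*, not a class or factory.
            return QueueController()


    driver = DataDriver()
    driver.queue_controller.create('fizbit', project='demo')   # works

    try:
        # Calling the instance itself reproduces the error from the chat.
        driver.queue_controller('fizbit', project='demo')
    except TypeError as exc:
        print(exc)  # 'QueueController' object is not callable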
*** shibanis has joined #openstack-zaqar13:19
*** shibanis has left #openstack-zaqar13:19
<vkmc> pcaruana, o/  13:21
<vkmc> exploreshaifali, btw, did you move *all* appearances of queue_controller in DataDriver to ControlDriver?  13:35
<vkmc> exploreshaifali, for instance, in pooling.py?  13:35
*** flwang1 has joined #openstack-zaqar14:00
<flwang1> flaper87: ping  14:10
*** sriram has joined #openstack-zaqar14:14
*** kragniz has quit IRC14:22
*** kragniz has joined #openstack-zaqar14:22
*** flwang1 has quit IRC14:31
*** JAHoagie has joined #openstack-zaqar14:33
*** shaifali_ has joined #openstack-zaqar14:41
*** mpanetta has joined #openstack-zaqar14:42
*** shaifali_ has quit IRC14:44
*** shaifali_ has joined #openstack-zaqar14:45
*** shaifali_ has quit IRC14:45
*** mpanetta has quit IRC14:51
*** mpanetta has joined #openstack-zaqar14:51
*** vipul has quit IRC14:52
*** vipul has joined #openstack-zaqar14:52
*** kgriffs|afk has quit IRC14:58
*** kgriffs|afk has joined #openstack-zaqar14:58
<exploreshaifali> vkmc: sorry I was out for some time  14:58
<vkmc> np  14:58
*** kgriffs|afk is now known as kgriffs14:58
<exploreshaifali> I have actually not moved queue_controller from DataDriver to ControlDriver  14:59
<exploreshaifali> I have added queue_controller to ControlDriver  14:59
<exploreshaifali> and modified the definition of queue_controller in DataDriver  14:59
<exploreshaifali> such that now it does not use QueueController directly  14:59
<vkmc> move it directly  15:00
<exploreshaifali> now it uses the QueueController instance (or its create and delete methods) through ControlDriver  15:00
<exploreshaifali> vkmc: you want me to move the whole queue_controller to ControlDriver?  15:01
<vkmc> exploreshaifali, isn't that the desired change?  15:02
*** cpallares has joined #openstack-zaqar15:02
<exploreshaifali> though the main task is to move queue_controller from the data to the control plane... but if we look deeper, the main thing is that we should not use QueueController directly in the data driver  15:03
<exploreshaifali> as we want the data and control planes to be separated  15:04
<vkmc> so... remove it  15:04
<vkmc> for v1_1 queue creation is lazy  15:04
<vkmc> (a queue is created when a message is sent)  15:04
<vkmc> I really need to see a detailed spec of this change :|  15:05
<vkmc> cpallares, wasaaaaaaaaaaaaap  15:05
<exploreshaifali> vkmc: yea but is a new queue created for each new message we send?  15:05
<cpallares> hey vkmc!  15:05
<cpallares> hi exploreshaifali!  15:05
<exploreshaifali> cpallares: heeeeeeeeeeelllllllllloooooooo :D  15:05
<vkmc> exploreshaifali, suppose you send a message to queue 'fizbit'  15:05
<vkmc> if the queue doesn't exist, 'fizbit' is created  15:06
<vkmc> if it exists, then nothing happens  15:06
<vkmc> although that is not the case for v1_0  15:06
<vkmc> so we might have to add a check for that  15:06
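A hedged sketch of the lazy-creation behaviour vkmc describes for v1_1; the in-memory store and field names are made up for illustration and do not come from Zaqar's code:

    class MessageStore(object):
        def __init__(self):
            self._queues = {}  # (project, queue name) -> list of messages

        def post(self, queue, message, project=None):
            key = (project, queue)
            # v1_1 semantics: the queue is created lazily on the first post;
            # if it already exists, this line is a no-op.
            messages = self._queues.setdefault(key, [])
            messages.append(message)
            return len(messages) - 1  # hypothetical message id


    store = MessageStore()
    store.post('fizbit', {'ttl': 300, 'body': 'hi'}, project='demo')     # creates 'fizbit'
    store.post('fizbit', {'ttl': 300, 'body': 'again'}, project='demo')  # queue already exists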
*** ametts has joined #openstack-zaqar15:07
<exploreshaifali> vkmc: yeah, that is correct  15:07
<exploreshaifali> but I think the method name (queue_controller) doesn't matter  15:07
<exploreshaifali> what that method does is what we need to change  15:08
<exploreshaifali> though I may be wrong  15:08
<vkmc> come again?  15:09
<exploreshaifali> vkmc: see https://github.com/openstack/zaqar/blob/master/zaqar/storage/base.py#L212  15:09
<exploreshaifali> that is an abstract method  15:09
<exploreshaifali> used at line 154  15:09
<vkmc> yep  15:09
<exploreshaifali> and one more time below it  15:10
<vkmc> that is what I'm saying  15:10
<vkmc> remove those lines  15:10
<exploreshaifali> ohhhh....... you are asking to remove those lines?  15:10
<vkmc> yeap  15:10
<vkmc> because queue creation is lazy  15:10
<vkmc> buuuuuuuuuut  15:11
<vkmc> we might need to make a special case for v1_0  15:11
<vkmc> but for now, just to get your server up and running  15:11
<vkmc> try that  15:11
<exploreshaifali> okay  15:11
<exploreshaifali> but let me explain what I was trying to do  15:11
<exploreshaifali> I didn't think to remove those lines  15:12
<exploreshaifali> rather to change the definition of the overridden method  15:12
<vkmc> I know  15:12
<exploreshaifali> hahaha  15:13
<exploreshaifali> okay  15:13
<exploreshaifali> so if I remove those lines for creating and deleting queues  15:13
<exploreshaifali> then I should provide some method for the same task to be done there  15:13
<exploreshaifali> and those methods will be called on the ControlDriver object  15:14
*** JAHoagie has quit IRC15:15
<vkmc> nope  15:15
<vkmc> just remove them  15:15
<vkmc> and leave the health method where it is now  15:16
<exploreshaifali> vkmc: but in that case we will not create and delete queues whenever needed  15:17
<exploreshaifali> right?  15:17
<vkmc> hhmm  15:17
<vkmc> what do you mean by that?  15:17
<exploreshaifali> if I remove those lines and provide no replacement for them  15:18
<vkmc> then, as I mentioned, the queue required will be lazily created  15:18
<vkmc> or that is what I expect  15:18
<exploreshaifali> okay.... let's see :)  15:19
<exploreshaifali> vkmc: thanks!!  15:19
<vkmc> exploreshaifali, np  15:20
<vkmc> try that and let me know  15:20
<exploreshaifali> yeah..... sure!!  15:20
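A rough sketch of the shape of the change being discussed: the abstract queue_controller property lives on the control driver instead of the data driver, so the data plane never touches QueueController directly. This is simplified and partly hypothetical; see zaqar/storage/base.py for the real driver base classes:

    import abc

    import six


    @six.add_metaclass(abc.ABCMeta)
    class DataDriverBase(object):
        """Data plane: messages and claims only."""

        @abc.abstractproperty
        def message_controller(self):
            """Returns the driver's message controller."""

        @abc.abstractproperty
        def claim_controller(self):
            """Returns the driver's claim controller."""

        # queue_controller is intentionally gone from here; with lazy queue
        # creation (v1_1) the data plane no longer needs to create or delete
        # queues itself.


    @six.add_metaclass(abc.ABCMeta)
    class ControlDriverBase(object):
        """Control plane: catalogue, pools, and now queues."""

        @abc.abstractproperty
        def queue_controller(self):
            """Returns the driver's queue controller."""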
*** amitgandhinz has joined #openstack-zaqar15:23
*** kgriffs has quit IRC15:25
*** kgriffs|afk has joined #openstack-zaqar15:25
*** kgriffs|afk is now known as kgriffs15:26
* cpallares waves at kgriffs  15:33
* vkmc waves at *  15:34
<kgriffs> o/  15:37
<vkmc> \o  15:37
*** mpanetta has quit IRC15:37
*** mpanetta has joined #openstack-zaqar15:38
<vkmc> mpanetta, yooo  15:39
*** miqui_ has joined #openstack-zaqar15:49
*** vkmc_ has joined #openstack-zaqar15:53
*** reed has joined #openstack-zaqar15:59
*** vkmc has quit IRC16:03
*** vkmc_ is now known as vkmc16:03
*** vkmc has quit IRC16:03
*** vkmc has joined #openstack-zaqar16:03
*** vkmc_ has joined #openstack-zaqar16:03
<sriram> \o/  16:04
*** JAHoagie has joined #openstack-zaqar16:04
* sriram frantically waves at everyone  16:04
<kragniz> ~o~  16:05
*** vkmc_ has quit IRC16:05
<cpallares> kragniz: o~o  16:10
<vkmc> ~o/  16:10
* vkmc waves back to sriram  16:11
<cpallares> sriram: /o\  16:11
* cpallares salutes vkmc  16:12
* vkmc gives cpallares some crackers  16:13
<vkmc> its lunchtime here :o  16:13
<kragniz> vkmc: eat some lunch  16:14
<vkmc> kragniz, oh I will  16:15
<vkmc> :D  16:15
<kragniz> :( ||| ) <- vkmc eating a cracker  16:15
<vkmc> *crunch* *crunch*  16:15
<vkmc> I'm that annoying colleague that makes weird noises when eating  16:16
<kragniz> I've got a mechanical keyboard, which I think may annoy people  16:16
<vkmc> lol  16:18
*** kgriffs has quit IRC16:26
*** kgriffs has joined #openstack-zaqar16:27
<pcaruana> vkmc hi there, sorry, forgot to change my afk status :)  16:28
<vkmc> pcaruana, no problem :)  16:29
<vkmc> pcaruana, saw you in #sysarmy and #pyar... so I had to say hi :D  16:29
<vkmc> glad to have you here!  16:29
<pcaruana> vkmc, yes, keeping my .ar network ;)  16:30
<vkmc> pcaruana, that's great!  16:30
<vkmc> pcaruana, how long have you been working with the stack?  16:32
<vkmc> I don't know many .ar people involved, unfortunately  16:32
<mpanetta> hiya vkmc :)  16:38
<vkmc> heeeeeeeey mpanetta :)  16:38
<mpanetta> How goes?  16:39
<vkmc> all good... trying to debug something weird with pip :|  16:40
<vkmc> you?  16:40
<pcaruana> vkmc, played a little between blosson and essex, rejoined interest between grizzly and havana. On the other hand, I am an Argie expat living in Europe for at least 5 seasons :)  16:41
<mpanetta> Pip eh?  Is it broken again? heh  16:42
<vkmc> pcaruana, that's super cool! it has been a while then :)  16:43
<vkmc> mpanetta, yeaaaah, not in Zaqar though... I'm with TripleO's DIB  16:44
<vkmc> but it's super weird http://paste.openstack.org/show/155762/  16:44
* vkmc sighs  16:44
*** exploreshaifali is now known as exploreshai|afk16:46
<vkmc> off to lunch, brb  16:49
*** amalagon has quit IRC16:52
*** amalagon has joined #openstack-zaqar16:52
*** amalagon has quit IRC16:57
<mpanetta> Hmm lunch sounds good  16:59
<kragniz> a second lunch also sounds good  17:00
<cpallares> a second lunch does sound good  17:02
<cpallares> I have to eat breakfast first though  17:02
<mpanetta> cpallares: Brunch ftw?  17:03
<kragniz> an n+1 lunch also sounds good, where n is the current lunch number  17:03
<cpallares> mpanetta: Double brunch :P  17:03
<cpallares> haha  17:03
<mpanetta> brunch lunch  17:03
<kragniz> brulunchfast  17:04
*** nakul_cpani has joined #openstack-zaqar17:04
<cpallares> breakbrunlunch  17:06
*** amalagon has joined #openstack-zaqar17:11
*** amalagon has quit IRC17:57
*** amalagon has joined #openstack-zaqar17:57
*** amitgandhinz has quit IRC18:11
*** amitgandhinz has joined #openstack-zaqar18:21
*** nakul_cpani has quit IRC18:46
*** reed has quit IRC18:49
* vkmc lurks  18:50
*** exploreshai|afk has quit IRC18:52
*** nakul_cpani has joined #openstack-zaqar19:13
*** reed has joined #openstack-zaqar19:17
<vkmc> nakul_cpani, how are you doing?  19:19
*** nakul_cpani has quit IRC19:53
*** kgriffs has quit IRC20:01
*** flwang1 has joined #openstack-zaqar20:01
*** kgriffs has joined #openstack-zaqar20:01
*** nakul_cpani has joined #openstack-zaqar20:05
*** flwang1 has quit IRC20:06
*** flwang1 has joined #openstack-zaqar20:08
<flwang1> flaper87: piiiiiiiiiiiiiiiiiiiiiing  20:16
<vkmc> flwang1, hey!  20:17
<flwang1> vkmc: hi  20:18
<vkmc> flwang1, how are you? I saw you uploaded some patches, I'll review those  20:24
<flwang1> vkmc: not bad  20:31
<flwang1> yep, I uploaded 2 patch sets for the notification service  20:32
<flwang1> but I'm still running into some design issues with the pool support for notifications  20:34
<flwang1> not sure if you saw the discussion between kgriffs and me  20:35
<flwang1> vkmc: our current pool implementation really depends on 'queue'  20:35
<vkmc> flwang1, exactly, yes  20:36
<vkmc> I saw your discussion with kgriffs yesterday  20:36
<vkmc> so... in the spec about dropping queues in favor of topics  20:38
<flwang1> vkmc: right, so for messages, claims, and queues, you have to pass in 'queue' as the parameter  20:38
<vkmc> one of the reviewers pointed out that it would be a good idea to keep it in the url  20:38
<flwang1> to get the correct pool  20:38
<vkmc> yeah  20:38
<vkmc> that won't respect REST though  20:38
<flwang1> vkmc: yep, that's one option  20:39
<flwang1> and whether we want to drop queues in the future, that's another problem  20:39
<flwang1> i'd like to get feedback from core reviewers to move on  20:40
<flwang1> since it will impact the design and many other things  20:40
<vkmc> yeah, it's important that we move on with notifications  20:40
<vkmc> I'm also in a similar situation with the persistent transport one  20:40
<flwang1> bad :(  20:41
<vkmc> let's have a meeting tomorrow, flaper87 should be back then  20:42
<flwang1> vkmc: ok, sounds good  20:42
<vkmc> in the meantime, I see that you are mentioning that if we drop the queue concept then all the operations are more complex  20:42
<vkmc> so... we should discuss if dropping the queue concept is actually a good idea  20:42
<kgriffs> seems like we have a couple of topics here ^^^  20:44
<kgriffs> first is simply the design of the API - where/when to specify a queue name when doing CRUD on subscriptions  20:45
*** pcaruana is now known as pcaruana|afk|20:45
<kgriffs> the second is how to map a subscription to a queue, and by extension a pool, right?  20:45
<vkmc> kgriffs, exactly  20:46
<kgriffs> if you know the queue name and the project ID, then you can look up the pool, right?  20:46
<vkmc> right  20:47
<kgriffs> ok, so next question: during which operations do you need to look up the pool when dealing with subscriptions?  20:48
<vkmc> so that happens in a pooled deployment when the number of subscriptions is bigger than a certain value  20:51
<vkmc> flwang1, ^  20:51
<vkmc> that is what I understood at least  20:51
<kgriffs> are you saying that we are sharding the subscriptions list across pools?  20:51
<kgriffs> (the list itself)  20:52
<flwang1> kgriffs: yes  20:52
<flwang1> for now, one subscription will have one record in the database  20:53
<flwang1> that means for a mobile app notification service, the number of subscriptions may be very big  20:53
<flwang1> so it's useful to shard the subscriptions across pools, thoughts?  20:54
* kgriffs is considering  20:54
<flwang1> based on that, for the CRUD actions, there is the 'source' field for creation, so it's ok  20:54
<flwang1> but for get, delete, update, we only have the subscription id and project id, so it would be hard to get the correct pool given we're depending on 'queue' for sharding  20:56
<flwang1> maybe it's time to consider sharding based on 'project' instead of 'queue'  20:56
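An illustrative sketch of the lookup problem flwang1 describes; the dictionary catalogue and function names are hypothetical, not Zaqar's actual catalogue API. The pool catalogue maps (project, queue) to a pool, which works for message and claim requests that carry a queue name, but not for subscription get/update/delete where only a subscription id and project id are known:

    CATALOGUE = {
        ('demo', 'fizbit'): 'pool-1',
        ('demo', 'orders'): 'pool-2',
    }


    def lookup_pool(project, queue):
        """Fine for messages/claims: the request always names the queue."""
        return CATALOGUE[(project, queue)]


    def lookup_pool_for_subscription(project, subscription_id):
        """Not possible as-is: there is no queue name to build a catalogue key.

        This is why the chat floats sharding by 'project' instead of 'queue',
        or keeping subscriptions in the control-plane store altogether.
        """
        raise LookupError('cannot map %s/%s to a pool without a queue name'
                          % (project, subscription_id))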
<vkmc> I remember talking about that with flaper87  20:57
<vkmc> he redirected me to his blog :p  20:57
<kgriffs> so, a few things  20:57
<kgriffs> first, I want to verify our assumption that subscriptions have to be sharded  20:57
<kgriffs> second, I think we should not confuse how we use pools for the data plane with how we (may) want to shard control plane data  20:58
<kgriffs> so, #1  20:58
<kgriffs> the mobile device scenario I suppose would mean subscriptions that use APN, etc. for the delivery  20:59
<flwang1> so you mean there is a proxy point to deliver the notifications, right?  21:00
<flwang1> that means there aren't too many subscriptions we should be concerned about?  21:00
<kgriffs> right, you would need a notification worker that could deliver the notification using mobile push  21:00
<kgriffs> but that worker (or pool of workers) would need to know all the mobile devices it was supposed to push a given message to  21:01
<kgriffs> I guess we need to think about some hypothetical services/apps that would want to do this  21:02
*** JAHoagie has quit IRC21:03
<kgriffs> what we need to figure out is if this use case is something that people would want to use Zaqar for directly. could be, but I want to be sure.  21:03
<kgriffs> for the moment, however, let's assume that we do need to support this  21:04
<flwang1> kgriffs: so you believe the key question is whether there will be a huge number of subscriptions in the Zaqar db, right?  21:04
<kgriffs> how many subscriptions would that be? 10 million? more?  21:04
<flwang1> kgriffs: I have no idea, but let's assume it's a number where we should be concerned about performance  21:05
*** nakul_cpani has quit IRC21:06
*** nakul_cpani has joined #openstack-zaqar21:06
<kgriffs> so, I would recommend testing mongo, mysql, maybe also redis - just seed it with, say, 10 million rows containing the kind of data we need for each subscription record. Add an index for lookups based on queue name and project ID. Then do some quick and dirty query perf tests, and see how much storage space the set of records consumes. Then do the same for 100 million records.  21:12
<kgriffs> That will let us know whether a single DB cluster is up to the task.  21:13
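A quick-and-dirty version of the experiment kgriffs suggests, shown here for MongoDB with pymongo; the database, collection, and field names are made up, and this is a rough sketch rather than a rigorous benchmark:

    import random
    import time
    import uuid

    import pymongo

    N = 10 * 1000 * 1000  # seed 10 million fake subscription records; repeat with 100M
    client = pymongo.MongoClient('mongodb://localhost:27017')
    coll = client.zaqar_test.subscriptions

    batch = []
    for i in range(N):
        batch.append({
            'p': 'project-%d' % (i % 1000),                # project id
            'q': 'queue-%d' % (i % 10000),                 # queue name
            's': 'http://example.com/%s' % uuid.uuid4(),   # subscriber URI
            't': 86400,                                    # ttl
        })
        if len(batch) == 10000:
            coll.insert_many(batch)
            batch = []
    if batch:
        coll.insert_many(batch)

    # Index for lookups based on queue name and project ID, as suggested above.
    coll.create_index([('p', pymongo.ASCENDING), ('q', pymongo.ASCENDING)])

    # Quick and dirty query perf test: a few thousand point lookups.
    start = time.time()
    for _ in range(5000):
        coll.find_one({'p': 'project-%d' % random.randrange(1000),
                       'q': 'queue-%d' % random.randrange(10000)})
    print('avg lookup: %.2f ms' % ((time.time() - start) / 5000 * 1000))

    # How much storage space the set of records consumes.
    print(client.zaqar_test.command('collstats', 'subscriptions'))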
<flwang1> so you prefer to keep one db cluster and put it on the management plane, is that it?  21:14
<flwang1> sorry  21:15
<flwang1> I mean keep the table on the management plane, is that it  21:15
<kgriffs> well, I think regardless of what we do, we should keep the subscription data separate from the messages. they will have different scaling and performance characteristics.  21:16
<kgriffs> also, consider that subscriptions are mostly read-only  21:19
<kgriffs> you will read them way more often than write them  21:19
<flwang1> kgriffs: good points  21:19
<kgriffs> this changes how you might want to deploy them  21:19
<kgriffs> I suppose the pool catalog is similar.  21:20
<kgriffs> so to me, it makes sense to have a DB cluster dedicated to the control data (pool catalog, subscriptions) and then N separate clusters, AKA pools, for messages  21:20
<flwang1> kgriffs: makes sense to me, but it depends on the performance  21:21
<kgriffs> I suspect that 100 million records wouldn't be a big deal for a single control plane cluster. You couldn't run it on a Raspberry Pi, but you wouldn't need an IBM Power box either  21:22
<kgriffs> but yeah, we should check it out  21:22
<flwang1> kgriffs: yep  21:22
<flwang1> kgriffs: we can revisit this topic after flaper87 is back  21:22
<kgriffs> scaling a read-heavy workload can be done with mysql by reading from a pool of slaves behind HAProxy or similar  21:23
<kgriffs> but it is still one overall cluster, so no need for sharding  21:23
<kgriffs> on mongo you could easily use built-in sharding on a single cluster  21:23
<flwang1> kgriffs: right, TBH, that was my initial design  21:23
<kgriffs> and also with redis, redis-cluster would probably work well (TBD)  21:23
<flwang1> but then maybe I'm just overthinking it  21:23
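For the "built-in sharding" option kgriffs mentions for mongo, a hedged sketch using the standard MongoDB admin commands through pymongo; the database and collection names and the shard key are illustrative, and the commands must be run against the mongos router of a sharded cluster rather than a standalone mongod:

    import pymongo

    client = pymongo.MongoClient('mongodb://mongos.example.com:27017')

    # Enable sharding for the database holding the control-plane data...
    client.admin.command('enableSharding', 'zaqar')

    # ...and shard the subscriptions collection on (project, queue), matching
    # the lookup index from the seeding experiment above.
    client.admin.command('shardCollection', 'zaqar.subscriptions',
                         key={'p': 1, 'q': 1})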
<kgriffs> anyway, like I said, we should actually test it out a little before jumping to conclusions  21:24
<flwang1> agree  21:24
<flwang1> I will paste the above discussion into the etherpad  21:24
<kgriffs> flwang1: heh, it is always good to think about handling huge numbers of records/traffic/etc. But you have to balance that with YAGNI  21:24
<kgriffs> kewl  21:24
* vkmc was in read-only mode  21:25
<flwang1> https://etherpad.openstack.org/p/zaqar-notification  21:26
*** JAHoagie has joined #openstack-zaqar21:26
*** nakul_cpani has quit IRC21:47
*** sriram has quit IRC22:16
*** mpanetta has quit IRC22:19
*** kgriffs has quit IRC23:20
*** miqui has quit IRC23:20
*** notmyname has quit IRC23:21
*** flwang1 has quit IRC23:21
*** reed has quit IRC23:21
*** ekarlso- has quit IRC23:21
*** notmyname has joined #openstack-zaqar23:22
*** flwang1 has joined #openstack-zaqar23:22
*** reed has joined #openstack-zaqar23:22
*** ekarlso- has joined #openstack-zaqar23:22
*** boris-42 has quit IRC23:24
*** miqui has joined #openstack-zaqar23:25
*** boris-42 has joined #openstack-zaqar23:26
*** amitgandhinz has quit IRC23:37
*** kgriffs has joined #openstack-zaqar23:55
