Tuesday, 2016-02-09

00:11 *** arnoldje has joined #openstack-performance
00:17 *** rfolco has quit IRC
00:20 *** rfolco has joined #openstack-performance
00:20 *** rfolco has quit IRC
02:43 *** ngoracke has quit IRC
02:49 *** pbelamge has joined #openstack-performance
03:09 *** pbelamge has quit IRC
03:27 *** dims has joined #openstack-performance
03:28 *** dims_ has quit IRC
03:30 *** dims has quit IRC
03:43 *** dims has joined #openstack-performance
03:44 *** ngoracke has joined #openstack-performance
03:47 *** dims has quit IRC
03:49 *** ngoracke has quit IRC
04:50 *** mdorman has joined #openstack-performance
05:08 *** arnoldje has quit IRC
05:34 *** mdorman has quit IRC
08:37 *** zz_dimtruck is now known as dimtruck
08:51 *** dimtruck is now known as zz_dimtruck
09:55 <johnthetubaguy> DinaBelova: yeah, it's probably worth trying to get some kind of functional tests ready on top of that patch, so we can merge it quickly in Newton; that's the current blocker for getting it merged. The code is there now, it's just that we don't have any path towards functional tests yet
09:55 <johnthetubaguy> I totally missed that till the last moment, which is annoying, but happens
10:44 *** rfolco has joined #openstack-performance
10:45 *** dims has joined #openstack-performance
10:49 *** dims has quit IRC
10:50 *** dims has joined #openstack-performance
11:40 *** dims_ has joined #openstack-performance
11:40 *** dims has quit IRC
11:43 *** boris-42 has quit IRC
11:51 *** xek_ is now known as xek
12:11 *** dims_ has quit IRC
12:19 *** dims_ has joined #openstack-performance
12:39 *** openstackgerrit_ has joined #openstack-performance
12:50 *** ngoracke has joined #openstack-performance
12:54 *** ngoracke has quit IRC
13:31 *** regXboi has joined #openstack-performance
13:41 *** mriedem has joined #openstack-performance
13:41 *** ngoracke has joined #openstack-performance
13:56 *** openstackgerrit_ has quit IRC
14:03 *** dims_ has quit IRC
14:10 *** dims has joined #openstack-performance
14:44 *** zz_dimtruck is now known as dimtruck
14:56 *** cgross has quit IRC
14:56 *** cgross has joined #openstack-performance
15:02 *** dimtruck is now known as zz_dimtruck
15:27 *** spyderdyne has joined #openstack-performance
15:31 *** zz_dimtruck is now known as dimtruck
15:35 <spyderdyne> did this team's weekly meeting start 5 minutes ago, or do I have the wrong time?
15:45 *** mdorman has joined #openstack-performance
15:46 *** mdorman has quit IRC
15:47 *** mdorman has joined #openstack-performance
15:55 <DinaBelova> spyderdyne: you're in the right place :)
15:56 *** AugieMena has joined #openstack-performance
15:56 *** anevenchannyy__ has joined #openstack-performance
15:56 <DinaBelova> spyderdyne: we'll start in 4 mins
15:58 <DinaBelova> kun_huang, harlowja - are you around?
16:00 <DinaBelova> #startmeeting Performance Team
16:00 <openstack> Meeting started Tue Feb  9 16:00:20 2016 UTC and is due to finish in 60 minutes.  The chair is DinaBelova. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00 <openstack> The meeting name has been set to 'performance_team'
16:00 <DinaBelova> ok guys, I'm not sure if people are around :)
16:00 <andreykurilin> o/
16:00 <DinaBelova> andreykurilin o/
16:00 *** rvasilets_ has joined #openstack-performance
16:01 <DinaBelova> anyone else around? :)
16:01 <rvasilets_> me =)
16:01 <rvasilets_> Hi
16:01 <DinaBelova> rvasilets_ o/
16:01 <AugieMena> Hi
16:01 <DinaBelova> we have the usual agenda
16:02 <DinaBelova> #topic Action Items
16:02 <DinaBelova> last time we had in fact only one official action item, on me
16:02 *** redixin has joined #openstack-performance
16:02 <DinaBelova> I went through the spec about performance testing by the Dragonflow team - it seems to be ok
16:03 <DinaBelova> the only thing I need to do right now is to make them publish it to our docs as well
16:03 <DinaBelova> :)
16:04 <DinaBelova> as for the other changes to performance-docs, we in fact didn't have much progress last week - no significant changes proposed
16:04 *** gokrokve has joined #openstack-performance
16:04 <DinaBelova> so it looks like this meeting will be quick :)
16:04 <DinaBelova> #topic Test plans statuses
16:04 <gokrokve> hi
16:04 <DinaBelova> gokrokve o/
16:04 *** ikhudoshyn has joined #openstack-performance
16:04 <gokrokve> Give me a couple of minutes at the end, please
16:05 <DinaBelova> gokrokve sure
16:05 <gokrokve> We can discuss what we will do in the lab in Q1
16:05 <DinaBelova> gokrokve ack
16:05 <DinaBelova> so, going back to the test plans
16:06 <DinaBelova> currently some refactoring work is in progress by ilyashakhat
16:06 * SpamapS has a test plan with a status.
16:06 <DinaBelova> he's splitting the test plans into something generic + examples with real tools
16:06 *** pglass has joined #openstack-performance
16:06 <DinaBelova> SpamapS - oh, cool :) can you share some details?
16:07 <SpamapS> DinaBelova: certainly
16:07 <SpamapS> So, unfortunately, the ansible I'm using to deploy is not open source, though I'm working on that. :) But we've successfully spun up tooling to test fake nova in 1000 containers using just a couple of smallish hosts.
16:08 <SpamapS> We are going to test 2000 nova-computes this week, and also the new Neutron OVN work, which moves a lot of the python+rabbitmq load off into small C agents.
16:08 <DinaBelova> SpamapS wow, that's very cool
16:08 <SpamapS> Our results show that with a _REALLY_ powerful RabbitMQ server, you can absolutely scale Nova well beyond 1000 hypervisors.
16:09 <DinaBelova> SpamapS do you have any plans to push the test plan and the results to the performance-docs?
16:09 <DinaBelova> to share it with a wider audience?
16:09 <SpamapS> 1000 hypervisors, with 60 busy users, results in RabbitMQ using 25 cores (fast baremetal cores) and 10GB of RAM.
16:09 <SpamapS> I plan to publish a paper and discuss it at the summit.
16:09 <SpamapS> And I'm hoping to include ansible roles for spinning up the thousands of fake novas.
16:10 <SpamapS> What we have found, though, is that the scheduler is a HUGE bottleneck.
16:10 <DinaBelova> SpamapS - I think we can place all the testing artefacts in the performance-docs as well
16:10 <DinaBelova> SpamapS did you file a bug?
16:10 <SpamapS> With 2 schedulers running, they eat 2 cores up, and take increasingly long to finish scheduling requests as more busy users are added.
16:11 *** boris-42 has joined #openstack-performance
16:11 <SpamapS> I have not filed bugs just yet, because I have not tested with Mitaka yet. Most of this has been on Liberty.
16:11 <DinaBelova> SpamapS ack
16:11 *** redixin has quit IRC
16:11 <SpamapS> Keystone and Neutron also eat cores very fast, but at a linear scale so far.
16:12 <DinaBelova> ok, so can you keep us posted on the progress of this testing? also, sharing the test plan is very welcome so we can take a deeper look :)
16:12 <SpamapS> The problem with the scheduler is that when we run with 3, it does not speed up scheduling, because they start stepping on each other, resulting in retries.
16:13 <gokrokve> This is interesting. I heard there is a config option for the scheduler to increase the number of threads.
16:13 <SpamapS> So yeah, I hope to open this up a bit. My long-term goal is to be able to spin up a similar version of this inside infra, with maybe 10 nova-computes and my counter-inspection stuff verifying that we haven't had any non-linear scale effects.
16:13 <SpamapS> gokrokve: yeah, threads just don't help unfortunately.
16:13 <SpamapS> We have some ideas how they could though
16:13 <gokrokve> Nice
16:13 <SpamapS> Like, have a random filter that randomly removes half the hosts that come back as possibilities.
16:14 <SpamapS> should reduce conflicts
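A minimal sketch of that random-subset idea (a hypothetical helper, not an actual Nova filter):

```python
import random

def random_subset(hosts, keep=0.5, rng=random):
    """Keep each candidate host with probability `keep`.

    Run independently in every scheduler process, this makes concurrent
    schedulers mostly consider different hosts, so they pick the same
    host (and trigger retries) far less often.
    """
    return [h for h in hosts if rng.random() < keep]

# e.g. candidates = random_subset(["node-%04d" % i for i in range(1000)])
```

In Nova proper this would be wrapped as a scheduler filter, but the whole trick fits in one line.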
16:14 <gokrokve> Sounds like a workaround
16:14 <gokrokve> I wonder if the Gantt project is alive and wants to fix this issue
16:15 <SpamapS> Long term, I'm not sure we can get away from cells as the answer for this.
16:15 <SpamapS> AFAIK, gantt is dead.
16:15 <SpamapS> But maybe it's doing things more now and I missed it.
16:15 <gokrokve> That's what I heard as well
16:15 <DinaBelova> hm, I wonder what mechanism the schedulers are using for coordination (if any) - it looks like it's wrong
16:15 <SpamapS> they have 0 coordination with each other.
16:15 <SpamapS> They read the database, choose a host, then attempt to claim resources.
16:16 <DinaBelova> SpamapS :D heh, looks like that's the issue
16:16 <SpamapS> What they should do is attempt to claim things in the database, all in one step.
16:16 <gokrokve> +1
16:16 <DinaBelova> +1
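A sketch of that "claim in one step" idea: a single conditional UPDATE that only succeeds while the host still has room, so two schedulers cannot both claim the same capacity. Table and column names here are illustrative, not Nova's actual schema.

```python
import sqlalchemy as sa

def try_claim(conn, host, vcpus, ram_mb):
    """Atomically claim capacity on one host; True if we won the race."""
    result = conn.execute(sa.text(
        "UPDATE compute_nodes"
        " SET vcpus_used = vcpus_used + :vcpus,"
        "     free_ram_mb = free_ram_mb - :ram"
        " WHERE hypervisor_hostname = :host"
        "   AND vcpus - vcpus_used >= :vcpus"
        "   AND free_ram_mb >= :ram"),
        {"host": host, "vcpus": vcpus, "ram": ram_mb})
    # 0 rows updated means another scheduler got there first: retry elsewhere.
    return result.rowcount == 1
```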
16:16 <gokrokve> Sounds like we need to engage the nova team
16:17 <gokrokve> We can start with Dims and Jay Pipes.
16:17 <SpamapS> Also they could collaborate by making a hash ring of hosts, and only ever scheduling things to the hosts they own on the hash ring until those hosts fail, and then move down the line to the next hosts.
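A toy consistent hash ring along those lines, purely illustrative: each scheduler deterministically owns a slice of the compute hosts, so two schedulers rarely compete for the same host.

```python
import bisect
import hashlib

class HashRing(object):
    """Map each compute host to exactly one owning scheduler."""

    def __init__(self, schedulers, replicas=64):
        # Several virtual points per scheduler give an even spread.
        self._ring = sorted(
            (self._hash("%s-%d" % (s, i)), s)
            for s in schedulers for i in range(replicas))
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def owner(self, host):
        """Which scheduler is responsible for placing on this host?"""
        idx = bisect.bisect(self._keys, self._hash(host)) % len(self._ring)
        return self._ring[idx][1]

# ring = HashRing(["sched-1", "sched-2", "sched-3"])
# ring.owner("compute-0042") -> the one scheduler allowed to claim it
```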
16:17 <SpamapS> The nova team is focused on cells.
16:17 <SpamapS> That is their answer.
16:17 <SpamapS> And I am increasingly wondering if that will just have to be the answer (even though I think it makes things unnecessarily complex)
16:17 <DinaBelova> well, we can ping someone from the Mirantis Nova team and find out if they are interested in fixing it
16:18 <SpamapS> To be clear, I'm interested in fixing it too. :)
16:18 <DinaBelova> SpamapS I would say it'll be overkill
16:18 <DinaBelova> SpamapS :D
16:18 <SpamapS> Yeah, we want to be able to run cells of 1000+ hypervisors.
16:18 <DinaBelova> well, yeah - so let's keep our eye on this issue and the testing
16:19 <SpamapS> One reason cells have been kept below 800 is that RAX ops is just not built to improve OpenStack reliability, so they have needed to work around the limitations while development works on features.
16:19 *** jaypipes has joined #openstack-performance
16:19 <jaypipes> somebody say scheduler?
16:19 <jaypipes> :P
16:19 <SpamapS> Everybody else who is running big clouds is in a similar boat. Cells allow you to make small clouds and stitch them together.
16:19 *** cdent has joined #openstack-performance
16:19 <SpamapS> jaypipes: \o/
16:19 <DinaBelova> #info SpamapS is working on testing 2000 nova-computes, and the new Neutron OVN work which moves a lot of the python+rabbitmq load off into small C agents.
16:19 <DinaBelova> jaypipes :D
16:19 <SpamapS> jaypipes: indeed. I'll summarize where I'm bottlenecking...
16:20 <DinaBelova> SpamapS absolutely agree with your thoughts
16:20 <SpamapS> jaypipes: We run 1000 pretend fake hypervisors, and 60 pretend users doing create/list/delete.
16:21 <SpamapS> jaypipes: With 1 scheduler, this takes an average of 60s per boot. With 2 schedulers, this takes an average of 25s per boot. With 3 schedulers, this takes an average of 60s per boot.
16:21 <SpamapS> jaypipes: the 3rd scheduler actually makes it worse, because retries.
16:21 *** klindgren__ is now known as klindgren
16:21 <jaypipes> SpamapS: yes, that is quite expected.
16:21 <SpamapS> yeah, no shock on this face ;)
16:22 <DinaBelova> but not as scalable as people probably want :D
16:22 <SpamapS> jaypipes: I think one reason there's no urgency on this problem is that cells is just expected to be the way we get around this.
16:23 <jaypipes> SpamapS: not really... the medium-to-long-term solution to this is moving claims to the scheduler and getting rid of the cache in the scheduler.
16:23 <SpamapS> But a) I want to build really big cells with fewer bottlenecks, and b) what about a really busy medium-sized cell?
16:23 <SpamapS> jaypipes: claims in the scheduler is definitely the thing I want to work on.
16:23 <cdent> the patchset that's in progress right now to duplicate claims in the scheduler probably helps those retries
16:23 * cdent locates
16:23 <DinaBelova> SpamapS - it looks like jaypipes has the same approach
16:23 <DinaBelova> yeah
16:24 <SpamapS> sweet
16:24 <jaypipes> SpamapS: I am writing an email to the ML at the moment that describes the progress on the resource tracker and scheduler front. you should read it when it arrives in your inbox.
16:24 <cdent> https://review.openstack.org/#/c/262938/
16:24 <DinaBelova> #link https://review.openstack.org/#/c/262938/
16:25 <SpamapS> jaypipes: wonderful. I have been editing a draft of questions since I started this work, and have never been ready to send it because I keep finding new avenues to explore. Hopefully I can just delete the draft and reply to you. :)
16:25 <jaypipes> cdent: actually, that patch doesn't help much at all... and is likely to be replaced in the long run, but I suppose it serves the purpose of illustration for the time being.
16:25 <DinaBelova> SpamapS :D
16:26 <cdent> jaypipes: I know it is going to be replaced, it's just a datapoint
16:26 <SpamapS> cdent: oh yes, that's definitely along the lines I was thinking of to get them to stop stomping on each other.
16:26 <SpamapS> The other thing I want to experiment with is a scheduler that runs on each nova-compute, and pulls work.
16:26 <DinaBelova> SpamapS jaypipes cdent - anything else here?
16:27 <cdent> It's a given that the shared state management in the scheduler, inter and intra, is a mess; the lock-and-claim stuff is just a stopgap.
16:27 <jaypipes> SpamapS: that's not the direction we're going, but it is an interesting conversation nonetheless
16:28 <SpamapS> jaypipes: yeah no, it's a spike to solve other problems. Making what the general community has work well would be preferred.
16:28 <cdent> presumably it will just move the slowness around rather than getting it right sooner
16:28 <SpamapS> cdent: it would strip all the features out.
16:28 <DinaBelova> #info let's go through jaypipes' email that describes the progress on the resource tracker and scheduler front
16:28 <SpamapS> precalc queues based on flavors only
16:29 <cdent> SpamapS: sorry, my "it" was the patch I was referencing.
16:29 <SpamapS> and each compute can decide "can I handle that flavor right now? yes: I will pull jobs from that queue"
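A toy version of that pull model, with made-up names: one queue per flavor, and each nova-compute pulls a request only when it can currently fit that flavor, so no central scheduler has to guess.

```python
import queue
import time

def compute_loop(host, flavor_queues, can_fit, boot, idle=0.1):
    """Pull-based placement sketch: the compute node itself decides which
    flavors it can fit right now and pulls work only from those queues.
    `can_fit` and `boot` are hypothetical local callbacks, not Nova APIs."""
    while True:
        pulled = False
        for flavor, q in flavor_queues.items():
            if not can_fit(host, flavor):
                continue  # no room for this flavor at the moment
            try:
                request = q.get_nowait()
            except queue.Empty:
                continue
            boot(host, flavor, request)  # claiming is implicit: we pulled it
            pulled = True
        if not pulled:
            time.sleep(idle)  # nothing we can take right now

# flavor_queues = {"m1.small": queue.Queue(), "m1.large": queue.Queue()}
```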
16:29 <SpamapS> cdent: ah k.
16:29 <SpamapS> anyway, yes I patiently await your email sir pipes.
16:30 <DinaBelova> ok, cool
16:30 <SpamapS> Oh, also we haven't been able to try zmq to solve the 25-core-eating RMQ monster, because of time constraints.
16:30 <SpamapS> Turns out a couple of 56-core boxes are relatively cheap when you're building out thousands of hypervisors.
16:31 *** alaski has joined #openstack-performance
16:31 <dims> SpamapS: on that front, we need help with the zmq driver development
16:32 <DinaBelova> SpamapS our internal experiments show that ZMQ is much quicker and eats fewer resources - but it'll be interesting to take a look at a comparison of the results on your env and your load indeed
16:32 <dims> SpamapS: and testing of course :)
16:32 <DinaBelova> SpamapS yeah, dims is right
16:32 <DinaBelova> work on the zmq driver is in progress
16:32 <SpamapS> dims: I hope to test it later this week once I get through repeating my work with mitaka.
16:32 <DinaBelova> and continuous improvement
16:32 <DinaBelova> SpamapS ack
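For anyone wanting to repeat that comparison, switching oslo.messaging to the zmq driver was roughly this kind of config change; the option names are recalled from the Mitaka-era driver, so treat this as a sketch and verify against the oslo.messaging docs.

```ini
[DEFAULT]
# Use the ZeroMQ driver instead of the default rabbit:// transport.
transport_url = zmq://
rpc_zmq_matchmaker = redis

[matchmaker_redis]
host = 127.0.0.1
port = 6379
```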
16:33 <SpamapS> I also need to send notifications via rabbitmq still, so that will be interesting. :)
16:33 <DinaBelova> :D yeah
16:33 <DinaBelova> So I suppose we may move on?
16:33 <SpamapS> Yes, thanks for the in-depth discussion though. :)
16:34 <DinaBelova> SpamapS you're welcome sir :)
16:34 <DinaBelova> I just hope to get this info publicly available soon
16:34 <DinaBelova> ok, so let's jump to the osprofiler topic
16:34 <DinaBelova> #topic OSProfiler weekly update
16:35 <DinaBelova> in fact the profiler had a nervous week :)
16:35 <DinaBelova> we broke some gates with the 1.0.0 release and the approach we used for tracing static methods
16:35 <DinaBelova> :D
16:36 <DinaBelova> so the workaround was to skip staticmethod tracing for now, as it has lots of corner cases that need to be treated separately
16:36 <DinaBelova> and the 1.0.1 release seems to work ok - some internal testing by the Mirantis folks who needed it has shown good results
16:37 <DinaBelova> dims thanks one more time for pushing the 1.0.1 release quickly
16:37 <DinaBelova> #info Neutron change https://review.openstack.org/273951 is still in progress
16:38 <DinaBelova> sadly I did not have enough time to make it fully workable yet, that's still in progress
16:38 <DinaBelova> the situation with the nova change is a bit different - but with the same result
16:39 <DinaBelova> #info Nova change https://review.openstack.org/254703 won't land in Mitaka due to the lack of functional testing
16:39 <DinaBelova> so either Boris or I will need to add this specific job to get it landed early in Newton
16:40 <DinaBelova> johnthetubaguy - thanks btw for letting me know what you guys want to see to get it merged :) it's sad that it won't land in the M release, but still
16:41 <DinaBelova> in fact all the other work is still in progress, with the patches for neutron, nova, and keystone on review
16:42 <DinaBelova> any questions regarding the profiler?
16:42 <DinaBelova> ok, so let's go to the open discussion
16:42 <DinaBelova> #topic Open Discussion
16:42 <DinaBelova> gokrokve the floor is yours
16:43 <gokrokve> Cool. Thanks
16:43 <gokrokve> As you probably know, we now have a lab with 240 physical nodes
16:44 <gokrokve> In Q1 we plan to do several tests related to: MQ (RabbitMQ vs ZMQ), DB (MySQL Galera) and Nova conductor behavior
16:44 <gokrokve> In addition to that, we are working on a rally enhancement to add networking workloads to test data-plane networking performance for tenant networks
16:44 <DinaBelova> + baremetal provisioning scalability for the cloud deployment
16:45 <gokrokve> This will be done in upstream rally. Most of the functionality is already available, but we need to create the workload itself
16:45 <gokrokve> Yep, and baremetal as well.
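A data-plane workload like the one gokrokve describes could eventually look something like this rally task, sketched on top of rally's existing VMTasks.boot_runcommand_delete scenario; the flavor, image, network, and script names are placeholders.

```json
{
  "VMTasks.boot_runcommand_delete": [
    {
      "args": {
        "flavor": {"name": "m1.small"},
        "image": {"name": "ubuntu-14.04"},
        "floating_network": "public",
        "command": {"script_file": "iperf_to_peer.sh", "interpreter": "/bin/sh"},
        "username": "ubuntu"
      },
      "runner": {"type": "constant", "times": 20, "concurrency": 4},
      "context": {"users": {"tenants": 2, "users_per_tenant": 2}}
    }
  ]
}
```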
16:45 *** ikhudoshyn has left #openstack-performance
16:46 <gokrokve> We are also working on the test results format
16:46 <gokrokve> So that it can be published more or less automatically in RST format
16:46 <DinaBelova> gokrokve I think almost all the details are published in the test plans; we'll probably change them if something else needs to be covered
16:46 <gokrokve> I hope we will be able to upstream this as well
16:46 <DinaBelova> gokrokve indeed
16:47 <gokrokve> Sure, the test plans will be changed accordingly
16:47 <DinaBelova> also, AFAIR kun_huang's lab is going to appear sometime close to Feb 18th
16:47 <gokrokve> This is great.
16:47 <DinaBelova> so listomin will coordinate whether we can run something else on that lab as well
16:47 <DinaBelova> gokrokve +1 :)
16:48 <gokrokve> The more independent runs we have, the more confident we will be in the results
16:48 <DinaBelova> gokrokve indeed
16:48 <gokrokve> So that is almost it.
16:48 <gokrokve> I just wanted to share these plans for Q1 with the audience.
16:48 <DinaBelova> gokrokve thank you sir
16:49 <DinaBelova> ok, folks, any questions?
16:50 <DinaBelova> ok, so thanks everyone for coming, and special thanks to SpamapS for raising a very interesting topic
16:50 <DinaBelova> see u guys
16:50 <DinaBelova> #endmeeting
16:50 <openstack> Meeting ended Tue Feb  9 16:50:16 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
16:50 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/performance_team/2016/performance_team.2016-02-09-16.00.html
16:50 *** anevenchannyy___ has joined #openstack-performance
16:50 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/performance_team/2016/performance_team.2016-02-09-16.00.txt
16:50 <openstack> Log:            http://eavesdrop.openstack.org/meetings/performance_team/2016/performance_team.2016-02-09-16.00.log.html
16:53 *** anevenchannyy__ has quit IRC
16:58 *** dimtruck is now known as zz_dimtruck
16:58 *** zz_dimtruck is now known as dimtruck
16:58 *** spyderdyne has quit IRC
17:07 *** redixin has joined #openstack-performance
17:25 *** amaretskiy1 has quit IRC
17:37 *** gokrokve has quit IRC
17:54 *** anevenchannyy___ has quit IRC
18:00 *** pglass has quit IRC
18:02 *** cdent has quit IRC
18:43 *** cdent has joined #openstack-performance
18:49 *** mriedem has quit IRC
18:51 *** mriedem has joined #openstack-performance
18:55 <harlowja> damn, guess i should have come to that meeting
18:55 <harlowja> lol
19:01 <harlowja> SpamapS good stuff btw
19:01 * harlowja read the meeting logs
19:04 <harlowja> SpamapS 'precalc queues based on flavors only' - didn't u suggest that a few years ago, lol
19:04 <harlowja> or was that someone else
19:04 * harlowja it's all a blur
19:06 <harlowja> jaypipes i also await your email :)
19:14 *** pglass has joined #openstack-performance
19:17 *** suro-patz has joined #openstack-performance
19:17 *** dims_ has joined #openstack-performance
19:17 *** dims has quit IRC
19:27 *** dims_ has quit IRC
19:30 *** dims has joined #openstack-performance
19:39 *** ngoracke has quit IRC
19:39 *** ngoracke has joined #openstack-performance
19:41 *** dims has quit IRC
19:42 *** dims has joined #openstack-performance
19:45 *** mriedem has quit IRC
19:46 *** dims_ has joined #openstack-performance
19:47 *** mriedem has joined #openstack-performance
19:49 *** dims has quit IRC
19:57 *** dims_ has quit IRC
20:08 *** rfolco has quit IRC
20:16 <harlowja> DinaBelova so i think i got a volunteer from yahoo that will show up to this weekly (since i never get up), another senior guy here
20:16 <harlowja> let's see if that happens :)
20:19 <DinaBelova> harlowja a volunteer who will be doing what? :)
20:19 <harlowja> stuff
20:19 <harlowja> lol
20:19 <DinaBelova> anyway, glad to hear this :)
20:19 <DinaBelova> :D
20:19 <harlowja> stuff stuff
20:19 <harlowja> lol
20:19 <harlowja> all the stuff
20:19 <harlowja> lol
20:20 <DinaBelova> performance-related, I hope :D
20:20 <harlowja> ya, he has been doing that recently
20:20 <harlowja> (with ironic)
20:20 <harlowja> so he can offer insight there as well
20:53 <DinaBelova> harlowja - cool!
20:54 <DinaBelova> https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7399
20:54 <DinaBelova> btw
20:54 <DinaBelova> ^^
20:54 <harlowja> cool
20:54 <harlowja> would love to see!
20:54 <harlowja> obviously
20:54 <harlowja> lol
20:54 <DinaBelova> :D
20:54 <harlowja> 'Her experience includes close interaction with all OpenStack projects and their improvement to satisfy the needs of cloud users.'
20:54 <harlowja> good job!
20:54 <harlowja> lol
20:54 <harlowja> :)
20:54 <harlowja> pretty sure u are dina
20:54 <harlowja> haha
20:56 <DinaBelova> harlowja :D lol, you know these bullshitty descriptions you sometimes need to either write yourself or read what others (marketing, usually :D) write about you :D
20:57 <harlowja> ;)
20:57 <harlowja> ya, i know, ha
20:57 <DinaBelova> Joshua Harlow is one of the technical leads on the OpenStack team who has helped OpenStack gain initial and continued traction inside Yahoo!. He was a member of the initial CTO subteam investigating IAAS solutions where OpenStack was chosen, and has been pushing OpenStack from the start of its usage at Yahoo!. Joshua has been involved in OpenStack since 2011 and has contributed patches to most of the core projects (with the hope of contributing many more patches soon).
20:57 <DinaBelova> :D
20:57 <harlowja> superduper
20:57 <harlowja> i wrote that myself i think
20:57 <harlowja> lol
20:57 <DinaBelova> well, yours is also pathetic :D
20:57 <harlowja> lol
20:57 <harlowja> i gotta add more awesome to that
20:57 <harlowja> lol
20:58 <DinaBelova> :D https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8490
20:58 <DinaBelova> cool topic btw
20:58 <harlowja> ha
20:58 <harlowja> vilobh did all that, i just tag along to talks, lol
20:59 <harlowja> i am the support-speaker
20:59 <DinaBelova> :D
20:59 <harlowja> (aka support the other speaker)
20:59 <harlowja> hmmm
20:59 <harlowja> 'Vice Executive Hacker, Yahoo!'
20:59 <harlowja> i think i need to now be 'Executive'
20:59 <harlowja> no more vice crap
20:59 <harlowja> lol
20:59 <DinaBelova> yeah, that's VERY pathetic :D
20:59 <harlowja> lol
21:00 <harlowja> Chief Executive Hacker
21:00 <harlowja> maybe that
21:00 <harlowja> lol
21:00 <DinaBelova> :D
21:00 <harlowja> Chief Superman
21:00 <harlowja> maybe that
21:00 <harlowja> idk
21:00 <harlowja> ha
21:00 <harlowja> decisions decisions
21:00 <harlowja> lol
21:01 <DinaBelova> harlowja btw you'll probably be interested in this as well: https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/7424
21:01 <DinaBelova> folks from our Horizon team are working on this now
21:01 <harlowja> cool
21:01 <DinaBelova> interesting stuff
21:01 <harlowja> that dina person is in that one too
21:01 <harlowja> weird!
21:01 <harlowja> lol
21:02 <DinaBelova> yeah, I'm helping them with the profiler :D
21:02 <harlowja> :)
21:03 <DinaBelova> btw some guys from our Cinder team have several Cinder-performance-related bugs, and they asked for help with deploying/using the profiler. I gave them instructions, etc.
21:03 <DinaBelova> then something strange happened - the traces were weird
21:03 <harlowja> lol
21:03 <DinaBelova> we started to debug
21:03 *** dims has joined #openstack-performance
21:03 <harlowja> did cinder take too many drugs?
21:03 <harlowja> bad-drugs
21:03 <DinaBelova> and found out 2 things
21:03 <DinaBelova> 1) cinder was broken on the env
21:03 <DinaBelova> :D
21:04 <DinaBelova> 2) folks forgot to set enable_plugin osprofiler in devstack :D
21:04 <harlowja> ya, sounds like someone over there is on some drugs
21:04 <harlowja> lol
21:04 *** pglbutt has joined #openstack-performance
21:04 <DinaBelova> after these two things were fixed, everything began to work ok :D
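For reference, enabling the profiler in devstack is a couple of lines in local.conf; the plugin URL and the HMAC variable name here are from memory, so double-check against the osprofiler devstack docs.

```ini
[[local|localrc]]
enable_plugin osprofiler https://git.openstack.org/openstack/osprofiler master
OSPROFILER_HMAC_KEYS=SECRET_KEY
```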
21:04 <harlowja> woot
21:05 *** pglass_ has joined #openstack-performance
21:05 * harlowja trying to figure out where the speaker profile editing page is
21:05 <harlowja> lol
21:05 <DinaBelova> it's somewhere in profiler
21:05 <DinaBelova> profile****
21:05 <harlowja> my profile is in osprofiler!!!
21:05 <harlowja> woah
21:05 <harlowja> i didn't know it had that feature
21:05 <DinaBelova> https://www.openstack.org/profile/speaker
21:05 <harlowja> ya, think i found it
21:05 <DinaBelova> :D
21:06 *** pglass has quit IRC
21:07 <harlowja> odd, it's saying bio is required and my bio is 'Make things more awesome!'
21:07 <harlowja> notcool
21:07 <harlowja> that is my bio
21:07 <harlowja> :(
21:08 *** pglbutt has quit IRC
21:09 <harlowja> hmmm, the bio change doesn't show up on speaker talks
21:09 <harlowja> durn
21:09 <harlowja> oh well
21:42 <DinaBelova> harlowja - probably it's frozen until the end of voting?
21:43 *** regXboi has quit IRC
21:43 <DinaBelova> @harlowja please take a look: https://review.openstack.org/#/c/278103/
21:45 <DinaBelova> boris-42 ^^ https://review.openstack.org/#/c/278103/
22:09 <harlowja> DinaBelova good idea
22:09 <harlowja> i'd really like to get people to use the metaclass osprofiler provides :-/
22:09 <harlowja> vs trace_cls
22:10 <DinaBelova> harlowja yeah, but the metaclass is good for a strict class hierarchy with parents and children. Sadly it does not work that well in the case of neutron mixins :D
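The two styles being compared, as I recall osprofiler's API at the time (verify against the osprofiler docs; the classes here are just placeholders):

```python
import six
from osprofiler import profiler

# Style 1: decorator - wraps the public methods of this one class.
@profiler.trace_cls("db", hide_args=True)
class Connection(object):
    def get_volume(self, volume_id):
        return volume_id

# Style 2: metaclass - subclasses of Manager inherit tracing automatically,
# which suits a strict parent/child hierarchy but fights mixin-heavy code
# like Neutron's, hence the preference for the decorator there.
@six.add_metaclass(profiler.TracedMeta)
class Manager(object):
    __trace_args__ = {"name": "rpc", "hide_args": True}

    def call(self, method):
        return method
```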
22:10 <harlowja> ya
22:11 <harlowja> gotta be a better way
22:11 <harlowja> i believe!
22:11 <harlowja> lol
22:11 <DinaBelova> :D
22:11 <DinaBelova> let's believe together, there's a chance our mental force will be enough to fix it :D
22:13 <harlowja> :)
22:13 *** pglass_ has quit IRC
22:13 <harlowja> +2
22:13 <harlowja> boris-42 u want to believe also?
22:13 <harlowja> lol
22:14 <harlowja> idea, very very hacky: replace type with our metaclass ...
22:15 <harlowja> osprofiler.patch()
22:15 <harlowja> monkey it
22:15 <harlowja> don't like it so much, ha
22:15 <harlowja> lol
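In Python 3 that "monkey it" idea could be prototyped roughly like this; osprofiler_patch and TracingMeta are hypothetical (osprofiler ships no such patch() helper), and it breaks any class that needs its own metaclass, which is part of why nobody liked it much:

```python
import builtins
import functools

class TracingMeta(type):
    """Wrap every public method of each newly defined class with a trace."""
    def __new__(mcs, name, bases, attrs):
        for key, value in list(attrs.items()):
            if callable(value) and not key.startswith("_"):
                attrs[key] = mcs._wrap(name, key, value)
        return super().__new__(mcs, name, bases, attrs)

    @staticmethod
    def _wrap(cls_name, meth_name, func):
        @functools.wraps(func)
        def traced(*args, **kwargs):
            print("trace: %s.%s" % (cls_name, meth_name))
            return func(*args, **kwargs)
        return traced

_orig_build_class = builtins.__build_class__

def osprofiler_patch():
    """Route default class creation through TracingMeta (hacky by design)."""
    def build(func, name, *bases, metaclass=None, **kwargs):
        if metaclass is None:  # leave classes with explicit metaclasses alone
            metaclass = TracingMeta
        return _orig_build_class(func, name, *bases, metaclass=metaclass, **kwargs)
    builtins.__build_class__ = build
```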
22:22 *** pglass has joined #openstack-performance
22:34 *** redixin has left #openstack-performance
22:39 *** dims_ has joined #openstack-performance
22:40 *** dims has quit IRC
22:55 *** cdent has quit IRC
22:57 *** dimtruck is now known as zz_dimtruck
23:00 *** jaypipes has quit IRC
23:22 *** mriedem has quit IRC
23:25 *** zz_dimtruck is now known as dimtruck
23:35 *** mdorman has quit IRC
23:48 *** mdorman has joined #openstack-performance
