Friday, 2019-01-04

*** jamesmcarthur has quit IRC00:02
*** lbragstad has quit IRC00:59
*** lbragstad has joined #openstack-tc01:08
*** lbragstad has quit IRC01:09
*** tosky has quit IRC01:17
*** jamesmcarthur has joined #openstack-tc01:32
*** jamesmcarthur has quit IRC03:21
*** jamesmcarthur has joined #openstack-tc03:24
*** jamesmcarthur has quit IRC03:31
*** jamesmcarthur has joined #openstack-tc04:00
*** dangtrinhnt_x has joined #openstack-tc04:02
*** dangtrinhnt_x has quit IRC04:09
*** jamesmcarthur has quit IRC04:47
*** jamesmcarthur has joined #openstack-tc04:47
*** diablo_rojo has quit IRC05:03
*** whoami-rajat has joined #openstack-tc05:22
*** jamesmcarthur has quit IRC05:27
*** jamesmcarthur has joined #openstack-tc06:50
*** jamesmcarthur has quit IRC06:55
*** jaosorior has joined #openstack-tc08:39
*** jpich has joined #openstack-tc08:44
*** tosky has joined #openstack-tc08:47
*** zaneb has quit IRC09:17
ttxOn yesterday's discussion... I agree that OpenStack's positioning as "providing infrastructure" means you can use it to power anything (and that is where its true value lies). But that value is hard to visualize, and having a few killer narrow use cases could definitely help09:34
ttxWe now have a few shops that deploy OpenStack solely as a backend for providing infrastructure to Zuul v309:35
ttxWe need more of that sort of thing, and we need to make the experience of deploying such narrow use cases VERY simple09:35
ttx(it is a lot easier than making the experience of deploying "OpenStack" very simple)09:36
*** ricolin has quit IRC09:41
*** jpich has quit IRC09:48
*** jpich has joined #openstack-tc09:48
*** e0ne has joined #openstack-tc09:52
*** cdent has joined #openstack-tc10:22
smcginnisI think that's key. If OpenStack can be just a piece of an overall solution, not the solution itself, then making a default deployment that covers 80% of needs would really grow adoption.10:25
cmurphyis kubernetes that?10:28
cdenta) "just a piece of an overall solution" is totes right, b) smcginnis never sleeps!10:28
smcginnisI would see that as one thing.10:29
smcginniscdent: Doesn't help when I fall asleep by 1900. Considering I just got back to my normal time zone a day and a half ago, I think I'm doing pretty good. :)10:30
ttxcmurphy: K8s sits slightly higher on the stack, and also encourages more focused/small/separate clusters10:30
ttxso it suffers a bit less from that "universality" issue -- but it's still affected by it10:30
smcginnisIt could make "I just need a cloud to deploy my k8s cluster on" a lot easier for those that don't need a lot.10:31
ttxBut being "a homegrown K8s substrate" could be one of those narrow use cases10:31
ttxwhat smcginnis said10:31
ttxwhich raises the question of whether we should encourage bottom-up (Magnum and its cluster-generation API) or top-down (the openstack cloud provider in K8s)10:33
cdentI think "k8s substrate" is something we (whoever that is) ought to pursue aggressively10:33
ttxcdent: I think we do, via the cloud provider effort mostly10:34
ttxLike having "openstack" in that list together with Azure and GKE is awesome10:34
ttx(https://cncf.ci/)10:35
cdentttx: yeah, but how close to that is "press a button and boom I've got one"? That's the experience people get out of something like minikube10:35
ttxwell it assumes a preexisting cloud, so I think we still have work to do to come up with "the minimal set of components you need to act as a K8s cloud provider"10:36
ttxheck I don't even know that answer10:37
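[A sketch for context, not from the discussion above: the legacy in-tree OpenStack provider of that era needed little more than keystone, nova, and neutron behind a set of credentials in a cloud config. Key names are recalled from the k8s docs of the time and all values are placeholders, so treat this as an assumption-laden illustration rather than a verified minimum:]

    # write a minimal cloud config for the legacy in-tree openstack provider
    # (key names as recalled from era docs; all values are placeholders)
    cat > /etc/kubernetes/cloud.conf <<'EOF'
    [Global]
    auth-url=https://keystone.example.com:5000/v3
    username=k8s
    password=secret
    tenant-name=k8s-project
    domain-name=Default
    region=RegionOne
    EOF
    # then point the kubelet and kube-controller-manager at it:
    #   --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf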
smcginnisI think dims was talking, when we were in Seattle, about an effort being started to enable deploying a minimal OpenStack cloud that can host a k8s cluster.10:38
ttxand then facilitate a one-node deployment of that which can then be extended10:38
ttxlike do we need a ministack (think devstack, centered on specific use case needs, that you can evolve into something durable and use in prod)?10:40
smcginnisGiven the number of times I've seen people try to use devstack to set up a long running deployment, I'd say there is probably a need for something like that.10:41
ttxIt feels like some of the container-driven deployment toolkits would have the flexibility to go from a one-node quick setup to a sustainable multi-node one10:42
ttxif the goal is to serve as a substrate for Zuul or a single Kubernetes cluster, you don't need hundreds of nodes either10:42
*** jpich has quit IRC10:43
ttxtrying/comparing/refreshing my knowledge on the various openstack deploy toolkits is on my 2019 list10:43
ttxsee which one I would place my bet on as the "simple" option10:44
EmilienMI'm not aware of a "simple" option to deploy OpenStack ;-)10:47
EmilienM(and hi)10:47
cdent'ministack' would be great10:47
cdenteven greater would be some people with the time and energy to make one10:47
EmilienMthe problem with these things is that it always starts small and gets (too) big because $reasons10:47
EmilienMonce ministack works, people will start to want it to be multinode and such10:48
EmilienMonce multinode works, they'll want loadbalancing, etc10:48
EmilienMthat's where it goes south :)10:49
* EmilienM bbl, lunch time here10:49
*** jpich has joined #openstack-tc10:49
EmilienMFWIW, this is our attempt to make TripleO deployed by one command on a fresh system:10:49
EmilienMhttps://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html10:49
EmilienMfeedback welcome10:49
smcginnisI think it's fine if folks want to add multinode. But doing that should be behind a flag they set, one that doesn't impact those who don't need it.10:51
smcginnisProtect the simple case, enable tweaking settings for the more complicated cases.10:52
cdentdoesn't osa have an all in one mode?10:52
smcginnisYep - https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html10:53
cdentyeah, just found that too10:53
cdenthmm, more steps than I was hoping for10:54
smcginnisYep10:54
cdentnot a ton of steps10:54
cdentbut was really looking for one call10:54
smcginnisI need to go back and try again, but when I tried to use that to set up my lab I was hitting various problems.10:54
cdentand I can't understand why it takes so long10:55
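[For reference, the AIO quickstart linked above boiled down to roughly the steps below — repo URL and playbook names as the 2019 docs had them, so a sketch rather than a verified recipe. The last three playbooks are where the hours go:]

    # OpenStack-Ansible all-in-one on a fresh host, run as root (sketch)
    git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    scripts/bootstrap-ansible.sh     # install ansible and required roles
    scripts/bootstrap-aio.sh         # prepare the host for an AIO build
    cd playbooks
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml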
cdent /o\10:58
cdentI'm exhausted by the fact that so many things that seemed wrong 4 years ago are still the same kind of wrong, but somewhere else.10:58
cdentAnd there's this creeping thing where if you try to hold a line on "doing better" it gets slowly clobbered by "that's the way we do things". For example I'd tried to make it normal and okay in placement that a fresh install didn't do database migrations, to save a little bit of time11:00
cdentOr removing moving parts and having fewer steps in the install11:01
smcginnis"removing moving parts and having fewer steps in install" is a great thing to do, IMO.11:01
cdentyet we all agreed to make upgrade checks a community goal11:02
cdentit's a nice safety valve, but it is yet one more thing in the process11:02
cdentwe could have built it in (somehow) instead11:03
smcginnisBut the upgrade checks don't have to be run to do an upgrade, right? They just give the person or tool doing the upgrade something they can run to check for things they should be aware of.11:13
cdentyes, it was probably a bad example, because it is mostly a good thing, but on the other hand it is yet one more thing to be aware of11:13
cdentI recognize that openstack is a complex system so there's lots to be aware of, but if you spread that awareness over N(large) different projects... I'd not enjoy managing openstack...11:14
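[Concretely, the community goal standardized a $PROJECT-status upgrade check command modeled on nova's preexisting one; it is advisory, signaling via exit code rather than gating the upgrade. A sketch of that one extra runbook step, using nova's real command:]

    # advisory pre-upgrade check: exit 0 = ok, non-zero = read the report
    if nova-status upgrade check; then
        echo "checks passed, proceed with the upgrade"
    else
        echo "warnings or failures reported, review before upgrading"
    fi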
*** cdent has quit IRC11:30
*** dtantsur|afk is now known as dtantsur11:32
*** cdent has joined #openstack-tc12:46
*** ssbarnea has quit IRC12:51
*** lbragstad has joined #openstack-tc14:08
*** mriedem has joined #openstack-tc14:30
*** EmilienM is now known as EvilienM14:34
*** cdent has quit IRC14:58
*** cdent has joined #openstack-tc15:11
TheJuliaI've heard that echoed time and time again. The key for people and their adoption is solving a real problem... and it does take growth of expertise in areas where specific things are needed, like highly specialized networking. We've gotten stuck in this model that an operator will install the software, and manage the software, not let the software manage the system. In a sense we have database15:25
TheJuliamigrations that live outside the daemon that uses the database in normal operation, largely because we built the interaction models to have very fixed behaviors and patterns for an operator... which in a sense is great in a regulated environment, and is a nightmare when you're just wanting something to "work".15:25
cdentyes15:27
*** lbragstad has quit IRC15:36
*** EvilienM is now known as EmilienM15:37
*** lbragstad has joined #openstack-tc15:41
*** cdent has quit IRC15:59
*** diablo_rojo has joined #openstack-tc16:00
*** jamesmcarthur has joined #openstack-tc16:07
*** e0ne has quit IRC16:36
*** cdent has joined #openstack-tc16:43
fungito rephrase (making sure i understand what you're saying) openstack evolved in an environment where explicit manual operation was preferred over magic automation of maintenance tasks?16:51
scasdeclarative over imperative is what it seems like to me from a deploy aspect17:13
scaslest we trust those pesky computers /too/ much and have an even worse version of skynet17:13
scasor something equally ranty17:13
mriedemkeep in mind that rax public cloud, back in the day, specifically didn't want projects doing data migrations during the schema migrations (db sync) because of the downtime involved in migrating hundreds of thousands of records in a production database17:14
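[That operational pressure is why nova split schema changes from data migrations; roughly, using real nova-manage commands, with an illustrative batch size:]

    # schema-only migration at upgrade time: fast, little downtime
    nova-manage db sync
    # move the data later, in batches, while services keep running;
    # re-run until it reports zero remaining rows (--max-count is illustrative)
    nova-manage db online_data_migrations --max-count 1000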
scashaving served time at rax, pre-openstack, i know that there was a notion of not letting machines do /too/ much unattended, whether or not it was actively fostered. it seemed that optimizing for a declarative model was preferred over a sort of self-service model17:17
scasfor that to make it into openstack is not the most surprising17:18
scasbut, scas, what about the whole self-service thing that openstack provides?17:21
scasthe caveat seems to be that at the operator level, it's bring your own shovels and pickaxes17:22
scasthe cycle of ENOTIME enforces this, be it through attrition, lack of interest, or conflict of interest. over time, people don't like the idea of having to build their own toolbox; they'd rather go find something shrinkwrapped that sort of mostly does what needs to be done17:24
clarkbfwiw (and again, huge bias from debugging openstack all the time) I run a lot of software, and running infracloud was relatively easy compared to other things17:25
clarkbthe trouble with our openstack cloud wasn't openstack it was the data center flooding17:25
clarkband our hosting provider's switches turning into hubs17:26
clarkband the hard drives and cpus and motherboards and memory on our hardware dying17:26
scaschef openstack's standard deployment is getting as close to push-button as possible. documentation would close the gap, were it not for the cycle of ENOTIME pushing things down the list17:26
*** whoami-rajat has quit IRC17:27
clarkband I think it's those costs that people underestimate. When you go from small fiefdoms mostly managing themselves in your datacenter to trying to provide broad infrastructure, all of a sudden you are responsible for that undersized switch flooding its cam table and the hard drives on hypervisors dying17:27
clarkboperationally the cost goes up, but I'm not sure it's entirely due to the software17:27
scasmy penchant for non-self-promotion is not helping that much, either17:28
clarkbthough in the flooding case you lose regardless of how you are organized as a host17:28
scasthe main theme that i've run into in my years is that it's not about how much you prepare, but how well you can recover when things /do/ go wrong. preparing can help one see the patterns, but being in the moment is nothing like drills17:30
scaseven in the public cloud side, i've spoken with individuals about their cloud strategy. their eyes widen when i ask what they would do if 'the cloud' went down17:31
scasthe verbal response is usually a measured one, but the initial facial response is pretty much the same unless you think about this kind of stuff for morbid 'fun'17:32
scasi've seen it up and down the management stack17:33
*** e0ne has joined #openstack-tc17:43
*** e0ne has quit IRC17:44
*** jpich has quit IRC17:49
fungireminds me of when, at a previous employer providing colocation and dedicated hosting services, we had an automatic transfer switch failure during a power incident at our oldest facility and it went dark for 5 minutes. i and other staff spent the better part of two mostly-sleepless days helping customers get their systems back online due to resulting filesystem/database corruption and the like.17:52
fungicustomers thanked us up and down and seemed generally accepting of the situation, but our primary competitor in a neighboring city ran scathing ads implying we were an untrustworthy fly-by-night operation. then a month later their flagship facility suffered a power outage that lasted several days (and nearly a week for some parts of that facility), and at least a third of their customer base walked out17:52
fungirather than wait for it to get fixed17:52
bsilvermanK8s and Docker are the hot topic right now. I have a bunch of OpenStack deals in the pipeline and most revolve around how they are going to do hybrid cloud using containers. Some are OpenShift, some aren't. Each OpenStack distro now has its own opinionated way of solving this challenge and most aren't aligned very closely on toolsets and methodologies.17:59
bsilvermanFor most of these deals, discussions around OpenStack were a software drag that evolved from showing companies that orchestrating bare metal for containers and k8s isn't easy, and that they'd need a private cloud with capabilities similar to public cloud for the rest of their persistent workloads.18:02
bsilvermanMeanwhile, the major telcos are still moving forward with their implementations using NFV/Edge as their main use cases.18:05
*** tosky has quit IRC18:09
bsilvermanHonestly, I was surprised at the number of people I spoke to at Kubecon who didn't even know what OpenStack was. I see that as a major failure of the community and an opportunity to get more people familiar with the software by introducing them to OpenStack as an infrastructure orchestrator for their use case.18:09
*** jamesmcarthur has quit IRC18:10
* bsilverman fades slowly off into the distance.18:10
*** jamesmcarthur has joined #openstack-tc18:10
bsilvermano/18:10
*** jamesmcarthur has quit IRC18:11
*** dtantsur is now known as dtantsur|afk18:11
*** jamesmcarthur has joined #openstack-tc18:11
scasin some of the chat media that i lurk in, just saying 'openstack' can send people frothy18:12
scasparticularly the home power user contingent18:12
clarkbscas: are there concrete issues, or is it that they tried it 5 years ago, got mad, and forever hold on to that opinion?18:13
clarkb(I do think we don't give openstack enough credit for the improvements that have been made)18:14
scasclarkb: a little of both18:15
scason one end of the spectrum, you've got the old crusty people who tried mashing it together manually and maybe had a bad time with it in the upcycle of hype. on the opposite end, you have someone that's just learning about it, sees that tripleo is the dominant tool in terms of raw numbers, gets frustrated, and forever expounds on the shittiness of their experience18:16
clarkbya that second individual is why I think reevaluating our defaults for new users is worthwhile18:17
clarkbshort of building a time machine, fixing the first individual's experience is less straightforward :)18:17
scasaye18:18
cdent$IF_ONLY_TIMEMACHINE18:18
scasfor the first case, it's more a matter of changing set opinions. in the latter, they have no preconceptions of it, good or bad. they just heard it was 'cool'18:18
scasthe former is difficult to overcome, unless such a dissenter revisits that 'forever' decision18:19
scaspost-berlin, one person showed up in #openstack-chef who is a relatively new user. their opinion and perspective were enlightening in a good way, to me at least18:21
scasat the end of the day, it's down to the implementer to look at things through an unbiased lens first and foremost18:22
scasi'm simplifying for the sake of brevity for the medium18:23
*** jamesmcarthur has quit IRC18:46
*** jamesmcarthur has joined #openstack-tc18:47
*** jamesmcarthur has quit IRC18:51
*** jamesmcarthur has joined #openstack-tc18:55
*** whoami-rajat has joined #openstack-tc18:57
*** jamesmcarthur has quit IRC19:11
*** jamesmcarthur has joined #openstack-tc19:17
*** jamesmcarthur has quit IRC19:43
*** jamesmcarthur has joined #openstack-tc19:43
*** jamesmcarthur has quit IRC19:46
*** jamesmcarthur has joined #openstack-tc19:47
*** jamesmcarthur has quit IRC19:52
cdentreturn of mid-cycles sighting!19:58
*** jaypipes has quit IRC20:04
smcginnisCinder discussion started soon after the PTG change announcement.20:05
cdentmakes sense20:07
smcginnisIt will be interesting to see how it works now. They were always hugely productive in the pre-PTG era, but quite a bit has changed since then.20:08
smcginnisBut I think it always helped to have high-bandwidth face-to-face events at regular intervals. I hope it still has enough critical mass to keep working well.20:09
clarkbsmcginnis: do you anticipate cinder foregoing ptg room time as a result?20:10
clarkbor doing both?20:10
cdenti like the idea of them, I'm not sure how many companies will be eager to pay20:10
smcginnisI expect doing both.20:10
smcginnisThe midcycle is really to fill that gap between Summits now. (again)20:10
*** cdent has quit IRC20:13
TheJuliayay for getting distracted by other things20:32
TheJuliafungi: Essentially what scas was saying is what I was attempting to convey. Because of enotime and all of the other various issues that go into it, we've never gotten around to improving the model of interaction. I fear we've only complicated it in the grand scheme of the universe.20:36
TheJuliaclarkb: That is a downright frightening sequence of events...20:37
clarkbTheJulia: that the datacenter flooded?20:38
clarkbya that resulted in losing all the rails for our servers somehow when they moved them20:38
TheJuliascas: I would argue that thinking of the morbid kind of stuff is an excellent skill.... although it severely impacts things like... bathroom remodels.20:38
clarkbthen stacked them like pizza boxes on the floor20:38
TheJuliaclarkb: yeah20:38
clarkbalso our switches20:39
clarkbwhich is how we ended up on shared switch gear that turned into hubs when the cam tables filled20:39
TheJuliafungi: was it one of the static switches locking over to one side?20:40
TheJuliabsilverman: I think it is highly dependent upon the circle in which one keeps themselves, which is hard to break out of, and then we reach the ENOTIME issue spoken of previously. :(20:42
TheJuliaclarkb: that is... horrifying.20:43
clarkbOn the one hand it was donated hosting which was nice. On the other it was really difficult to deal with the curveballs we were thrown :)20:43
fungiTheJulia: the facility in question had a spof ats, and it fused, refusing to trip over to the live one of the two feeds we had coming into the facility. the upses were also in a sore state, and when the load all ended up on one side it decided it had insufficient capacity, gave up, and failed over to the second bank of upses, which then had a similar fit... it was really not a good time for us20:44
TheJuliasmcginnis: I anticipate few teams will be able to coalesce to a single location for a mid-cycle and that most teams will end up having to do high bandwidth check-ins electronically.20:45
fungion a positive note, it at least got the owner to agree to start properly servicing batteries in the ups again :/20:45
TheJulia#thisiswhycloudsarehard20:45
TheJuliaThen it kind of all goes to the sheep vs cattle mentality conundrum20:46
TheJuliaerr20:46
TheJuliapets vs cattle20:46
smcginnisTheJulia: Yeah, I'm sure it won't work for many to do face to face. Hangouts and the like are probably going to be more common.20:46
* TheJulia is braindead20:46
* smcginnis would like a pet sheep20:47
TheJuliaI actually met someone who had some... they were unbelievably cute.20:47
smcginnisTotally. :)20:47
* dhellmann wonders if other cities are out of tidy cat or if this is a localized emergency20:53
TheJuliadhellmann: I am avoiding running to the pet store since I already went to the vet this morning20:54
TheJuliaI can report in from california tomorrow! :)20:54
dhellmann:-)20:54
dhellmannI'll tell the cats to hold it20:54
TheJuliaI'm not sure that will work20:54
dhellmannno, not likely20:55
TheJuliaFeline Pine multi-cat clumping or... the walnut shell multi-cat works really well20:55
dhellmannI'm trying feline pine with them. Our previous cat would only use that, but these 2 didn't seem to like it as kittens.20:56
TheJulia:(20:57
dhellmannthey're ~7 now, so maybe they've forgotten20:57
dhellmannif that doesn't work I'll see if I can find some cheap non-clumping clay. Theresa doesn't like to use the clumping stuff because they track it all through the house20:58
TheJuliaThat is where a robotic vacuum helps....20:58
TheJuliaset it on a schedule, keep it near the boxes.... presto!20:58
TheJuliano episodes of cats riding the vacuum around either... sadly21:00
clarkbI just got our vacuum running again21:00
clarkbtoddlers were not amused21:00
clarkbbut I've got them saying robot in the original asimov pronunciation21:01
clarkbso I'll call that a win21:01
dhellmannclarkb : what is that, "rowbut"?21:02
clarkbya21:02
dhellmannTheJulia : unfortunately our vac is dougic and not robotic21:03
TheJulia:(21:10
fungibased on advanced doug-powered technology21:15
*** whoami-rajat has quit IRC22:37
*** jaosorior has quit IRC22:42
* TheJulia tries to make a joke, and fails to find the words after doing spec reviews23:35
*** openstack has joined #openstack-tc23:45
*** ChanServ sets mode: +o openstack23:45

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!