Hacker News

Containers vs. Zones vs. Jails vs. VMs (2017) (blog.jessfraz.com)

351 points | posted by gullyfur 2 months ago | 133 comments
outworlder said 2 months ago:

> A “container” is just a term people use to describe a combination of Linux namespaces and cgroups. Linux namespaces and cgroups ARE first class objects. NOT containers.

Amen.

Somewhat tangential note: most developers I have met do not understand what a 'container' is. There's an aura of magic and mystique around them. And a heavy emphasis on Docker.

A sizable fraction will be concerned about 'container overhead' (and "scalability issues") when asked to move workloads to containers. They are usually not able to explain what the overhead would be, or what could potentially be causing it. No mention of storage or how networking would be impacted, just CPU. That's usually said without measuring the actual performance first.

When I press further, what I most commonly get is the sense that they believe that containers are "like VMs, but lighter" (also, I've been told that, literally, a few times, especially when interviewing candidates). To this day, I've heard CGroups being mentioned only once.

I wonder if I'm stuck in the wrong bubble, or if this is widespread.

CBLT said 2 months ago:

> To this day, I've heard CGroups being mentioned only once.

See https://www.kernel.org/doc/Documentation/cgroup-v2.txt

> "cgroup" stands for "control group" and is never capitalized. The singular form is used to designate the whole feature and also as a qualifier as in "cgroup controllers". When explicitly referring to multiple individual control groups, the plural form "cgroups" is used.

To this day, I've heard cgroup mentioned only once...

To put forth a more substantive argument, everybody has a layer of abstraction they do not peek under. You interviewed people that didn't peek under container. You went a layer deeper, but never peeked at the source tree to learn what cgroup really is. Does it really feel that much better to be one level above others?

xelxebar said 2 months ago:

> To put forth a more substantive argument, everybody has a layer of abstraction they do not peek under.

Sure. Though it's reasonable to want your level N developers to have some idea of what goes on at levels N-1 and perhaps N-2, cf. Law of Leaky Abstractions etc. It's similar to wanting your developers to be aware of their users' needs, which are level N+1 concerns.

pmichaud said 2 months ago:

Yeah, I wonder if there's an "optimal target" for the number of layers up and down you'd ideally be aware of. It has to be at least yours, and the ones immediately above and below, but I see innovation coming from people with unusually keen insight into layers further away -- e.g. people making brilliant architectural decisions because they really, really know what the consumers of an API need and how those people think about the domain. Or vice versa, someone making something radically better or faster in a web app because they really get how the Linux kernel is implemented.

It seems like cases where that deep knowledge is an advantage are rare but also very high value. I wonder how the EV pans out, both for individuals and orgs.

thaumasiotes said 2 months ago:

> The singular form is used to designate the whole feature and also as a qualifier as in "cgroup controllers". When explicitly referring to multiple individual control groups, the plural form "cgroups" is used.

They're free to say this, but since it violates the rules of the language they're never going to get any significant level of compliance.

the8472 said 2 months ago:

> To put forth a more substantive argument, everybody has a layer of abstraction they do not peek under.

I hope you meant "have not peeked under, so far". Any obscure problem will take you down the rabbit hole. If you just stop looking, then how can you solve problems?

kqr said 2 months ago:

By working around them, which is sometimes the smart decision. Hard to know ahead of time when that is the case, though...

Harlekuin said 2 months ago:

> everybody has a layer of abstraction they do not peek under

Reminds me of this xkcd: https://xkcd.com/435/

The beauty of programming is abstraction - you write a function to do one thing, do it well, and then you can abstract that concept away and reuse that abstraction. Although it's only an abstraction - a container is "like" a lightweight VM, and you can use it like that until it doesn't act like a lightweight VM. In which case, you have to dabble one layer deeper just to understand enough to know why the abstraction of your layer isn't a perfect analogy.

In which case, if you're looking for an expert in layer n, then basic knowledge of layer n - 1 might be a decent proxy for expertise

ZoomZoomZoom said 2 months ago:

>Does it really feel that much better to be one level above others?

Yes? Because levels are finite and quantifiable.

jdmichal said 2 months ago:

I mean, maybe technically, but not practically. Knowing how your computer works requires knowledge of semiconductors, p- and n-type doping, quantum physics, etc.

So yes, maybe you can enumerate the levels, but at some point they become not useful to discuss. The same way that discussing NPN transistors is not really useful to discussing containers and VMs.

matharmin said 2 months ago:

I feel nowadays containers generally refer to the concept, and cgroups and namespaces are the implementation details of a specific container runtime. These are very important implementation details for security and performance, but it doesn't fundamentally impact how you structure your containerized application.

You can take the same container image, and run using Docker, Firecracker, gVisor, or many other container runtimes. Some of them are exactly "like a VM, but lighter".
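To make "swap the runtime, keep the image" concrete: Docker lets you register alternative OCI runtimes in its daemon.json. A hedged sketch for gVisor's runsc -- the install path is an assumption about where the binary lives on your machine:

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

After restarting the daemon, `docker run --runtime=runsc <image>` runs the exact same image under gVisor's user-space kernel instead of the default runc, with no change to the image or compose files.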

cmckn said 2 months ago:

Agreed. The post feels a bit pedantic; I don't know any dev doing "cool things" with the underlying namespaces/cgroups. They're just using Docker. De-mystifying containers has value, but so does the abstraction.

mav3rick said 2 months ago:

The abstraction muddles the performance, security etc. impact of these two models. Not knowing them is going to be bad in the long run. Not everyone is a web dev.

moomin said 2 months ago:

Plenty of them using those features, if they’re using Kubernetes, Docker Swarm or especially Istio.

They might not know they are, but that's beside the point.

jdmichal said 2 months ago:

> They might not know they are, but that's beside the point.

I actually thought that was exactly the point here...

bregma said 2 months ago:

Your personal ignorance makes a poor argument.

Not only do I know several devs doing cool things with namespaces and cgroup, but I myself have done cool things with namespaces and cgroups.

I played with Docker briefly, but it has no real practical application in my line of work.

mav3rick said 2 months ago:

And I'm a dev "doing cool things" with namespaces / cgroups.

mav3rick said 2 months ago:

Please don't propagate this. Running in a hypervisor with a possible different kernel vs running on the same kernel in the same ring as the host are two very different things. Implications of these are very different.
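A quick way to observe the "same kernel, same ring" point -- a minimal Python sketch using only the standard library (nothing container-specific):

```python
import platform

def kernel_release() -> str:
    """The kernel version visible to this process (same data as `uname -r`)."""
    return platform.release()

# Run on the host and inside a namespace-based container, this prints the
# SAME version: namespaces isolate the process's view of PIDs, mounts, and
# the network, but there is only one kernel underneath. Under a hypervisor-
# backed runtime (KVM, Firecracker), the guest prints its own kernel version.
print(kernel_release())
```

For example, `docker run --rm alpine uname -r` on a Linux host reports the host's kernel version, not anything Alpine-specific, because the Alpine image ships no kernel at all.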

imtringued said 2 months ago:

From the perspective of the developer there is no difference. They just configure Kubernetes or Docker to use a different container runtime and keep using the same compose files, etc.

simonh said 2 months ago:

I think the point the OP was making is that yes, as you say, developers can use containers without knowing these differences, but there are actually real and important differences, and maybe it would be better if more devs were aware of them.

mav3rick said 2 months ago:

Yes exactly.

pydry said 2 months ago:

I've had bugs when doing webdev stuff in Docker that didn't crop up under one kernel but did crop up when using another. It's rare, but it happens.

mulmen said 2 months ago:

I think it is widespread because containers are (seemingly) marketed as being some kind of magic. The impression I get is that the benefit to containers is that you don't have to think about them. This may be more a product of Docker but I think containers and Docker have become synonymous.

I'm not sure this is a fair comparison but that is my impression. I could be in a bubble too.

nurpax said 2 months ago:

I’d like to think so too. It’s hard to actually understand how Docker works by reading their documentation.

Everything looks so simple.. and it magically runs on macOS and Windows too. No wonder people think it’s some sort of a VM.

fapjacks said 2 months ago:

Technically, containers running on macOS and Windows are a (Linux) VM under the hood. Now, Microsoft put in a lot of work to support something like namespaces in the Windows kernel, so that it's now possible to run "Windows-native containers" for Windows software. But both Windows and macOS still use a Linux VM under the hood to run Linux containers. That being said, I'm not familiar with the details of Windows containers because I don't use any Windows software and therefore don't have a reason to run any Windows containers. If you are interested in how containers work at a lower level, you might get more out of the documentation for containerd and runc, which are the underlying container runtimes.

ignoramous said 2 months ago:

> No wonder people think it’s some sort of a VM.

From what I read, if one packages a Linux container for Windows, Docker then runs it in Hyper-V?

https://stackoverflow.com/questions/52164563/what-is-the-doc...

takeda said 2 months ago:

It is a product of Docker. If you deployed applications by using namespaces and cgroups directly, it is very likely you would see things the same way the author does.

fluffything said 2 months ago:

I think the way you are asking candidates the question might be unfortunate. A FreeBSD, macOS, or Windows dev who knows their OS might never tell you "namespaces + cgroups".

Hell, if you try to explain containers to 99% of the world's programmers by saying "namespaces + cgroups", I'd bet you that 0% of them will understand what you mean.

Instead, if you tell them it's "like a VM, but faster, because it 'reuses' the host's kernel", you might be able to reach a substantial number of them, and that level of abstraction would be enough for most of them to do something useful with containers.

Maybe the question you should be asking your candidates is: "How would you implement a Docker-like container app on Linux?" That question specifically constrains the level of abstraction of the answer to at least one layer below Docker, and also specifies that you are interested in hearing the details for Linux.
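For what it's worth, the first step of a decent answer to that question can be sketched in a few lines of Python. This is an illustration, not a runtime: the function names are mine, the flag values come from <linux/sched.h>, and actually calling unshare(2) for most namespace types requires root:

```python
import ctypes
import os

# Linux clone(2)/unshare(2) namespace flags, from <linux/sched.h>.
CLONE_NEWNS  = 0x00020000  # mount namespace
CLONE_NEWUTS = 0x04000000  # hostname / domainname
CLONE_NEWIPC = 0x08000000  # SysV IPC, POSIX message queues
CLONE_NEWPID = 0x20000000  # PID numbering
CLONE_NEWNET = 0x40000000  # network stack

NAMESPACE_FLAGS = {
    "mnt": CLONE_NEWNS,
    "uts": CLONE_NEWUTS,
    "ipc": CLONE_NEWIPC,
    "pid": CLONE_NEWPID,
    "net": CLONE_NEWNET,
}

def flags_for(namespaces):
    """OR together the clone flags for the requested namespaces."""
    mask = 0
    for ns in namespaces:
        mask |= NAMESPACE_FLAGS[ns]
    return mask

def enter_namespaces(namespaces):
    """Detach this process into fresh namespaces via unshare(2).

    Needs CAP_SYS_ADMIN (i.e. root) for most namespace types, so this is
    illustrative rather than something to run unprivileged. A real runtime
    would follow up by pivoting into a new root filesystem and writing the
    process into a cgroup under /sys/fs/cgroup for resource limits.
    """
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.unshare(flags_for(namespaces)) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
```

Nothing about this is Docker; it is all kernel API, which is rather the point of the question.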

kqr said 2 months ago:

> A “container” is just a term people use to describe a combination of Linux namespaces and cgroups. Linux namespaces and cgroups ARE first class objects. NOT containers.

Wow.

I have always wondered in which cases one would use "containers". I have asked so many Docker enthusiasts, "Why would I use containers instead of the strong process separation functionality that's built into the operating system, like users, cgroups, SELinux, etc?"

The answer has freakishly invariably been "Containers are lighter than virtual machines. Virtual machines use up all your resources. Containers can run just a single application, where virtual machines must emulate a whole operating system."

Now you're probably going, "How is that an answer to the question?"

And I know! It's not. These Docker enthusiasts have been impossible to get an answer out of. They have appeared like they're just parroting whatever they've heard from someone else as long as it seemed vaguely related to the question.

Now that I finally have an answer to my original question, it all makes sense again. And I'm inclined to agree that if you're stuck in the wrong bubble, we're both stuck in the same bubble.

Jnr said 2 months ago:

I started using Linux containers a long time ago, and I have used them to achieve compatibility and ease of use.

Before LXC was introduced, it was somewhat painful to manage multiple environments using chroot, managing networking, etc.

But running the same software in a single environment wasn't always easy. You had to take care of different software versions. And it wasn't uncommon for things to break frequently because of that.

While things like python venv, ruby rvm, etc. helped dealing with it, there was no universal tool for whole filesystem environments besides virtualization.

When LXC came out, I started using it for everything. Nowadays I use LXD and sometimes Docker, and it is so nice and requires minimal effort. I know that without those tools it would be very inconvenient to manage my own servers. I have separate auto-updating containers for everything, and if one thing breaks, it doesn't take everything down with it. And when everything is contained and each system has a minimal set of packages set up, over the years rarely anything ever breaks.

And let's not forget that these Linux features also enabled the universal packages (flatpaks, snaps, etc.) which make it easier for Desktop users to get up to date software easily.

Of course I know that it is not virtualization. But why do people say "containers are not really containers"? It still contains different environments. No one said it is about containing memory or something else.

marcus_holmes said 2 months ago:

I figure Docker wraps stuff up into a nice "thing" that you can use, with documentation, a logo, and mindspace. You can put "Docker" on your CV and there's some hope that a recruiter will know what that means.

tyingq said 2 months ago:

Have them look at bocker (docker-like in ~100 lines of bash).

It makes it very clear what Docker is, and isn't.

https://github.com/p8952/bocker

Specifically, the bocker_run function: https://github.com/p8952/bocker/blob/master/bocker#L61

hadsed said 2 months ago:

So what's the gap here with Docker? What incorrect assumptions would I make from taking this as a model for containers, if anyone knows?

cle said 2 months ago:

The gap is that it skips half of what makes Docker powerful, something that isn't really discussed in this conversation at all either: distribution.

E.g. see this line of code: https://github.com/p8952/bocker/blob/master/bocker#L25

"But I thought bocker implements Docker?" It doesn't, it only attempts to implement the "Docker daemon" part, and piggy-backs on Docker registries for image distribution. This is a huge part of the power of Docker, and why Docker isn't "just cgroups and namespaces". Cgroups and namespaces are a critical element, but Docker is much more than that too--it's also a set of standards for distributing and administering configuration+data for cgroups and namespaces.
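To make "distribution" concrete: the registry side is just an HTTP API (the Docker Registry HTTP API v2, later standardized as the OCI Distribution Spec). A small sketch of how a client names a manifest -- the helper function is mine; only the URL shape comes from the spec:

```python
def manifest_url(registry: str, repository: str, reference: str) -> str:
    """Where an image manifest lives under the Registry HTTP API v2:
    GET /v2/<name>/manifests/<reference>, where <reference> is a tag
    or a content-addressed sha256 digest."""
    return f"https://{registry}/v2/{repository}/manifests/{reference}"

# Docker Hub's "ubuntu:latest" manifest, for example:
print(manifest_url("registry-1.docker.io", "library/ubuntu", "latest"))
```

Actually fetching it also requires a bearer token from the registry's auth endpoint, and the manifest in turn lists content-addressed layer blobs to pull. That naming and content-addressing scheme is exactly the part bocker piggy-backs on rather than implements.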

barbecue_sauce said 2 months ago:

So many people don't seem to understand that artifact distribution is the compelling feature of Docker.

imtringued said 2 months ago:

And if you want to have artifacts, you also need a way to build them... Running containers is just one third of the features that Docker offers.

gwd said 2 months ago:

Exactly. Containers were around for at least a decade before Docker, just like mp3's were around for decades before the iPod. Docker's simple, efficient way to package containers was the key to their explosion.

irishsultan said 2 months ago:

> , just like mp3's were around for decades before the iPod

That sounded wrong intuitively, so I decided to look it up: Wikipedia claims that mp3 was initially released in 1993, and the iPod was initially released in 2001, so not decades and not even a full decade.

gwd said 2 months ago:

Thanks for the correction. For some reason I thought that the patents on mp3 compression were issued in the 80's, but there's nothing in the Wikipedia article which specifically says that.

The point stands though, that Docker didn't invent containers by a long shot; but they did make them massively more useful.

tyingq said 2 months ago:

I'd argue that bocker makes that more clear. It takes away any idea that the magic is in the containers. To the degree that it is, Linux provides that...not Docker.

It helps focus on what Docker does provide, as you mention.

cle said 2 months ago:

Right, that's a reasonable claim. bocker makes false claims, though, which causes a bunch of confusion (which seems to be the case for the person I was replying to).

"Docker implemented in around 100 lines of bash."

This is simply not true.

erjiang said 2 months ago:

This is true for many new waves of popular technologies.

1) A new technology or method becomes popular.
2) Developers find new advantages in using the technology.
3) Understanding of the tech and the original advantage is somewhat lost.

For example: containers are now widely used as part of a scriptable application build process, e.g. the Dockerfile. There are probably many developers out there who care about this and not about how containers are run and how they interact with the kernel. And for their use cases, that is probably the thing that matters most anyways.

fiddlerwoaroof said 2 months ago:

A downside is that people feel like they have to bundle an entire Linux rootfs because they think of a container as a lightweight VM: if they thought of it as an OS process running inside various namespaces, they might be more inclined to only ship what they actually need.
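A common way to act on that mindset is a multi-stage build that ships only the process's binary. A minimal sketch, assuming a statically linked Go program (the paths and names are placeholders):

```dockerfile
# Build stage: full toolchain, discarded after the build.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: no distro rootfs at all. The image is one static binary,
# because a container is a process, not a machine that needs an OS install.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image is a few megabytes instead of a few hundred, with correspondingly less to patch and scan.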

pojzon said 2 months ago:

This is widespread, unfortunately. Developers have changed into "users" and no longer pursue the details of solutions. I'm not gonna say it's the dominant behaviour in the field right now, but I see it more and more often at various experience levels.

jagged-chisel said 2 months ago:

An experienced software engineer (yeah, developer) has experience engineering software. It's been less than 10 years since the advent of containerized deployments, and the space has been fraught with change nearly on par with the front-end JavaScript ecosystem. Might as well just stick to writing code. OK, that's the perception of my own peers, but I assume it scales.

DevOps is a recent advent, too, and sounds to me like it should be populated with folks who can participate in development and operations. Most developers I’ve ever known aren’t interested in operations.

dielectrikboog said 2 months ago:

SRE was born of putting software engineers to work building operational software and automation tailored to an organization and application. In contrast, no matter what anyone says, DevOps was objectively born of replacing the operations discipline and career track with a poorly-understood tool economy and ongoing opex to a cloud provider. As you say, typical JavaScript engineers can’t be bothered to understand network capacity planning yet feel they are more qualified to take their application to production by deferring all decisions to cloud providers. Who all employ SREs/PEs, not DevOps Engineers, by the way, and there is a big distinction.

We have people who can handle the operational stuff. They’re called systems administrators, network engineers, yes, even SREs, and other folks who are really good at understanding how computers and the Internet actually work, and a webdev bootcamp gives zero context into exactly what they do. None. Then, your ten-head startup suddenly scales to needing a physical footprint because it will literally save 80% of opex, and all your DevOps Engineers say “but why? There’s AWS,” and you’re in Medium hell for weeks arguing about it.

Apropos, if I interview you and find you have written a thoughtpiece on Medium about how “sysadmin is evolving” and it’s “time for the gray beards to learn code,” you do not get a call back. That has actually happened, and no, sysadmin is not evolving. I know staff-level “full stack engineers” who can’t tell me what an ASN is. The motions in the industry have merely made those people more in demand at a few companies and served to cement their position as “where computing gets done”.

Expect serious, existential operational threats and breaches to rise dramatically as DevOps continues to “win,” and consider it a smart strategic move to avoid DevOps culture in the long term. If you write a req called “DevOps Engineer,” I don’t even know what to say to you.

icedchai said 2 months ago:

Most "DevOps" folks I know are actually former sysadmins who evolved to work more with cloud technologies. To say sysadmin hasn't evolved is a bit of an exaggeration. Titles follow the trend. What people actually do is often similar.

dielectrikboog said 2 months ago:

No, it isn’t an exaggeration. They ceded one particular competency, systems administrator, and now pay cloud providers to do it instead. The job didn’t go anywhere. Capacity planning, change management, peering, supply chain management, all of that stuff is still happening, they just willingly tapped out of it and took another job (probably because the DevOps people came in with a slide deck and hand waved them out of a job at the SVP level).

That is not evolution (nor an indictment of those people, importantly). The side effect, which literally nobody is paying attention to, is a future where computing as a concept is concentrating on a few major companies. Every every every every single person who says “why would I buy a rack? There’s AWS” is furthering that outcome.

icedchai said 2 months ago:

You still need to do capacity planning, change management, monitoring, etc. within your cloud environment. Those AWS instances and the software they're running don't manage themselves. For some subset of "cloud", such as PaaS providers like Heroku, etc., you are absolutely correct. For another subset of "cloud", you still need sysadmin / ops skills to manage it.

dielectrikboog said 2 months ago:

Yeah, you do. It’s a shame pretty much every single cloud-native shop in existence, you know, doesn’t bother, and pushes out the people arguing for bothering. I’ve been at this nearly two decades, and I have yet to find engineers even running an Excel notebook of inventory, much less capacity planning. You know, because describe-instances and monitoring and Ansible and Chef and blah blah.

My role right now is telling a major government agency how much they’re wasting on Azure. You know, because describe-instances. It’s a lot, and I think there might be a business model in “let me at your API for a week and give me 10%.” I’d be retired by Labor Day.

Reminder: They’re sending Congressionally appropriated funds. To Redmond. And they’re not entirely sure why, in $millions of cases. Line up fifty startups that have had a Demo Day and I’d bet you’d find the same thing in fifty-one of them.

That’s the DevOps legacy: don’t mind the budget, because AWS, Azure, and GCP have our financial interests in mind and APIs are cheaper to staff than fiber. Parades of like-minded individuals came to D.C. and said “DevOps! Do it!” and the agencies are now increasingly beholden to organizations incentivized by profit and those contractors took their Innovation Award check and don’t return the “um, what now?” calls. That’s the mess I’m trying to help clean up, and it’s happening across every major governmental organ in the United States.

icedchai said 2 months ago:

I won't argue that running your own infrastructure is a better deal for many types of applications, especially if you can plan everything out, forecast usage, etc. There is absolutely a lot of waste in cloud spending. I've found tons of it myself. Cloud "cost optimization" is definitely a good business.

What "cloud" really buys you is flexibility. I also don't really miss the days of buying my own servers, lugging them into a data center, waiting for drops to be provisioned, going there late at night when there's a failure, or talking remote hands through stuff.

potta_coffee said 2 months ago:

As a developer, I'm not happy about it either. I'm now expected to write code, as fast as possible, and then handle all the ops / sysadmin tasks too, which I don't enjoy and am not really equipped to handle.

chrisweekly said 2 months ago:

But wait, aren't you "full-stack"? That means you also know all the minutiae of UI animation rendering performance optimizations across the mobile landscape, right?

potta_coffee said 2 months ago:

I can "get by", but that doesn't mean I'm able to do an excellent job on every aspect. It's definitely a long chain of compromises.

chrisweekly said 2 months ago:

My point precisely.

icedchai said 2 months ago:

Yes! Most developers don't want to do operations work. It's not their specialty, and often uninteresting to them. A good team will let developers actually develop.

kqr said 2 months ago:

I agree with your overall point, but I also think there's a bit of conflation going on with the term "DevOps". It means different things to different people.

What I think is as close as we get to a canonical meaning is the meaning in which it is used by The State of DevOps reports, based on very good science and research by Nicole Forsgren et al.

They characterise "DevOps" as a transition into faster deployments, shorter feedback cycles, less warehousing of unexecuted code, and having developers have generally more insight into what's going on in the production environment.

This, of course, can (and arguably should) be done in cooperation with proper system administrators, network engineers, etc.

In other words, DevOps is not in opposition to having the right people operate the systems.

In particular, it has nothing to do with cloudifying things. You can run a product with a DevOps approach right on bare metal servers – in fact, there are a lot of companies doing that, for simple economic and reliability reasons.

I'm all for ranting against the cloud and the little experience people have when trying to operate systems, but blaming "DevOps" for it seems like a mischaracterisation. There's a lot of value to be had by getting more feedback from production, whether production means bare metal or virtualised environments.

dielectrikboog said 2 months ago:

As you say: DevOps means a million things to a million people. That’s why I ignored the person who tried to explain to me that I had the origin of DevOps wrong. Nobody alive or dead is qualified to make such a pronouncement, because nobody knows. It is an amorphous blob that usually manifests as a weapon for developers to beat the operations disciplines out of their company, which is why I speak about it as I do. Given the overwhelming evidence that the interpretation I’m going after is the popular one, arguing over the definition of the term is pointless.

By stating your last paragraph like that, you're conflating my argument with cloud ranting and assuming the DevOps methodology is the only way to acquire more feedback from production. I'm saying there are potentially others, but we are entrenching in this way of doing things, and people picked this particular way of doing things and started talking organizations outside "SV" into it. That conversation gets harder a second, third, and fourth time. The prevalence of COBOL reqs should warn you of this, and of what DevOps will look like in about a hundred years.

elbear said 2 months ago:

Hello!

I am a developer who wants to understand networks. Can you point me to some reading resources? For now, I've just been looking at the wikipedia pages for the different protocols.

But I think it would help me to work with concrete scenarios in which you use knowledge of networks to better understand things.

I would appreciate it if you pointed me to anything you think worthwhile.

dielectrikboog said 2 months ago:

Take certifications. Not to get them, but because preparing for them will structure the learning better than anyone can in response to that question.

aprdm said 2 months ago:

This is a great post, but I want to say that I feel there's space for both. IMO a DevOps Engineer would sit between the sysadmin/network folks and the developers who want to be users of a system.

In my current gig we've moved from the DevOps department to the Platform department as it aligns more with what we are trying to provide. A Platform for developers.

That said, we can essentially speak both sysadmin and developer. We trust sysadmins with network, Linux images and more specialist topics. We make tools for both sides and try to make them work together, often sitting in the middle and negotiating.

dielectrikboog said 2 months ago:

Call them SREs and cross-train SWEs into it. It’s not a toothless distinction even though it seems like one. You absolutely, positively will hire better staff with better deliverables if you frame the work as “a software engineer focused on operational integration,” which SRE understands more.

SREs like to build platforms for exactly the same reasons you’re touching. You sound like you’re halfway there already. I strongly suggest the Google book, with “I am not Google scale” written in Sharpie on the cover for help digesting it.

aprdm said 2 months ago:

In my understanding, SRE is more related to "keep the lights on and systems running"; it might just be a different understanding of the nomenclature.

E.g.: in my current case the software teams own their ops; my team doesn't do ops for them.

We give them a platform of centralized logging, monitoring, etc. so that they can easily run ops on their services, but it is not my phone that rings and I am not on call. I am on call if some component of the platform itself fails.

At least my perception of SRE is that they're on call for products.

That said I would frame the work we do as “a software engineer focused on operational integration“.

That does sound like a good book and I will add it to my to-read list.

dodobirdlord said 2 months ago:

> In my understanding SRE is more related to "keep the lights on and systems running", it might be just a different understanding of the nomenclature.

SREs at Google own production in a very deep sense. They are decision makers on things like when teams can deploy, how frequently, what dependencies they can use, and possibly most significantly, who gets SRE support and who has to handle their own on call rotation. They also build monitoring and reliability services and tools.

Google also employs traditional Ops people, but not as many as you might suspect. When SREs look at traditional Ops work, they see a threat to reliability and a target for automation. The mantra is that the "E" isn't for show, and that SREs are software engineers who specialize on the topic of running highly reliable services. One of the things the SRE book stresses is making sure that SRE teams aren't so bogged down in oncall responsibilities that they don't have time to work on automating their oncall responsibilities.

aprdm said 2 months ago:

Yup, absolutely, and I do love the SRE book and adopt many of its practices.

Might be my own bias towards the SRE word.

twic said 2 months ago:

> no matter what anyone says, DevOps was objectively born of replacing the operations discipline and career track with a poorly-understood tool economy and ongoing opex to a cloud provider

This isn't the origin of DevOps.

jen20 said 2 months ago:

> no matter what anyone says, DevOps was objectively born of replacing the operations discipline and career track with a poorly-understood tool economy and ongoing opex to a cloud provider.

This is abjectly untrue with regards to the origins of the term - though it is the current state of the world, and your assertion about job reqs for "DevOps Engineers" is spot on.

"DevOps" as a term was coined by Patrick Debois and Kris Buytaert to succinctly refer to the concept of operations teams and development teams collaborating in a more appropriate manner than the "throw stuff over a wall" which is still common in many enterprises. It was unrelated to tooling.

We must not let vendors co-opt terms in such a way as this.

delusional said 2 months ago:

Has sysadmin not evolved? If I found some sysadmin logging into a production system and editing the config file in nano today, I'd be downright depressed.

dielectrikboog said 2 months ago:

Sounds like you’re going to be depressed when you learn how the entire Internet plane, all software engineering outside of “SV”, all IT, all government, and basically everything except your GitHub CI/CD adventure works, then. Sorry.

jmb12686 said 2 months ago:

This isn't an accurate statement. I work on behalf of a federal government agency, and no one has write access in development, let alone production. Everything is required to run thru our ci/cd pipeline. Times are changing.

dielectrikboog said 2 months ago:

For the better? I’m not asking out of preference, I’m asking out of actual conclusion: is trading the operational overhead of running LDAP for a usually homegrown, usually wobbly automated scripting soufflé that turns Make into a distributed system objectively better? Has nobody stopped to ask, is DevOps and CI/CD the best framework we can achieve? Did nobody think to ask before they told your agency it was the ‘right’ methodology and the objectively best way to build industrial, business process software in the government sector? Did the changing times come from ideology and belief or identified process gaps?

I ask because I think there’s something better. I don’t know what it is yet, but I want to find out. I’m worried about wastage in DevOps methodologies, a system where nobody is incentivized to care about the right things, going on to spook the policymakers on doing software before we find out if the DevOps and Cloud worlds, both, are objectively the best way to do software for their purposes. I strongly, strongly feel like the craft is on the wrong path, and persuasive successes in industry are getting to the right ears before we know if the discipline to efficiently handle agile infrastructure with today’s tooling is even possible. I’m not convinced DevOps will organically find the right calculus to spur the kind of systems research that took us to not only where we are, but that which will take us where we need to go.

Speaking of, I’m lazily glancing at Agile here as well but I’m not prepared for a coherent argument there, beyond pointing out that we now have better tooling for managing specifications, particularly formal and mathematical ones, than the waterfall development experiences that prompted agile thinking. We need more systems research, tinkering, rethinking POSIX, all of it.

gonzo41 said 2 months ago:

Imagine a Graph, the x-axis is time or adoption of a set of technologies. Right now the hump in the bell curve is CI/CD and devops. It's safe to be in a large group. If something better comes along then it'll start happening and in 15 years I expect the whole of government to adopt it when you are bemoaning the pitfalls of any new approach.

dielectrikboog said 2 months ago:

I know what a hype curve is, and I made two substantive points to differentiate this situation from a hype curve. I’m not “bemoaning the pitfalls,” I’ll repeat that I’m concerned this approach, which is gaining traction and getting solidified and entrenched, will spook the decisionmakers on being willing to accept your 15-year solution when it comes along.

If you’re going to be as patronizing as you are, please at least read what I’ve written and respond to it.

gonzo41 said 2 months ago:

CI/CD is a good enough framework at the moment. The goal is to build things and ship product to customers. It does that well, and that's why it's winning.

The fact that a jenkinsfile starts with groovy and can include N number of different languages is just the nature of the beast. There is always fragmentation in software integration, and devops is integration on steroids.

Any other methods, formal or otherwise, need to provide X value at a cost of Y that makes adoption worth it. Currently if you don't use CI/CD then the value and cost propositions of adopting CI/CD actually start to make a lot of sense if you are mature enough to accurately do cost accounting on your IT management processes.

Yes, it's true, Jenkinsfiles, CloudFormation JSON, and YAML all suck to work with. And configuration management is tricky. But I know that we'll all think the same thing about any other system or approach we adopt, because it'll end up being work.

CI/CD may be a trade off but it allows us to focus on business problems rather than technical ones.

zodiac said 2 months ago:

I don't disagree with many of your points, but are you advocating "logging into a production system and editing the config file in nano"? Can't tell if you are...

said 2 months ago:
[deleted]
aprdm said 2 months ago:

Even within SF. Having talked with a bunch of folks from Amazon and Netflix, they're far from having most of their workflows running in containers... I imagine it's the same for Google.

justanotherc said 2 months ago:

You talk as if that's a bad thing.

As a developer I don't want to wade into the details of systems I'm using, I want to spend my time writing code that solves the business problems I'm tasked with solving.

If there is a system that allows me to do that by abstracting away the details I don't care about, why wouldn't I use that system?

legulere said 2 months ago:

Abstractions usually only work within a boundary. If you understand the underlying implementation you know its limitations.

Abstractions help you to not think about the implementation all the time and to have your own code work in a coherent way.

TeMPOraL said 2 months ago:

Abstractions are almost never self-contained enough. It's much easier to work within the bounds of an abstraction if you have at least a basic idea about the thing that's being abstracted.

justanotherc said 2 months ago:

> If you understand the underlying implementation you know its limitations

Maybe in some cases, but then I could probably make a case for that being a poorly built abstraction.

All you really need to understand about an abstraction is the required inputs and the expected output. Having knowledge about what's going on in the black box inside doesn't really serve a functional purpose IMO. This is the very purpose of abstractions. If we sat and reflected on all the abstractions we utilize every day, we would realize we can't possibly have intimate knowledge of how they all work. We just don't have enough space in our brains.

I have no idea how my OS runs under the hood. I don't care as long as the inputs I provide yield the expected output. Same goes for how my phone connects me on a call, or how my car manages air/fuel ratio in order to control engine power.

ben509 said 2 months ago:

I do want to wade into the details, so I'd put it more that I need to try many things, and by necessity I have to be a user before I can become an expert.

I know a good deal about how Docker works from having hammered at it. I'm not remotely an expert in it yet, but as I go I'm learning more details about cgroups and namespaces, and this is stuff I've been able to fit in while solving problems.

jschwartzi said 2 months ago:

Because frequently the devil is in the details.

bluGill said 2 months ago:

But only some of the details. I still need to ignore most of the details even while I need control of the ones that matter.

This is easier said than done.

justanotherc said 2 months ago:

Exactly. The large project I'm working on has 100k LOC that I've personally written. However, out of curiosity I once ran CLOC on the vendor folder... and it gave up after 2.5 million LOC. I could never possibly understand all the details going on in there lol.

jschwartzi said 2 months ago:

The devil is in the details of the details.

nunez said 2 months ago:

This is extremely widespread amongst the really large companies I've consulted for (with equally large development teams to boot). "Containers are VMs, but smaller and/or faster" is an extremely common school of thought. I would like to think that this viewpoint is dying somewhat, especially now that many large organizations have at least experimented with orchestrators like Kubernetes or ECS.

I can't blame them, however.

If you're a dev at a company where such misunderstandings are pervasive, and you're interested in trying to get your app running in this "Docker thing that you've heard of", you will, probably:

- need to seek out a Linux dev server because you probably don't have admin rights on your machine, and getting Docker installed onto your machine in a way that doesn't suck is more trouble than its worth,

- have engineering management that are being told that containers are like VMs, but smaller, likely from magazines or sales/pre-sales consultants,

- Have to wait days/weeks to get SSH credentials to a dev server that has RHEL 7 on it, hoping that it has Docker (a thing you've heard of at this point, but don't really know much about it otherwise),

- Have to wait even more time to get Docker installed for you on the dev server by a sysadmin that dislikes Docker because "it's insecure" or something, and

- be constantly reminded that your job is shipping features before anything else, usually at the cost of learning new things that make those features more stable

The point here is that environments like this are barely conducive for getting engineering done, let alone learning about new things. Moreover, the people that will do anything to scratch that itch usually find themselves out of companies like that eventually. It's a circle of suck for everyone involved.

So when I introduce containers (Docker being by far the most common runtime, I introduce them as "Docker" to avoid confusion) to someone who doesn't know what cgroups or namespaces are, or someone who responds with something about containers being VMs or whatever, I happily meet them where they are and do my best to show them otherwise.

cortesoft said 2 months ago:

Part of it might be that many developers target linux but code on a mac... and on a mac, docker containers DO run in a VM.

BiteCode_dev said 2 months ago:

With the new DevOps craze, we expect people to understand backend dev, and frontend dev, and design, and sysadmin, and networking, and project management, and infra, and product ownership. Not to mention be proficient with the tools related to those things.

The result is not people getting experts at all those things, but getting capable of producing something with all those things.

Obviously, to do so, people must take shortcuts, and container ~= docker ~= "like VMs, but lighter" is good enough as a simplification for producing something.

Now there is something to be said about the value, quality, durability and ethics of what is produced.

But that's a choice companies make.

tasogare said 2 months ago:

The "containers are just XX Linux technology" comes regularly in the comments but it’s untrue: Windows containers are obviously not based on any Linux tech proper.

Also the overhead intuition exists for a reason: on both macOS and Windows when Linux containers are used, there is actually a whole VM running Linux underneath. And Windows containers come in two flavors, one being Hyper-V based, so again a VM tech comes in play.

So there are technical reasons why containers are "misunderstood", it’s because most people don’t run Linux natively, and on their stack containers are more than just cgroups and namespaces.

wmf said 2 months ago:

People love to bring this up, but if Linux did have first-class containers, how would the developer's experience be different?

cyphar said 2 months ago:

All system programs could operate and manage the containers running on the system, meaning you can use your existing knowledge to manage a bunch of containers.

For instance, you could run your package manager across all containers to see if they have packages with known CVEs. Or manage the filesystems of all containers on the system (the usefulness of this is only clear with filesystems like ZFS and btrfs). This is effectively what you can do with Solaris Zones.

These kinds of improvements to usability aren't as sexy now that everyone is really excited about Kubernetes (where you blow away containers at whim), but they are useful for the more traditional container use cases that Zones and Jails serve. LXC (and LXD) is probably the Linux container runtime closest to that original container model.

There's also a very big security argument -- it is (speaking as a maintainer of runc) very hard to construct a secure container on Linux. There are dozens of different facilities you need to individually configure in the right order, with hundreds of individual knobs that some users might want to disable or quirks you need to work around. It's basically impossible to implement a sane and secure container runtime without having read and understood the kernel code which implements the interfaces you're using. If containers were an in-kernel primitive then all of the security design would rest in one single codebase, and all of the policies would be defined by one entity (the kernel).
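To make the "dozens of different facilities" point concrete, here's a sketch of inspecting just a few of the independent security attributes the kernel tracks per process, each of which a runtime has to configure separately (Linux-only; field names as documented in proc(5)):

```shell
# Capability sets (bounding/effective), seccomp mode, and the
# no_new_privs bit are all separate per-process attributes; a container
# runtime must set each one explicitly, and in the right order.
grep -E 'CapBnd|CapEff|NoNewPrivs|Seccomp' /proc/self/status
```

And that's before namespaces, cgroups, LSM labels (SELinux/AppArmor), rlimits, and mount propagation even enter the picture.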

danieldk said 2 months ago:

> For instance, you could run your package manager across all containers to see if they have packages with known CVEs. Or manage the filesystems of all containers on the system (the usefulness of this is only clear with filesystems like ZFS and btrfs). This is effectively what you can do with Solaris Zones.

You can also do this on Linux with NixOS, where you can define the system and all containers it runs declaratively. Updating the system will update everything, including the containers (of course, you can also pin the containers or packages in the containers to specific versions).

cyphar said 2 months ago:

Sure that's because the package manager is container-aware (and NixOS is very cool -- don't get me wrong), but the distinction is that on Solaris all system tools are Zone-aware (including things like DTrace which would require specifically an in-kernel container concept because you need to be able to aggregate by container and there isn't any in-kernel data to aggregate on in Linux -- and no, cgroup IDs aren't sufficient).

zurn said 2 months ago:

Docker would probably diverge less from LXC, because it was first built on LXC and only later got its own low level implementation using namespaces and the other low-level things. Hard to say if the alternative world would have been better or worse, a lot of LXC/LXD implementation details seem more technically competent than Docker.

lmm said 2 months ago:

There would maybe be more consistency. E.g. currently if I say an application is running in a container, do you expect there is virtual networking in place, or not?

wmf said 2 months ago:

Jails and Zones probably also have bridged, routed, and NATed modes so I'm not sure that example is that useful. It's true that networking is different in Docker vs. k8s but there are valid reasons for it.

loeg said 2 months ago:

> Somewhat tangential note: most developers I have met do not understand what a 'container' is.

The problem is at least in part that the term is basically meaningless. It's too broad and flexible to be descriptive. As you quoted:

> A “container” is just a term people use to describe a combination of Linux namespaces and cgroups.

sanderjd said 2 months ago:

I'm more sympathetic to the mainstream usage. People are (attempting to) use an abstraction: a "container" is an isolated process runtime environment without virtualization overhead. That abstraction seems useful to me. Ideally it would be usable without too much leakiness, in which case its users would not need to be aware of implementation details like cgroups and namespaces. In practice, all abstractions are leaky to some degree and effective use of an abstraction often eventually requires a more sophisticated understanding of the details beneath the veil of the abstraction. But that doesn't mean the abstraction is totally useless or completely a mirage or anything, it's just a leaky abstraction like all others.

If you say that a container is not a first class object but cgroups and namespaces are, I can just as easily say that cgroups and namespaces aren't first class objects, they are just terms people use to describe a combination of system calls. It's just abstractions with different amounts of leakage the whole way down.
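That "abstractions all the way down" point is easy to demonstrate: on Linux the kernel itself exposes a process's namespace memberships and cgroup assignment as plain files under /proc (nothing beyond standard procfs assumed here):

```shell
# Each entry here is a namespace this shell process belongs to
# (pid, net, mnt, uts, ipc, user, ...).
ls /proc/self/ns

# And the cgroup(s) the process has been placed in; on cgroup v2
# this is a single line of the form "0::/<path>".
cat /proc/self/cgroup
```

Every process on the system has these, container or not; a "container" is just a process whose entries differ from everyone else's.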

didibus said 2 months ago:

I don't know, container is an abstract idea, and that's all. Can you run apps within a contained OS environment?

LXC is one way to do so, runc is another way to do so, docker is a third way to do so, all for Linux. Now if you took some other OS, there'd be different solutions, each with slightly different details and thus properties, but same idea.

I mean, do you ask people what SQL is? And get frustrated if they don't start talking about MySQL specific details like InnoDB and what not?

Don't know, I feel I can't agree with you, do devs feel there's less magic involved in VMs? Honestly, I have less idea what VMs are built on top of than I do for containers.

peteretep said 2 months ago:

> container is an abstract idea

In this context, it's not, it's specifically referring to a process running in a cgroup.

> LXC is one way to do so, runc is another way to do so, docker is a third way to do so

Docker is a suite of tools for managing containers run via LXC or runc, both of which set up processes running under cgroups.

> Now if you took some other OS, there'd be different solutions, each with slightly different details and thus properties, but same idea

You should read the article.

zodiac said 2 months ago:

"container" can mean other things in other contexts though, e.g. I'm running a configuration of Docker on my machine right now such that the CLI talks about "containers" but it is actually running VirtualBox in the background

mav3rick said 2 months ago:

A process in a namespace is running just like any other process being managed by your kernel. Based on how you set up networking, you may face an extra hop to get packets. I don't know what other scalability issues there would be; it's literally a process running like other processes.

Can you shed light on some of these, maybe I haven't encountered these in my day to day ? (Please note I am not talking about containers running in VMs, which apparently Docker does now).

fulafel said 2 months ago:

Docker's usage of cgroups is optional, and it's not really an integral part of containers. Contrast with people using cgroups without containers too (unlike namespaces).

k__ said 2 months ago:

Doesn't surprise me if I think about all the people who suddenly need a K8s cluster...

taf2 said 2 months ago:

Thanks, I'd have fallen into your category of developers: in large part I never bothered with containers, since we have everything running on VMs and we've already isolated things, so I've had only partial interest in exploring them ... but now that I know that in Linux it's cgroups and namespaces, that helps a lot in understanding. Thanks!

Thaxll said 2 months ago:

Overhead is close to 0.

dehrmann said 2 months ago:

> like VMs, but lighter

This is both very right and very wrong.

jjtheblunt said 2 months ago:

widespread in my experience; it reminds me of users of npm not seeming to understand what node is, for example.

ailideex said 2 months ago:

> A “container” is just a term people use to describe a combination of Linux namespaces and cgroups.

And those people should stop because that would be inaccurate. More specifically, a "container" can be any of the following:

- Someone who contains; something that contains. An item in which objects, materials or data can be stored or transported.

- (transport) A very large, typically metal, box used for transporting goods.

- (by extension) Someone who holds people in their seats or in a (reasonably) calm state.

- (computing) A file format that can hold various types of data.

- (object-oriented programming) An abstract data type whose instances are collections of other objects.

- (computing, graphical user interface) Any user interface component that can hold further (child) components.

...

If we are talking about a Docker Container (here Docker Container is a proper noun), on the other hand - then clearly you are still wrong - as Docker have worked at various times on Windows and FreeBSD.

Now, grasping at even more straws to find some "container of the gaps" definition that may give some justification to your claims, we can look at the other proper noun, "OCI Container" ... but alas ...

https://www.opencontainers.org/faq

> Will the runtime and image format specs support multiple platforms?

> Yes. For example, take a look at the runtime-specification configuration where it mentions example Linux, Windows and Solaris configurations. There are also multiple implementations of the runtime-specification that you can take a look at.

It seems then that you are just wrong. Plain and simple. I'm seriously concerned with whoever is having you interview candidates.

MuffinFlavored said 2 months ago:

> A sizable fraction will be concerned about 'container overhead' (and "scalability issues") when asked to move workloads to containers. They are usually not able to explain what the overhead would be, and what could potentially be causing it.

For what it's worth, one of the biggest "containerization" recommendations is to not run your database (example: Postgres) in a container, correct? Due to I/O performance decrease?

wmf said 2 months ago:

No. Docker volumes aka bind mounts have little or no overhead. You don't want to run a database in an ephemeral "cattle" container without some kind of HA because you'd lose data.

MuffinFlavored said 2 months ago:

    docker run --rm \
      --name postgresql \
      -e POSTGRES_PASSWORD=postgres \
      -d \
      -p 5432:5432 \
      --cpuset-cpus="0-1" \
      --cpus 2.0 \
      -m=1024m \
      --mount type=bind,source=$HOME/docker/volumes/postgres,target=/var/lib/postgresql/data \
      postgres:12.2
native results on a digitalocean VM:

    $ pgbench -c 100 -j 2 -T 60 postgres
    starting vacuum...end.
    transaction type: <builtin: TPC-B (sort of)>
    scaling factor: 1
    query mode: simple
    number of clients: 100
    number of threads: 2
    duration: 60 s
    number of transactions actually processed: 17660
    latency average = 342.761 ms
    tps = 291.748482 (including connections establishing)
    tps = 291.791293 (excluding connections establishing)
in a docker container:

    starting vacuum...end.
    transaction type: <builtin: TPC-B (sort of)>
    scaling factor: 1
    query mode: simple
    number of clients: 100
    number of threads: 2
    duration: 60 s
    number of transactions actually processed: 13014
    latency average = 466.928 ms
    tps = 214.165822 (including connections establishing)
    tps = 214.199201 (excluding connections establishing)
214tps in docker, 291tps outside of docker

26% decrease, with a bind mount on ubuntu 18.04
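Worth noting that with `-p 5432:5432` those connections traverse Docker's default bridge network (NAT and, depending on configuration, the userland proxy). A rerun sharing the host network namespace would show how much of that 26% is networking rather than anything inherent to namespaces or cgroups. A sketch only, assuming the same DigitalOcean VM and pgbench setup as above:

```shell
# Hypothetical rerun to isolate networking overhead: same container,
# but on the host network namespace (no bridge/NAT, no -p mapping needed).
docker run --rm \
  --name postgresql \
  -e POSTGRES_PASSWORD=postgres \
  -d \
  --network host \
  --cpuset-cpus="0-1" \
  --cpus 2.0 \
  -m=1024m \
  --mount type=bind,source=$HOME/docker/volumes/postgres,target=/var/lib/postgresql/data \
  postgres:12.2
```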

takeda said 2 months ago:

Docker introduces all kinds of performance issues. I for example noticed higher latency even with just pgbouncer. It was very visible when running integration tests, which normally took 15 minutes, but as a container it was 45min - 1h.

lazyant said 2 months ago:

Funny, I looked into papers or articles about performance issues with containerized RDBMS _binaries_ and didn't find anything relevant. Of course you want the data mounted outside the container so it's not ephemeral.

I ran some casual tests and found out there is a performance hit in using db binaries inside a Docker container due to Docker networking (different for different types of networking).

takeda said 2 months ago:

I would be more concerned about writes going through additional fs layers and about abrupt termination of a container.

You generally are trusting a database to keep your data safe, so those things will contribute to data loss.

Remember the freakout about PostgreSQL not handling sync() correctly on Linux due to ambiguity in the man page? Having a networked filesystem + additional abstractions (like layers) etc only reduces data durability.

navaati said 2 months ago:

Except you never do that (having your DB directory be in the container rootfs). Nobody does, because then (besides the performance/reliability impact you mention) if the container goes (docker rm), the data goes. You're always going to use a volume for this kind of case, be it a K8s volume or a Docker volume, and these, as the neighbour message mentions, are just bind mounts (or actual mounts in certain cases), so no layers, no overlay, nothing of the sort.

MuffinFlavored said 2 months ago:

check out my response here: https://news.ycombinator.com/item?id=22809524

looks like docker with a bind mount has a 26% decrease in performance versus native

said 2 months ago:
[deleted]
aprdm said 2 months ago:

I think it is more because containers should be stateless and you cannot make a database stateless.

We do run databases that do 100k's of ops/s in containers, but we don't run them in kubernetes. We just mount the VM hard drive in it.

Disposition said 2 months ago:

A DB container may and should be stateless, but when configured correctly the volumes specific to the storage engine are persistent. I've been running production databases in Docker since 2014 without any data loss, it makes a lot of system-level administrative work much easier.

With a healthy understanding of how the individual storage engines commit to disk, upgrading, backing up, etc. can be done in parallel and without impact to a running production system thanks to the power of overlayfs.
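For anyone wondering what "configured correctly" looks like in practice, a minimal sketch (container and volume names here are illustrative): keep the storage engine's data directory on a named volume, so the container stays disposable while the data does not.

```shell
# Data lives on a named volume, outside any container's writable layer.
docker volume create pgdata
docker run -d --name pg \
  -e POSTGRES_PASSWORD=postgres \
  -v pgdata:/var/lib/postgresql/data \
  postgres:12.2

# The container is cattle; the volume is not. Removing and recreating
# the container leaves the database files untouched.
docker rm -f pg
docker run -d --name pg2 \
  -e POSTGRES_PASSWORD=postgres \
  -v pgdata:/var/lib/postgresql/data \
  postgres:12.2
```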

moonchild said 2 months ago:

I'm a bit disappointed it didn't go into detail into the way jails differ from zones. VMs I understand, but it seemed like the main point of the post was to distinguish containers from the other three.

nickik said 2 months ago:

All the detail you could possible want:

https://www.youtube.com/watch?v=hgN8pCMLI2U

said 2 months ago:
[deleted]
mooreds said 2 months ago:

Note this is from 2017. Previous discussion: https://news.ycombinator.com/item?id=13982620

dang said 2 months ago:

Year added. Thanks!

dirtydroog said 2 months ago:

For my workload I've struggled to see the advantage containers would give me. Maybe someone here can convince me, rather than the current justification of 'docker all the things'.

We have servers, they handle a lot of traffic. It's the only thing running on the machines and takes over all the resources of the machine. It will need all the RAM, and all 16 vCPUs are at ~90%.

It's running on GCP. To rollout we have a jenkins job that builds a tag, creates a package (dpkg) and builds an image. There's another jenkins job that deploys the new image to all regions and starts the update process, autoscaling and all that.

Can containers help me here?

pbecotte said 2 months ago:

If you already have all of that working, why would you change? Containers are valuable for a couple things-

1. Packaging and distribution- it's very easy to set up a known good filesystem using docker images and reuse that. There are other solutions- dpkg plus ansible would be an example.

2. Standardized control- all apps using 'docker run' vs a mix of systemd and shell scripts can simplify things.

3. Let's you tie into higher level orchestration layers like k8s where you can view your app instances as a single thing. There are other solutions here as well.

4. Can use the same image on dev machines as prod instead of needing two parallel setup schemes.

If you are already happy with your infra, certainly don't change it. I think once you know containers they are a convenient solution to those problems, but if stuff is already set up, they've missed their shot.

nfoz said 2 months ago:

So.... are any or all of these what you would call a process "sandbox"? Do operating systems make it easy to sandbox an application from causing harm to the system? What more could be done to make that a natural, first-class feature?

Like, let's say you found some binary and you don't know what it does, and don't want it to mess anything up. Is there an easy way to run it securely? Why not? And how about giving it specific, opt-in permissions, like limited network or filesystem access.

said 2 months ago:
[deleted]
codeape said 2 months ago:

I do not understand docker on windows.

If I understand correctly, when I run a docker image on Linux, the dockerized process's syscalls are all executed by the host kernel (since - again, if I understand correctly - the dockerized process executes more or less like a normal process, just in an isolated process and filesystem namespace).

Is this correct?

But how does docker on windows work?

deg4uss3r said 2 months ago:

My only problem with this article is there is no such thing as "Legos". Jess is brilliant and explains things super well here.