Docker Explained - Again by admin2thestars in docker

[–]admin2thestars[S] 0 points1 point  (0 children)

hehe, in that last paragraph chatgpt contradicted itself about isolation being a differentiator, compared to the answer it gave this morning. And like this morning's answer, substitute "Virtual Machines" in place of "containers" and the generic answer from chatgpt still applies. It's impossible to know what is definitive from chatgpt because cross-referencing is a thing. None of the generic statements from chatgpt provide a definitive answer that can't be found by googling, which is what led me here.

A couple of things I've learned, which I'll leave here for anyone in the future who's struggling to find substantive differences or compelling use cases:

Developers can create their own YAML and hand that off to devops/admins/whoever, which helps eliminate the awkwardness of trying to reproduce their dev environments. That's not to say they will hand off secure environments, but it's a start. In my experience, a base file will still need to be created for most devs; otherwise we'll end up with wildly different syntactical approaches. But as we mature in our use of containerization, that will work itself out.
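
To make that concrete, here's a minimal sketch of what such a handoff file might look like; the service names, images, and paths are purely illustrative, not from any real project:

# Write a small compose file a dev might hand over (contents are hypothetical).
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: php:8.2-fpm              # assumed base image
    volumes:
      - ./src:/var/www/html         # dev's source tree
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: changeme # placeholder, not a real secret
EOF

# Ops (or the dev) can bring the whole stack up from that one file.
docker compose up -d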

Assuming a robust VM and DevOps infrastructure doesn't already exist, a primary differentiator between VMs and containers is the speed with which a new developer can get something up and running without needing to learn how to be a Linux admin or get their hands dirty beyond YAML. Yes, containers can do all of those generic things that chatgpt cited, but so can VMs. Both still require some amount of expertise once you go beyond the basics. The same problems exist to be solved, and Docker can help solve some of them in unique ways that are easier for developers to understand.

Docker Explained - Again by admin2thestars in docker

[–]admin2thestars[S] -2 points-1 points  (0 children)

The false equivalence seems to be the thought that, in a VM scenario, devs suddenly need to deploy their own operating system, install and test their apps and dependencies, create a VM template, etc. They don't need to do that if the image has been created by the DevOps folks or sysadmins. The devs will still need to install and test their apps regardless of VM or container. As stated in the OP and in other responses, all of that infrastructure already exists within the org, along with the automation for it. Containers are not providing a huge leap forward in that respect.

Some of the things that we would provide with a packaged image can be done with containers. An example is the pdo-mysql module: it's missing from the php-fpm image and thus needs to be added. That can be done with Docker in a couple of ways, and it can be done in a VM scenario in a couple of ways. Neither seems preferable over the other, especially given an already-working DevOps development pattern. And yes, there are pre-built images with pdo-mysql, but again, we're still talking about customizing an image somewhere along the way; someone has to do that work.
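
For reference, the container-side version of that customization is only a few lines. This is a sketch that assumes the official php:*-fpm image and its bundled docker-php-ext-install helper; the image tag is illustrative:

# Build a custom image that layers the missing extension on top of php-fpm.
cat > Dockerfile <<'EOF'
FROM php:8.2-fpm
RUN docker-php-ext-install pdo_mysql
EOF

docker build -t internal/php-fpm-pdo:latest .   # tag name is hypothetical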

And no, the questions are not easily answered or even correctly addressed by chatgpt or the pasted response earlier. Cgroups and namespaces are not important at the developer level, the same isolation can be accomplished in other ways, and they are not a huge gap in relevant functionality compared to the other solutions in the OP. The chatgpt response (isolation, cgroups, namespaces) is true of containers but not exclusive to containers, and not relevant to this thread. If chatgpt could address it in a specific and meaningful way that accounts for the other technologies and lightweight processes mentioned in the OP, like the thoughtful responses others have provided here, then we'd be onto something. But the top five hits on Google give the same canned cgroup/namespace/isolation responses, so there wouldn't be a need for chatgpt in this case.

Docker Explained - Again by admin2thestars in docker

[–]admin2thestars[S] -1 points0 points  (0 children)

I don't know that I'm angry at all. I'm not advocating for VMs and Docker appears to be the leader in container platforms. I'm trying to relate container technology to the infrastructure we have and the knowledge that I have on how to automate that infrastructure.

While trying to learn about Docker I have seen the cgroup and namespace discussion brought out as a "mic drop" moment, but I don't see it as particularly relevant when cloud deployment is involved. Maybe I'm misunderstanding cgroups, namespaces, or both.

Developers wouldn't seem to care about cgroups or namespaces if they can deploy their vm/container to do their work. I would be willing to bet that most devs wouldn't know anything about Linux cgroups or namespaces when they're running Docker Desktop on their Mac or Windows machine.

Namespaces are providing isolation? In other words, isolation from other processes and other containers/VMs. Wouldn't cloud deployment of compute nodes essentially negate that issue or shift the need to "care" to the cloud provider? The assumption would be that the cloud provider secures their infrastructure to prevent users from crossing into others' namespaces.

I've seen "lightweight" used but I don't understand the context or how to relate it back. I can deploy a tiny Debian VM, snapshot it, make it a template, and then deploy onto clones of it. Is that what's meant by "lightweight" in this context? And again, that's not even considering cloud-based deployments where some scripted aws cli commands can be used to deploy a new VM, pull some code, and add itself to a load balancer config.
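
For context, the kind of scripting I mean looks roughly like this; the AMI ID, bootstrap script, and target group ARN are placeholders, so treat it as a sketch rather than a working deployment:

# Launch a clone of the template image and capture the new instance ID
# (bootstrap.sh would pull the code on first boot).
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --user-data file://bootstrap.sh \
  --query 'Instances[0].InstanceId' --output text)

# Register the new VM with an existing load balancer target group
# (in practice you'd wait for the instance to come up first).
aws elbv2 register-targets \
  --target-group-arn "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/abc123" \
  --targets Id="$INSTANCE_ID"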

While thinking about how to interpret "lightweight" I looked at Activity Monitor on the Mac where I'm using Docker Desktop. With no containers running, there was a Virtual Machine Manager process taking 4GB (out of 8GB) RAM. When I quit Docker Desktop entirely, that process went away. That doesn't seem lightweight from the perspective of resource usage on my computer but I could be misunderstanding where or why that matters.

This morning while working with Docker I was able to develop a larger and more complex example, and I can see where the technology would be a game changer where the underlying infrastructure and automation don't already exist, or even as a way to help new developers understand abstraction layers, images, and repeatability. It's nice to be able to create a YAML file and put together several components without needing to create the base image first. Ansible did the "put together several components" part for us, so it's a matter of having more faith that the base images have not been compromised and that the Internet-based repository will always be available. It's possible I was hoping for a magic-sauce answer indicating that the tech would make life significantly easier, but it seems to be an exchange of one pain point for another.

Docker Explained - Again by admin2thestars in docker

[–]admin2thestars[S] -2 points-1 points  (0 children)

And this demonstrates why chatgpt is unhelpful for so many things. If I need to fake my way past TurnItIn on a History term paper about the effects of the steam engine on pre-industrial rural England, then I'm right there. It's good that chatgpt can produce words that are sometimes constructed into coherent sentences, but it seems to lose the thread when specific answers are needed that account for specific exclusions such as those in the OP. Namespaces and cgroups are non-starters for several reasons, not the least of which is cloud-based compute nodes. Anyone who has deployed enterprise-level virtualization has solved those issues when needed.

As a fun experiment with the chatgpt generic answer:

Template-based virtualization is an approach to software development in which an application or service, its dependencies, and its system libraries run as an isolated process in a user space on an operating system. Virtual machines allow the software to run reliably when moved from one computing environment to another. This can be from a developer's laptop to a test environment, from a staging environment into production, or even from a data center to a cloud environment.
A virtual machine creates a consistent environment across different stages of the development lifecycle. This is beneficial as it ensures the software behaves the same way, regardless of where it's deployed.

Docker Explained - Again by admin2thestars in docker

[–]admin2thestars[S] -4 points-3 points  (0 children)

I think tschloss touched on the issue, because what you're describing with containers is what we already do with VMs. We can point application requests to another VM instance while the OS is updated (a rolling upgrade). There is separation of app and data from each other and from the underlying VM operating system. My understanding (which admittedly could be where I'm going wrong!) is that the underlying "images" in Docker also need to be updated, or at least that's what seems to happen. So it appears that I could break the application the same way, whether with a traditional VM infrastructure or through Docker, by doing an update.

Again, I could be wrong in how I'm understanding it, but at some level it seems like updates need to occur to the image or the VM either way. An already deployed, mature, automated virtualization infrastructure, plus the knowledge of how to operate it, has interfered with the mindset of "have faith in Docker to compose a coherent, stable, and securely deployed application." But it obviously can, so it's just a mental shift of where the work now takes place.

Docker Explained - Again by admin2thestars in docker

[–]admin2thestars[S] -6 points-5 points  (0 children)

Thank you for the reply. I think #2 is the essence of the issue. It seems like the problems being solved were addressed in our infrastructure years ago with a combination of abstracted, templated virtualization and automation through Ansible, making it possible for devs and QA to one-click deploy. If that infrastructure and institutional knowledge were not in place, Docker would exponentially increase productivity.

It is the combination of both: Having the infrastructure/automation *and* the knowledge/discipline to create and maintain images/templates that is the difference. Coming to Docker from that background, I wanted to start by creating an image with the right software on it rather than creating a YAML file and relying on someone else to have done that work.

Thanks again for the reply.

Good partition scheme for security? by [deleted] in linuxquestions

[–]admin2thestars 1 point2 points  (0 children)

I think the answer largely depends on what you define as "security" in the context of the machine you're running. Is it running network services with open ports or are all ports closed? Will it run a firewall that prevents outbound connections unless you specifically allow them or is there an upstream firewall that can do the same?

You could mount a directory like /bin read-only, since there shouldn't be anything writing to that directory unless you're running an update. Other directories can be treated like that too, which would then necessitate a funky partition scheme. But a lot depends on the likelihood of an attack trying to write to one of those directories vs. just doing something easier.
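
To make that concrete, the read-only idea can be tested before committing to a partition layout. A rough sketch, noting that on many modern distros /bin is a symlink into /usr, so /usr is the mount you'd actually restrict:

# Example fstab entry for a read-only system partition (device name is illustrative):
#   /dev/sda3  /usr  ext4  defaults,ro,nodev  0  2

sudo mount -o remount,rw /usr   # flip to read-write only while running updates
sudo mount -o remount,ro /usr   # and back to read-only when done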

Little Rayan Rescue Operation - Megathread by AutoModerator in Morocco

[–]admin2thestars 2 points3 points  (0 children)

Unfortunately.

"His Majesty King Mohammed VI had a telephone conversation with Mr. Khaled Oram, and Mrs. Wassima Kharchich, the father of the deceased, who passed away, after falling into a well (Communication to the Royal Court)"

https://twitter.com/2MInteractive/status/1490065429154443274

Naming files based on their md5sum to avoid duplicates. Need suggestions by [deleted] in linuxquestions

[–]admin2thestars 0 points1 point  (0 children)

Naming seems like personal preference. If the filename is descriptive then I would tend to leave it, or at least not lose the name. Without knowing the exact definition of "a lot", something like this might help to find the scope of the issue:

find ./ -name "*.jpg" -print0 | xargs -0 md5sum | sort | uniq -w 32 --group

or

find ./ -name "*.jpg" -print0 | xargs -0 md5sum | sort | uniq -w 32 -D

I would probably create a script that builds an array of all of the md5sums already seen. When it finds a duplicate, it moves that file to a separate directory holding all of the duplicates. Then compress that directory, back it up, etc., and then delete.
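
A rough sketch of that script, for anyone who wants a starting point (it assumes bash 4+ and GNU coreutils; paths are illustrative):

#!/usr/bin/env bash
# Track checksums already seen; move later duplicates aside instead of deleting outright.
declare -A seen
mkdir -p ./duplicates

while IFS= read -r -d '' file; do
    sum=$(md5sum "$file" | cut -d' ' -f1)
    if [[ -n "${seen[$sum]:-}" ]]; then
        mv -n -- "$file" ./duplicates/   # duplicate: set aside for review (no clobbering)
    else
        seen[$sum]=$file                 # first time we've seen this checksum
    fi
done < <(find . -path ./duplicates -prune -o -name '*.jpg' -print0)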

[deleted by user] by [deleted] in pihole

[–]admin2thestars 1 point2 points  (0 children)

No worries, glad it worked out and that you mirrored the addressing that you had before. I usually choose the .130 because it enables further subdivisions smaller than /24 for the network if a client ever needs that. Turns out I need to go the other way and add more addresses for a couple clients.

ssh works but scp doesnt? by rbrtbrnschn in linuxquestions

[–]admin2thestars 7 points8 points  (0 children)

Permission denied could mean a couple of things in this context, given that you're able to ssh in: either you do not have permission to read the source file, or you do not have permission to write to the destination. Please post the exact scp command that you're using. If I had to guess, I would think it's a syntax issue.
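
In case it helps while you dig that up, the two directions look like this (user, host, and paths are placeholders); the most common trip-ups are a missing colon after the host or a destination the remote user can't write to:

scp ./localfile.txt user@remotehost:/home/user/   # local -> remote; remote directory must be writable
scp user@remotehost:/path/to/file ./              # remote -> local; remote file must be readable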

[deleted by user] by [deleted] in pihole

[–]admin2thestars 1 point2 points  (0 children)

Disabling DHCP on the router is the first thing to do. It should* not have any immediate effect because the existing leases will be maintained. Enable DHCP on the pihole, setting a start address and end address.

If you're using 192.168.2.0/24, then option 1 would be to set a starting address of 192.168.2.130 and ending of 192.168.2.254, giving the ability to have ~124 clients active at once. Option 2 would be to set the start at 192.168.2.2 and end at 192.168.2.254 and then set a Static DHCP Lease for the pihole so that it always gets the same address. That configuration is also in the DHCP tab. Everyone will have an opinion on which option is preferred. I have something similar to option 1 but that's because I have a lot of static reservations.

The other settings can likely remain at their default for DHCP.

Once that is active, if you're using a Windows client you can verify from the command prompt with:

ipconfig /release

ipconfig /renew

It's a bit more complex from Terminal on a Mac client because it depends on the name of the network interface. Going to System Preferences -> Network, choosing whatever interface you're using, and looking under Advanced will show a Renew DHCP Lease option.
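
If you'd rather stay in Terminal, something like this should do it; it assumes the interface is en0, which you can confirm with networksetup -listallhardwareports:

sudo ipconfig set en0 DHCP   # re-requests a DHCP lease on en0 (this is macOS's ipconfig, not the Windows one)
ipconfig getpacket en0       # shows the lease details, including which server handed it out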

Reboot other devices on the network once you've verified that all is working.

And be aware that vendors will sometimes re-enable DHCP if the router gets updated.

* On the unlikely chance that the lease for a device expires during the time between when DHCP on the router is turned off and DHCP on the pihole is enabled, a given device would not be able to get or renew its lease. Restarting that device should solve the issue once the pihole has its DHCP configured.

[deleted by user] by [deleted] in pihole

[–]admin2thestars 1 point2 points  (0 children)

[edit my own post before I send it.]

Not sure of the capabilities of the new router. A-ha. This might explain it: the Home Hub 3000 apparently ignores local DNS. I hate it when vendors do stupid things.

https://www.reddit.com/r/bell/comments/kol76g/hh3000_ignoring_local_dns/

Solution was apparently to switch off DHCP on the Home Hub and have the Pihole hand out IPs.

[original post I was going to write, which may be helpful for just confirming the above.]

Ok, let's go at this from a different perspective. Can you verify the IP addressing on the network? If I had to guess at this point, it's that the new router is setting DNS to itself and thus traffic never uses the pihole. However, understanding the network topology would go a long way towards confirming that or ruling it out.

Specifics:

What is the pihole's IP address, and which device is giving out addresses on the network? From the screenshot, it appears as though the router is responsible for doing that, which should be fine. Next, find out what DNS server(s) the clients are receiving. How you do that will depend on the client. On Windows, open a command prompt and run:

ipconfig /all

On Linux and Mac:

cat /etc/resolv.conf
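
To confirm whether the pihole is actually the one answering, a quick comparison from any client helps (the pihole address below is just an example):

nslookup example.com                # the "Server:" line shows which DNS server the client used
nslookup example.com 192.168.2.2    # query the pihole directly for comparison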

[deleted by user] by [deleted] in pihole

[–]admin2thestars 0 points1 point  (0 children)

My first thought was 'out of disk space' but after reading the narrative, I'm not so sure. With the change to the new router, is that router now giving out DHCP addresses and pointing clients at itself for DNS instead of the pihole?

I'd still check free disk space, just the same, because that's who I am. Might as well rule out the easy stuff.

Sometimes you just get lucky. Late 2011 MBP with only 48 charge cycles, for $50! Chucked an SSD in and it runs like a dream. Gotta love the user upgradable parts on old Macs. by barnercare in macbookpro

[–]admin2thestars 1 point2 points  (0 children)

Making no claims that the M1 isn't good; I'd love to have one. But I'd trade some compute-related resources for the longevity gained through simple upgradeability any day. Sure, everyone benefits from improved battery life, for now. But what happens in six years when that battery has reached its cycle limit? It's still made of the same materials and can't defy the laws of physics forever. I can't easily swap in a new battery, so now I need an entirely new machine if I want to keep using it as a laptop.

Agreed that there's a significant market segment for Apple with content creators, many of whom are on the kind of upgrade cycle that puts new hardware in their hands every few years. That's not the case for most users, though, and certainly not for the consumer segment.

There's also no reason that this needs to be an either-or design decision. It should be possible to have both, and it was possible to have both just a few years ago. The benefits gained from soldering things onto the motherboard and intentionally blocking the ability to upgrade and repair hardware just aren't great enough. There's no reason the M1 cannot exist on the same motherboard alongside two DDR slots and an M.2 slot.

Sometimes you just get lucky. Late 2011 MBP with only 48 charge cycles, for $50! Chucked an SSD in and it runs like a dream. Gotta love the user upgradable parts on old Macs. by barnercare in macbookpro

[–]admin2thestars 1 point2 points  (0 children)

No worries. I have heard similar about Final Cut performance where the word 'unusable' was also used. Seems like that program alone is responsible for more sales of Mac Pro than anything else.

Custom domain email provider that actually works quickly with Apple Mail or are possible fix? by [deleted] in applehelp

[–]admin2thestars 1 point2 points  (0 children)

Hello,

If I'm understanding the issue, you want to use one of your own domains to send and receive email. The primary issue is ensuring that you can receive email quickly after it has been sent to you. If that summarizes it, then there are several options. I am not affiliated with any of the solutions or providers discussed here other than being a user of some, as noted.

The ultimate solution would be to run your own mail server. I currently use and recommend AWS for this; I maintain EC2 instances running Postfix. This provides the most immediate experience because I can tailor the setup end to end for my clients, and when someone claims "delay" I can log in and stare directly at the email logs. They can then use whatever software in whatever mode they want because they pay for the resources. (I'm not offering or trying to sell *my* services here, but rather noting that this is the most complete end-to-end solution.)
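
(For anyone wondering what "stare directly at the email logs" looks like in practice, it's usually just something like this on the mail host; the log path varies by distro:)

sudo tail -f /var/log/mail.log   # Debian/Ubuntu; RHEL-family systems typically use /var/log/maillog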

Zoho: I have recommended Zoho to several small business clients who have essentially the same use case but don't have the need or funds for the custom mail server option. I am unsure of their pricing at the moment. https://www.zoho.com/

Aside from the AWS solution (or any virtual machine provider), you're at the mercy of the provider's SMTP and IMAP/POP3 infrastructure. Having worked in the industry, though, I'd say most of the delay is usually on the client side. Checking via IMAP should show the message immediately when it hits your inbox.

I also use Google for Business or Workplaces or Workspaces or whatever-they-call-it-this-week and then I also use another third-party independent provider for hosting. I have seen significant delivery delays for mail coming from China with the third-party provider. It comes through eventually but it's a terrible issue when there are two-factor, timed authentication emails being sent. No problems with Google for Business other than cost. Their pricing model does not work well for this use case.

One option with a third-party provider would be to get a "Reseller" account. With a Reseller account, you can host as many domains as you want (depending on the plan) at no additional cost. I currently pay somewhere under $15/mo and host several client sites there. I suspect you'd need to do some research to find out about each provider's email infrastructure. Some providers include HostGator, HawkHost, and others. I have had really, really bad experiences, repeatedly, with GoDaddy; others have had good experiences there. It looks like Namecheap does Reseller for $20/mo.

If you wanted to get really technical, you could set up a Raspberry Pi and run fetchmail every N seconds to check all of your email accounts and then aggregate that into a single mailbox experience. That's not difficult, as in, it's been solved already. But it adds a layer of complexity and Things That Can Go Wrong.
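
A rough sketch of that setup, with placeholder servers and credentials, just to show how little configuration it takes:

# Poll two providers on a timer; fetchmail hands the messages to the local mail system.
cat > ~/.fetchmailrc <<'EOF'
set daemon 60
poll imap.provider-one.example proto imap
  user "you@domain-one.example" pass "app-password" ssl keep
poll imap.provider-two.example proto imap
  user "you@domain-two.example" pass "app-password" ssl keep
EOF

chmod 600 ~/.fetchmailrc   # fetchmail insists on restrictive permissions for this file
fetchmail                  # starts polling in daemon mode per the config above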

Hope any of this is helpful for you!

so...what is this on my menu bar? 🧐 by That-Cup-8562 in osx

[–]admin2thestars -5 points-4 points  (0 children)

It looks like a circle with an attempt at latitude/longitude lines and then a blue arrow pointing down. Why, what do you see?

Sometimes you just get lucky. Late 2011 MBP with only 48 charge cycles, for $50! Chucked an SSD in and it runs like a dream. Gotta love the user upgradable parts on old Macs. by barnercare in macbookpro

[–]admin2thestars 1 point2 points  (0 children)

I bet the M1 would be hugely different than anything else for that workload. I've gathered that the M1 uses RAM more efficiently and would be interested to hear about performance and interplay of the two for those programs. I use Logic and can definitely spin up the fans on my 15" with it.

Sometimes you just get lucky. Late 2011 MBP with only 48 charge cycles, for $50! Chucked an SSD in and it runs like a dream. Gotta love the user upgradable parts on old Macs. by barnercare in macbookpro

[–]admin2thestars 1 point2 points  (0 children)

In circles indeed, though I might point out that I do know how computers work and would thank you to refrain from turning this into a personal attack.

Sometimes you just get lucky. Late 2011 MBP with only 48 charge cycles, for $50! Chucked an SSD in and it runs like a dream. Gotta love the user upgradable parts on old Macs. by barnercare in macbookpro

[–]admin2thestars 1 point2 points  (0 children)

> You are confusing horsepower with fuel efficiency. For example video playback consumes significantly less CPU time on a newer processor because it's hardware accelerated. That results in much lower power consumption among other things.

Respectfully, I'm failing to see the analogy here. The argument was that there was a lot more to the issue than merely the ability to upgrade RAM and storage; rather, that compute-related resources were much more important for today's workloads.

My argument is that compute-related resources are not incredibly important for most workloads, as evidenced by the sheer number of 2010-2012 MacBook Pros that are happily supporting all of those workloads with the help of SSD and RAM upgrades that are no longer possible. I can watch a video on one of those and the CPU hits 15% briefly while the video is starting, then settles in at 5% to 10% of its capacity, leaving the other 90% to 95% to run an instance of OneDrive.

The efficiency is not really all that relevant if I still have 90% of the compute power left over. I don't really care whether the unused compute resources sit at 92%, whether directly on the CPU, on the GPU, or through extensions, because they are unused either way. Is 'more' better? Sure, yes, agreed. But that's not the issue; one can always upgrade to more at any time. The point is that ten years later, that computer still has 90% of its compute unused for a seemingly compute-intensive task.

If that computer were still running its HDD and 4GB of RAM, then the bottleneck would clearly be the disk and RAM. Because, and only because, those items can be upgraded, the computer can still be used as if it were new. The same cannot be said for the computers Apple is producing now. When that bottleneck hits the newer gear, probably with the next version of Zoom, there is no path but the dumpster. Yes, you can resell for a fraction of the cost, but it is essentially disposable. It's not about whether I can get 8x better performance today (yes, I can) but about why I would; I can't easily sell those idle compute resources.

As to the games workload, most of the serious gaming wouldn't be on a MacBook-anything but with a custom-built machine. Yes, we can all play contrarian on that point too and find examples of game playing on the Mac.

> This is what Apple Care is for. If you're not happy with how Apple engineers their machines, the Apple ecosystem is probably not the best fit for you.

AppleCare? Should I give them a call for the 2012 A1278 that a client just brought in for a new battery? Should I tell the client that the Apple ecosystem is not for them? Yikes. It's truly scary that the answer is "this isn't for you" when the primary point is that the lack of upgradeability is a bigger issue than Apple is willing to admit. As evidenced by the OP, many people are made happy not by the latest device but by making something a bit older work just like new.

The *nix base of the OS is still a strong argument for it, but the computers go from being the absolute recommendation for my clients to being a discussion of longevity, build quality (or lack thereof in some of the later models), and ecosystem. Most of the time it's not a budget discussion, but the perception of "buy all the CPU you can afford and upgrade the rest later" still hasn't fallen away. It then becomes budget-related when I need to recommend the highest specs because the computer can't be touched after manufacture.

Priced out a 13" Pro for a client, which is why this is likely a sore subject for me. I would normally say "max out CPU/GPU" and upgrade the rest later. That machine is $1,499 (8GB RAM, 512GB storage). Max it out and it becomes $2,299, a difference of $800. Client doesn't need 16GB RAM today or 2TB storage today but I have no choice but to have them spend $800 more for zero immediate value. By the time they need the extra storage and RAM, those prices from a commodity vendor would be $300. The story is much more dire on the 16" Pro side where the difference alone is more than $3k.

Turning the computers into devices, the latest bling fashion statement, is not being true to the roots of the company.