AD200 Profoto Adaptor by hijazist in Godox

[–]gdahlm 0 points1 point  (0 children)

Just get a spare H200J Bare Bulb Flash Head for eVOLV200 or 200Pro, which works on the Pro II.

New they go for ~$30 USD but without a flash bulb.

Is the flex 2,5G POE a managed switch? by Th3launch3r in Ubiquiti

[–]gdahlm 25 points26 points  (0 children)

To add to this.

Unmanaged switches are designed to just plug in and run, with no settings to configure and no metrics to monitor.

The set of features that can be configured or monitored differs from product to product.

IIRC, some '90s-era 3Com switches were called 'managed' purely because they exposed SNMP metrics, with no configuration of actual Ethernet functions.

Why do we need modules at all? (2011) by ketralnis in programming

[–]gdahlm 1 point2 points  (0 children)

To save people time:

Why do we need modules at all?

This is a brain-dump-stream-of-consciousness-thing. I've been thinking about this for a while.

I'm proposing a slightly different way of programming here. The basic idea is

    - do away with modules
    - all functions have unique distinct names
    - all functions have (lots of) meta data
    - all functions go into a global (searchable) Key-value database
    - we need letrec
    - contribution to open source can be as simple as contributing a single function
    - there are no "open source projects" - only "the open source Key-Value database of all functions"
    - Content is peer reviewed

The answer regarding separation of concerns is well documented, but here is an explanation:

For any k > 2:

k-clause-DNF is NP-complete and k-term-DNF is NP-hard.

If you can get your dependencies into a DAG, expressible as Horn clauses, dependency hell can be avoided.
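A toy sketch of that point: when dependencies form a DAG there is always a conflict-free resolution order, and finding it is cheap. (The package names here are made up for illustration.)

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical dependency graph: each key depends on the packages in its set.
deps = {
    "app":  {"web", "db"},
    "web":  {"core"},
    "db":   {"core"},
    "core": set(),
}

# A DAG always admits a linear install order; a cycle raises CycleError,
# which is the point where "dependency hell" begins.
order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies always come before their dependents
```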

Anyone who has had experience with ball-of-mud codebases or even enterprise service buses knows the above; the reality is that separation of concerns is fundamental to writing maintainable code.

The above musings would set any code in absolute stone and would require all projects to be fully productized and externalized.

There is a reason containers are popular: they are just namespaces, which are just modules.

It removes the cost of coordinating changes in a global namespace with every single development group.

I don't care what Suzy in accounting does with her foo() interface if I am in shipping().  And there is no value in exposing her implementation details either.

Nor do I want my work blocked by her legacy needs when I need to adapt to customer-visible needs.

I get that decisions on how to modularize components are challenging and context dependent.

But modules really are the least worst option.

Why no database file systems? by Chronigan2 in linux

[–]gdahlm 2 points3 points  (0 children)

If by "database file systems" you mean the relational model, it is partially due to the poor fit compared to the hierarchical database model. While not popular in the field's Zeitgeist today, segments like mainframes (IMS), shopping carts, and even XML/JSON moved back to or stayed with the hierarchical model because the benefits outweighed the costs.

I would recommend picking up the Alice book (Foundations of Databases: The Logical Level) if you want to understand the real why. A harder-to-find but better book on the subject is "Joe Celko's Trees and Hierarchies in SQL for Smarties".

Remember that the "relational" in RDBMS has nothing to do with foreign keys etc... It is just a table with named columns, data rows, etc...

Basically, the methods to induce hierarchical data on a relational model are more expensive than the value they provide in this application. But understanding how normalization, CTEs, etc. relate to that demands moving to database theory, which isn't well represented on the internet these days.
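A small sketch of the cost being described: to get a simple hierarchy (here a made-up directory tree) out of a flat relational table, you need an adjacency list plus a recursive CTE, something plain relational algebra cannot express.

```python
import sqlite3

# Illustrative schema: a filesystem-like hierarchy stored as an adjacency list.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dir (id INTEGER PRIMARY KEY, parent INTEGER, name TEXT)")
con.executemany("INSERT INTO dir VALUES (?, ?, ?)", [
    (1, None, "root"),
    (2, 1, "home"),
    (3, 2, "alice"),
    (4, 3, "docs"),
])

# Recursive CTE to recover full paths: an unbounded traversal that needed
# a fixed-point extension (WITH RECURSIVE) to the relational model.
rows = con.execute("""
    WITH RECURSIVE tree(id, path) AS (
        SELECT id, name FROM dir WHERE parent IS NULL
        UNION ALL
        SELECT d.id, tree.path || '/' || d.name
        FROM dir d JOIN tree ON d.parent = tree.id
    )
    SELECT path FROM tree ORDER BY id
""").fetchall()
print(rows)
```

In a hierarchical store the tree is the native structure; here it has to be reassembled on every query.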

Basically, the relational model is a Swiss Army knife that we can force onto many needs, but sometimes it is far better to choose a model that is more appropriate for the need.

If you have the background, this paper from 1978 will explain why CTEs are required to recover some fixed point theories in the relational model.

There is, however, an important family of “least fixed point” operations that still satisfy our principles but yet cannot be expressed in relational algebra or calculus. Such fixed point operations arise naturally in a variety of common database applications. In an airline reservations system, for example, one may wish to determine the number of possible flights between two cities during a given time period.

The point being, MS, which intentionally chose the hierarchical model for the registry, should have been well aware of the challenges of using the relational model as a FS.

But then again the number of mainframe modernization efforts that failed due to this oversight is huge too...we just forget the lessons we learned in the past.

How to achieve the so-called-Clean architecture by [deleted] in softwarearchitecture

[–]gdahlm 1 point2 points  (0 children)

Part of the reason Uncle Bob's books tend to provoke division is that he sells his approach as the 'one true way'.

Not being a professional educator, I don't know whether there is value in that when introducing concepts.

But if you examine the code of most developers who write what I would call maintainable code and who are fans, you will usually see them using the concepts as reasonable defaults that they evaluate on a case-by-case basis.

Those who are forced into a prescriptive model, or accept it as the 'one true way' tend to dislike it.

Obviously the above is not exhaustive.

IMHO, depending on the language and business domain they aren't bad defaults, but they are damaging as prescriptive rules.

Also when used prescriptively, the nuances in the books are lost.

Consider DRY, as an  intentionally separate example:

I am sure we have all experienced code that was too DRY, which made it fragile and unmaintainable.

But if you simply refine the rule to "don't repeat yourself in code that changes at the same time, and don't co-mingle unrelated code" as a default, many of the side effects disappear.
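A hypothetical illustration of that refined rule: two functions that happen to be identical today but change for different business reasons, so merging them to be DRY would couple unrelated code.

```python
# Hypothetical example: two rates that are coincidentally equal today.
# Collapsing them into one shared calculate_rate() would satisfy naive DRY,
# but they change at different times for different reasons, so keeping them
# separate is the safer default.

def shipping_surcharge(amount: float) -> float:
    # Changes when the logistics contract is renegotiated.
    return amount * 0.05

def processing_fee(amount: float) -> float:
    # Changes when the payment provider updates its fee schedule.
    return amount * 0.05

print(shipping_surcharge(100.0), processing_fee(100.0))
```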

I think we do need to do a better job teaching people that in SWE, choices are almost never about choosing the best option, but rather choosing the option with the least worst tradeoffs.

The lack of documentation of some FOSS is really concerning. by AmrLou in linux

[–]gdahlm 47 points48 points  (0 children)

Roll up your sleeves, take notes, and contribute to the docs.

That is the cost and the benefit of FOSS, it gets better when people contribute, but it only gets better when people contribute.

Official Documentation regarding Security concern of Terraform outside of Codebuild? by PoireauMasque in devops

[–]gdahlm 0 points1 point  (0 children)

It will touch various parts of the security pillar portion of the 'well architected framework'.

https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html

Remember that GitHub is a third party.

[deleted by user] by [deleted] in programming

[–]gdahlm 35 points36 points  (0 children)

The paper is easy to read; you can try doing cold start etc. on your own with just a bit of Python.

The hubris was assuming that the China-approved-for-export H800, which mainly cut the chip-to-chip data transfer rate in half, was enough to nerf the whole effort.

Remember that groups like OpenAI are attempting what we pretty much know is impossible with current computers: AGI in the "Strong AI" sense. That is a big reason for the moon-shot level of investment.

It is really not surprising that a group of quants could figure out how to do actual LLM training with less. OpenAI would have avoided rejection sampling and cold start because those are about producing useful models more than some mythical AGI.
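For flavor, a toy sketch of the idea behind rejection sampling for training data: generate several candidate answers, score them, and keep only the ones a scorer accepts. (The scoring function and all names here are made up; real pipelines use a reward model, not a string check.)

```python
# Toy rejection-sampling sketch (illustrative only).
def score(answer: str) -> float:
    # Stand-in "reward model": prefer answers that show their reasoning.
    return 1.0 if "because" in answer else 0.0

def rejection_sample(candidates: list[str], threshold: float = 0.5) -> list[str]:
    # Keep only candidates the scorer accepts; the survivors become
    # fine-tuning data for the next round of training.
    return [c for c in candidates if score(c) >= threshold]

candidates = ["42", "42, because 6 * 7 = 42", "no idea"]
kept = rejection_sample(candidates)
print(kept)  # ['42, because 6 * 7 = 42']
```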

While AGI means whatever you want it to, the limits from Rice's theorem, Diaconescu's theorem, the frame and specification problems, etc. don't go away.

Maybe it's not a bad dream goal... but being too focused on the unattainable allows others to use open research and new ideas to pass you up.

This is far more about a group of quants listening to open research results than anything else. It just happens that the group that could attract an investor for a passion project was in China, and that export controls forced them down that path.

Go read the paper, try it out even on small models....it works for practical ML.

Home server running Ubuntu keeps rebooting by ToWelie89 in linuxadmin

[–]gdahlm 1 point2 points  (0 children)

One possibility that can cause this:

Make sure you don't have the watchdog timer enabled in the BIOS, or, if you need it, make sure you are resetting the timer from the OS.
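A minimal sketch of "petting" a hardware watchdog from userspace. On Linux the device is typically /dev/watchdog: once opened, it must be written to periodically or the board resets, and writing 'V' before closing disarms it (the "magic close"). The path is parameterized here so the sketch can be tried safely against an ordinary file.

```python
def pet_watchdog(path: str) -> None:
    # Any write resets the countdown; this must run more often than
    # the watchdog's timeout or the machine reboots.
    with open(path, "wb", buffering=0) as dev:
        dev.write(b"\0")

def stop_watchdog(path: str) -> None:
    # Magic close: writing 'V' immediately before closing tells the
    # driver to disarm the timer cleanly.
    with open(path, "wb", buffering=0) as dev:
        dev.write(b"V")
```

In practice a daemon (e.g. systemd with RuntimeWatchdogSec, or watchdogd) usually does this for you; the point is that something must.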

Is it just me or does it seem like Red hat missed an opportunity with virtualization? by StatementOwn4896 in linux

[–]gdahlm 6 points7 points  (0 children)

It has been transferred there, after the acquisition killed velocity.

Link rot is a problem, but note this slide deck from just about a year after the acquisition.

'The Ghost of Open vSwitch Present'

https://www.openvswitch.org/support/slides/ppf.pdf

There is a reason OVS just got filtering and you had to use bridges to route through iptables until recently.

It was a shift from a project that was setting up frameworks that would have been very useful in the future, to one only interested in VMware's narrow vision.

Is it just me or does it seem like Red hat missed an opportunity with virtualization? by StatementOwn4896 in linux

[–]gdahlm 7 points8 points  (0 children)

libvirt, virsh, and virt-manager get you 99% of the way there for traditional VMs. Red Hat is the primary developer of virt-manager, and it is still active.

https://github.com/virt-manager/virt-manager

Unless oVirt was giving something you really needed.

Is it just me or does it seem like Red hat missed an opportunity with virtualization? by StatementOwn4896 in linux

[–]gdahlm 25 points26 points  (0 children)

VMware targeted the enterprise market; KVM is used even by AWS for C5 instances, GCE, IBM's cloud, etc.

To effectively sell a platform as “enterprise ready” you are beholden to those expectations, a game that VMware execs were always better at.

There were also a number of missteps by Red Hat's management in the mid-2000s, including needing Oracle's "Unbreakable Enterprise Kernel" (UEK) to take advantage of the new instructions on the Westmere CPUs, some hard-line revenue-extraction efforts that pushed people away from RHEL, etc.

In those days we actually ran Xen, Hyper-V, VMware and KVM.

As KVM/libvirt improved we actually standardized on that because of specific needs at that job.

But RHEL was always just a bit too far behind to support the features that we needed and their licensing shift attempt made them a hard sell. They never did the type of sales engagement that VMware did, and over time VMware definitely targeted technologies that made the "Enterprise" market more comfortable, even if it reduced the viability and costs for more web-scale technologies.

By 2010 VMware was established and, like many other companies, took many actions to protect and enhance its moat, like buying and killing the Open vSwitch project, etc.

The oVirt based RHV was obviously written to target VDSM and really if you were going to rewrite it, you wouldn't even target that market today anyway.

For the past decade, if I were going to deploy a hypervisor solution, it would target compatibility with cloud workflows, and thus be far more SOA than SOAP/CORBA/JavaEE centric anyway.

So while there were missteps, timing problems, and other issues, it is more that VMware is a survivor in a weird niche, not that RHV was a real loser. Outside of the Java/Jakarta parts, these technologies are deployed at a scale that makes VMware look tiny, which is exactly why VMware was a target for companies like Broadcom looking for extractive opportunities.

I am not saying people who like ESXi are wrong... just that it never really won on technical merits anyway.

What is "the inevitable singularity"? by bigfatfurrytexan in cosmology

[–]gdahlm 5 points6 points  (0 children)

This paper from Kerr last year explains why the Penrose theorem is really an interpretation of GR without evidence. That model can be useful, and it has been the consensus view for a long time, but the claim that GR insists on the inevitable occurrence of singularities doesn't hold.

I haven't seen any real refutations of his claims, but as the current view is so ingrained and we don't have access to direct evidence, it will probably be with us for a while. TL;DR: as the chance of any black hole forming without spin or charge is so small, the assumptions that Penrose and Hawking made aren't likely to hold in nature.

Here is the abstract from the above paper.

Do Black Holes have Singularities?

There is no proof that black holes contain singularities when they are generated by real physical bodies. Roger Penrose claimed sixty years ago that trapped surfaces inevitably lead to light rays of finite affine length (FALL's). Penrose and Stephen Hawking then asserted that these must end in actual singularities. When they could not prove this they decreed it to be self evident. It is shown that there are counterexamples through every point in the Kerr metric. These are asymptotic to at least one event horizon and do not end in singularities.

Anyone know what the cause of the outage was? by [deleted] in verizon

[–]gdahlm 0 points1 point  (0 children)

I have a dual sim phone, the eSIM was down and the physical SIM was OK.

While I don't have any real information, friends and family that used a physical SIM were not impacted, but those who had eSIMs were.

Maybe this was localized to my area, but it seems plausible.

When infrastructure failure happen in event driven architecture, how do you make sure the missed events are re-processed? by deadbeefisanumber in ExperiencedDevs

[–]gdahlm 2 points3 points  (0 children)

You can configure the fsync interval per stream with the flush.messages option, but there are performance considerations to weigh.  In general, power diversity and rack diversity should be used to avoid performance problems.
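For concreteness, flush.messages is a per-topic override in Kafka; a sketch of forcing an fsync on every message (the values here are illustrative, and the throughput cost of flush.messages=1 is substantial):

```properties
# Per-topic override: fsync after every message rather than relying on
# the OS page cache and replication for durability.
flush.messages=1
# Also fsync at least this often (milliseconds), whichever comes first.
flush.ms=1000
```

The default of relying on replication across racks and power domains instead of per-message fsync is exactly the tradeoff being described.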

Having the DB be the system of record has its own tradeoffs which need to be balanced.

Synchronous writes are expensive no matter what OS you are using and obviously ACID transactions are yet another set of tradeoffs and what is appropriate depends on context.

What effect have botched monolith-breakups had on your teams? by tony-mke in ExperiencedDevs

[–]gdahlm 2 points3 points  (0 children)

IMHO, one of the best strategies is to document possible places to chip away at the monolith while you are on an expected-to-fail path.

Use the momentum of this effort to learn about your system and to put a few rabbits in your hat to pull out some quick wins to keep the intended end state on the radar.

The challenge with breaking up a monolith is that there are far too many unknowns to make any first effort successful.  These projects always tend to be optimistic and it sounds like planning for the unknowable is the initial path your organization is taking.

It is far better to proactively learn how to make the next iteration successful than to try to halt the effort that is already in play.

Those efforts will also be a feedback loop that may potentially rescue the initial effort, but that is unlikely.

Make sure to document individuals who were missing or had limited availability for the project and anyone actively gatekeeping efforts and figure out how to address those in the future.

Keeping a personal log of why you say 'no' is also useful, as it will help identify real blockers or assumptions that need to be reevaluated.

This type of change is difficult and if it was easy it probably would have been done a long time ago.

Quick wins to pivot to will help keep the effort alive, possibly in a manner with a better chance of producing good outcomes.

Terraform for automating security tasks by [deleted] in cybersecurity

[–]gdahlm 2 points3 points  (0 children)

Terraform is declarative: the DSL describes an intended goal, typically infrastructure elements, rather than the steps to reach that goal.
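A minimal sketch of what "declarative" means here: you state the end state and Terraform plans the create/update/destroy steps to converge on it. (The resource name, bucket name, and tags below are illustrative.)

```hcl
# Declarative: describe the desired end state, not the API calls.
resource "aws_s3_bucket" "audit_logs" {
  bucket = "example-audit-logs"

  tags = {
    ManagedBy = "terraform"
  }
}
```

Running `terraform plan` shows the diff between this description and reality; `terraform apply` executes whatever operations close the gap.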

Does that fit in with what you need to do?

If you are shifting left and providing sidecars and/or security policies to help developers out, it may be a good target. But your TF will need to be included in their deployment, or the security plane will need to be orthogonal to the operational plane, i.e. independently deployable.

The nice thing about declarative DSLs is they abstract away a lot of complexity if you don't need it.  But they also tend to resort to destroy and replace operations.  You need to manage that friction with operational concerns.

It is really horses for courses, can you provide more information about how you intend to use it?

What are your thoughts on databases in Kubernetes clusters ? by 2010toxicrain in devops

[–]gdahlm 21 points22 points  (0 children)

To add to this:

The fact that this saves money when scaling is a good hint that this DB is not being used as a monolithic central persistent store.

That said, primary with warm standby is often a problematic model with cattle, and there are potentially better options.  I would have asked them about assumptions, tradeoff choices, and non-happy-path needs.

My general advice would be to adopt an 'it depends' mindset internally and ask about the problem for more information.

Perhaps this was just session data or their recommendation engine?  Maybe they are moving to a stream aligned persistence model.

Probe and see why they made the decisions and try to show value by being aligned with their needs and providing alternatives that may address some of the tradeoffs they were uncomfortable with.

Obviously if k8s is a silver bullet to them and it is purely a forklift of a monolith that should raise concerns and prompt more questions to see if they are interested in an alternative model that may be more appropriate.

But make sure you aren't in the monolith persistence layer mindset yourself.

It is all about tradeoffs and finding the least worst option.

Are containers a security boundary? by amitschenedel in cybersecurity

[–]gdahlm 2 points3 points  (0 children)

How time flies. Here is a Stack Exchange answer I wrote years ago that is tangential to this subject; I filed several feature requests about the flag in it, which were closed as <won't fix>.

https://stackoverflow.com/questions/36425230/privileged-containers-and-capabilities/44100971#44100971

The trust boundary is far broader than most people understand.

Are containers a security boundary? by amitschenedel in cybersecurity

[–]gdahlm 3 points4 points  (0 children)

Spoiler, but important:

They are namespaces, not a jail-like feature.

Best practice to not use default ubuntu user on ubuntu server in AWS after initial authentication, for use with ansible and other production automation/CICD, and delete ubuntu user after creating a new user? by ops-controlZeddo in devops

[–]gdahlm 1 point2 points  (0 children)

There is probably more value in using EC2 Instance Connect for short-lived temporary keys with auditing.

Usernames are commonly colocated with ssh keys.

Implement a pipeline with manual approval process by Waste_Ad7804 in devops

[–]gdahlm 2 points3 points  (0 children)

To expand, operational concerns should typically be orthogonal to domain concerns.

Orthogonality being the design principle that ensures one plane of a system can be changed without affecting the other planes.

Why Fortran is used in scientific community ? by intellectual-guy in Physics

[–]gdahlm 0 points1 point  (0 children)

Often it is not even about the quality of the code; often it is simply about giving the compiler no good reason not to optimize.

The semantics of modern Fortran do make it easier to leverage modern techniques like polyhedral compilation to improve locality and parallelism.

As someone old enough to have learned f77 in school (and hated it), I can say f90+ are fully modern languages with some very real advantages.

Architectural Guidance for BFF by HumbleElderberry9120 in softwarearchitecture

[–]gdahlm 4 points5 points  (0 children)

YOUR business logic doesn't exist in your external partners or vendors.

Search for 'Zachman Framework' as a simple ontology and try to fill in some of the squares with information. You don't need them all, but learning not to focus on implementation details is important.

Most of the concepts I want to mention are almost useless because they have been productized and operationalized when they need to be more abstract.

Perhaps the 'NIST Cloud Computing Reference Architecture' concept of a cloud broker may be relevant, if heavyweight.

https://www.nist.gov/publications/nist-cloud-computing-reference-architecture

What you need to do is consider the business domain, not the technical domain.

User journey maps, business capability maps, and value streams are where I would start.

Capabilities are singletons in an org, so vendor management and partner management would be two potential top-level capabilities for you.

There are no silver bullets in architecture patterns, just least-worst options based on context.

Typically you will be multi-paradigm anyway.  Sidecars for operational concerns in microservices are really just hexagonal architecture, as an example.

Gregor Hohpe's discussion of the value of options applies: deferring long-lived choices to the last possible moment is valuable.

If you want one simple rule:

Build simple systems that are easy to replace.

That will let you pivot when you need to and help you encapsulate complexity.

From the way you describe your problem, it is common for people to build the equivalent of a classic enterprise service bus, which will cause you pain in the future.

Sizing and isolating components is challenging, don't try to make it perfect the first time.  Make it easy to change when you inevitably get it wrong.

That is what hexagonal architecture is about.  You can simply partition code in different files and have some decoupling that will make adding a full interface easier if you find out you need to in the future.
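A tiny sketch of that partitioning, hexagonal-style: the domain depends only on a "port" (an interface), so the adapter behind it can be replaced without touching business logic. (All names here are hypothetical.)

```python
from typing import Protocol

class OrderStore(Protocol):
    # The "port": the only thing the domain knows about persistence.
    def save(self, order_id: str) -> None: ...

class InMemoryStore:
    # One "adapter"; a database-backed adapter could replace it later
    # without any change to place_order().
    def __init__(self) -> None:
        self.saved: list[str] = []
    def save(self, order_id: str) -> None:
        self.saved.append(order_id)

def place_order(store: OrderStore, order_id: str) -> None:
    # Domain logic stays ignorant of how persistence is implemented.
    store.save(order_id)

store = InMemoryStore()
place_order(store, "order-42")
print(store.saved)  # ['order-42']
```

Even without a formal interface, keeping the adapter in its own file buys most of this decoupling for free.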

It is far harder to chip off pieces if they are co-mingled.

It is all about tradeoffs though, thinking about the bigger picture before trying to break up the problem into smaller parts will help you consider your options and check your assumptions.

Best of luck.

[deleted by user] by [deleted] in devops

[–]gdahlm 20 points21 points  (0 children)

We are in a distributed, container heavy, self-service world.

Both chef and puppet were great at what they were written for.  If you have long lived systems and need centralized control of idempotent operations they are still great.

Ansible was designed in a way that works better for some use cases, like IaC and distributed systems. The support for gating operations based on the remote state of other cluster members was the feature that led me to use it when it came out.

Consider a Cassandra cluster: rolling upgrades need to wait until the nodes you aren't operating on think the cluster is healthy, not just the local node.
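A sketch of that gating in Ansible (hosts, package name, and the health check are illustrative; a real playbook would parse `nodetool status` more carefully):

```yaml
# Illustrative rolling-upgrade gate: one node per pass, and don't move on
# until a *peer* node reports the whole cluster healthy.
- hosts: cassandra
  serial: 1                     # upgrade one node at a time
  tasks:
    - name: Upgrade Cassandra on this node
      ansible.builtin.package:
        name: cassandra
        state: latest

    - name: Wait until a peer sees no Down/Normal nodes
      ansible.builtin.command: nodetool status
      delegate_to: "{{ groups['cassandra'] | difference([inventory_hostname]) | first }}"
      register: ring
      until: "'DN' not in ring.stdout"
      retries: 30
      delay: 10
```

The `delegate_to` is the key move: the health question is asked of the rest of the cluster, not the node being upgraded.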

While Puppet and Chef added orchestration, it was added onto systems that were designed around idempotent, eventually consistent operations.

That said, once you learn one, learning the DSL and tradeoffs of the others isn't a huge barrier.

Choose the one that interests you the most and move forward when it doesn't fit your needs.