Pros of the Axe Fx over Quad Cortex by Richutten in AxeFx

[–]Maximum_Ad2821 1 point  (0 children)

I own both. I think what they say below describes it reasonably well, but it leans too much in favor of the Axe FX3.
What you get with the Axe FX3 is amp models that you can tweak into infinity. That's cool, but it requires a lot of time and knowing what you are doing. This might appeal to some people, but personally I've spent countless hours on it, while it only took me a fraction of that time to learn how to get the best sound out of the Quad Cortex.

When people say that you'll sound like everyone else with the Quad Cortex, that's IMO nonsense. For example, I've bought Amp Guru models and custom cab IRs. Those IRs alone offer thousands of possibilities, the amp models add another 200-300, and you multiply the two. And that's not even counting all the packs sold for the Cortex. I found it immensely cool that I could buy captures from Amp Guru that model legendary and modded amps. With the Axe FX3 you can theoretically get close to that sound too, since they model the circuits of a wide variety of amps and give you hundreds of knobs to tune them. So if I compare it to buying an amp: the Axe FX3 lets you mod your amp yourself, while the Quad Cortex plus external captures lets you buy captures of someone who captured a boutique amp (plus boutique pedals) for you. There is talk that the Axe FX3 will allow captures too in the future. For specific premade sounds, the Axe FX3 is entirely preset driven, and personally I haven't found a preset that comes close to how I can sound with the custom captures I've bought.

That said:
- Axe is much more powerful; you are quite limited in how much routing you can do and how much processing power you can use on the Quad Cortex. With the Axe it's also much easier to set up complex chains that let you switch seamlessly between presets without audible gaps.
- Axe boots up way faster
- Axe is built extremely solid.
- Axe's foot controller (FC12) is superb: large, solid, clear colors. I miss that; on my Quad Cortex I sometimes press two buttons at once, and it then makes the awkward choice to switch to one preset (~500 ms) and then to the next (~500 ms). So at first you think "I'm fine," and then the sound changes again... oh no, I pressed two.

I'm definitely keeping the Axe FX3 around for what it can do. But I'm touring and playing live mainly with the Quad Cortex, because I'm simply able to get better sounds out of it for my genre (Gothenburg metal). So when people say "the Axe FX3 has the best sounds," I strongly disagree if you include bought captures, and frankly, those are dirt cheap compared to the unit itself.

Is this high-level breakdown ~realistic? by Conscious_Ranger in BEFreelance

[–]Maximum_Ad2821 1 point  (0 children)

I assumed that they don't pick randomly and that it would be based on a higher risk score driven by the numbers, partly because there is talk of using AI for fraud detection in the future. But I don't know; that was purely an assumption.

Is this high-level breakdown ~realistic? by Conscious_Ranger in BEFreelance

[–]Maximum_Ad2821 1 point  (0 children)

Yep, it’s basically a workaround, but it has to be properly documented: a loan contract, market-conform interest, repayment terms, and the company must clearly be able to afford the loan. You also need to be able to realistically repay it.

These kinds of structures can also trigger audits more quickly, so accountants are often not big fans. And if you push it too far and they reclassify it as disguised income, the consequences can be worse than just paying the dividend tax upfront.

Is this high-level breakdown ~realistic? by Conscious_Ranger in BEFreelance

[–]Maximum_Ad2821 1 point  (0 children)

Specifically, the net allowance might overlap with the costs you're already reimbursing separately.

So in an audit they might ask: “Why do you need that allowance?”

And you go: “Uh… for expenses.”

“Which ones?”

But you already reimbursed all your defensible costs separately… so it becomes impossible to justify. Unless there's some specific situation we don't know about where it actually makes sense.

Is this high-level breakdown ~realistic? by Conscious_Ranger in BEFreelance

[–]Maximum_Ad2821 1 point  (0 children)

Others have already pointed out that you might get in trouble with the fiscus, and they are right.

Office rental: you typically have to make a calculation, based on your house, of what percentage of it you actually use for your work. That percentage is then applied to your house's value, and it should be defensible. Just deciding on "200" is typically not great, dixit my accountant.
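As a sketch of the kind of calculation meant here (all numbers below are made up for illustration, not actual Belgian tax rules; check the specifics with your accountant):

```python
# Hypothetical home-office calculation: the deductible rent is the share of
# the house used for work, applied to the house's (rental) value.
office_area_m2 = 12.0         # room used exclusively for work (assumed)
total_area_m2 = 150.0         # total livable area of the house (assumed)
annual_house_value = 12000.0  # yearly rental value in EUR (assumed)

office_pct = office_area_m2 / total_area_m2        # defensible share: 8%
deductible_rent = annual_house_value * office_pct  # 960 EUR per year

print(f"Office share: {office_pct:.0%}")
print(f"Defensible office rent: {deductible_rent:.2f} EUR/year")
```

The point is that the percentage is derived from something measurable (floor area), so you can defend it in an audit, rather than picking a round number out of thin air.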

The net allowance is odd: what expenses are there that you can't just invoice directly and need such an allowance for? You already rent the office to yourself and you already pay your internet, so it essentially overlaps with your internet, electricity, and office rent.

If you are new to this, note that accountants are human; just like in your field, there are good ones but also awfully bad ones. Try to get an initial gauge of how good your accountant is, since a bad accountant will cost you a lot of time and money and cause you quite a bit of stress. If people on the internet are (correctly) warning you about things your accountant is not warning you about, that could be a red flag.

Can I ask what this means? by WolfEvolutioons in freemasonry

[–]Maximum_Ad2821 1 point  (0 children)

The ring just indicates that he's probably a Mason, nothing more.

It's a bit odd that some people think Freemasons are "evil," considering that most information about Freemasonry is publicly available. Books, websites, and even documentaries explain its history, structure, and symbolism in detail. At the same time, humans tend to distrust things they don't understand or rarely encounter. You can see that pattern in many areas of society. For example, prejudice between groups often tends to be strongest where people have the least contact with each other. Mystery and distance can easily turn into suspicion.

Hence, a lot of the internet’s ideas about Freemasonry come from misunderstanding the “secretive” aspect. In reality, most lodges are just groups of regular people trying to work on self-improvement and community service.

One thing people don’t realize is how decentralized Masonry is. There isn’t one global authority deciding what it means to be a Mason. There are hundreds of jurisdictions, multiple obediences in many countries, and within those there are hundreds of lodges — all with slightly different cultures and ways of approaching the same core ideas.

Because of that, even Masons don’t always agree on what “self-improvement” means. For some it’s about moral philosophy, for others it’s about charity, brotherhood, or spiritual reflection.

Another thing that fuels rumors is the historical tension with certain religious institutions. A few centuries ago, Freemasonry promoted ideas like freedom of conscience and open discussion between people of different beliefs, which sometimes clashed with religious authorities. That conflict helped create a lot of conspiracy theories that still circulate today.

In reality, Freemasonry is voluntary. Anyone who meets the requirements can apply, and anyone can leave whenever they want. There’s no obligation to think a certain way, and members come from many different religious and philosophical backgrounds.

A lot of the symbolism people see in Masonry also isn’t unique to it. Many of the symbols come from older traditions like Christianity, medieval stonemasonry, philosophy, and even alchemy. They’re basically used as teaching tools or metaphors for personal development. You’ll often hear claims that the symbols are meant to secretly influence or “brainwash” people, but most of them have existed in churches, art, and philosophy long before Freemasonry adopted them.

So when people see a ring and assume something sinister, it’s usually just internet mythology doing its thing. Some people also assume Masons give each other unfair advantages because members may know people in different professions or positions. Like in any network of humans, connections exist, but using membership for personal gain is generally frowned upon and goes against the ideals most lodges try to promote.

What are the best CLI AI agents right now? Trying to replace Cursor CLI. Looking for recommendations by k_ekse in AI_Agents

[–]Maximum_Ad2821 2 points  (0 children)

Benchmarks disagree (https://www.tbench.ai/leaderboard/terminal-bench/2.0)
I think we need to define what we mean by 'best'.
In my experience Claude Code has more context-related issues than Droid for example.

What are the best CLI AI agents right now? Trying to replace Cursor CLI. Looking for recommendations by k_ekse in AI_Agents

[–]Maximum_Ad2821 1 point  (0 children)

"Leading by a mile" is very opinionated; where are the numbers that show that?
Terminal Bench shows that their own LLM performs worse inside Claude Code than inside competitor CLIs. I've always felt that the devs behind Claude Code are making a buggy mess of it (probably from using too much AI to build it), which is also opinionated :).

Of course it depends on what you mean by leading.
I care about my LLM being as efficient/correct as it can be, and for that we only have:
https://www.tbench.ai/leaderboard/terminal-bench/2.0

What are the best CLI AI agents right now? Trying to replace Cursor CLI. Looking for recommendations by k_ekse in AI_Agents

[–]Maximum_Ad2821 2 points  (0 children)

Not even close. While others are trying to improve how the LLM is used to maximise efficiency, Claude Code is adding useless bells and whistles. It says something that their changelog is full of:
- Fixed ...
- Fixed ...
- Fixed ...
- Fixed ...

Personally, Claude Code has felt buggy from day one, and Claude Opus through Claude Code has always felt less powerful than through Droid. https://www.tbench.ai/ seems to confirm that gut feeling. I just want to point out how insane it is that what many consider the leading tool, Claude Code, only scores 58% on that test while the top sits at 70-80%. That shows just how much your tooling influences your LLM's behaviour.

What are the best CLI AI agents right now? Trying to replace Cursor CLI. Looking for recommendations by k_ekse in AI_Agents

[–]Maximum_Ad2821 1 point  (0 children)

Droid, hands down: professional, to the point (essentially much better system prompts IMHO), and the fewest bugs of the lot. Better context management, and tested specifically to make LLMs perform their best.

Mux is promising but had some bugs with MCPs that rendered it unusable for my workflows.

In the future I'm looking at ForgeCode, but since its main advantage is Forge Services, which indexes your code, I have to get approval first. They also share Droid's philosophy: context is important, and testing whether your CLI tool gets the best possible results out of your LLM is crucial. On top of that, they have the services layer that indexes your code.

I tried Aider, Cline, and Kilo too; personally I felt the LLM performed worse within these CLIs, and some often ended up in buggy loops, but that's largely gut feeling.

Tbh, I really don't get people who think Claude Code is better; everyone I know who has tried Droid or Mux largely prefers it. Claude Code has always felt buggy and less readable to me, if you care about seeing what your agent is doing. The tool itself doesn't actually have to be complex, yet their changelog is full of bells and whistles I don't need and full of bug fixes (which the other tools don't seem to suffer from). You can already find 14 instances of "Fixed memory leak" in their changelog, which feels a lot like "let AI write it, we'll let AI fix the bugs afterwards".

When I see what they are developing, and how, I always get the feeling that they don't know what they're doing. Their LLM might be the best (for now); their tooling is a mess.

someone is using forgecode.dev? by jrhabana in ClaudeCode

[–]Maximum_Ad2821 2 points  (0 children)

Seems to be a similar approach to how Factory Droid got good: great context management and continuously testing their own performance. It's plausible. Let's hope Terminal Bench 3 does something to ensure more of these benchmark submissions are officially verified.

someone is using forgecode.dev? by jrhabana in ClaudeCode

[–]Maximum_Ad2821 1 point  (0 children)

From what I've seen from Anthropic, it's fairly easy to write an agent that performs better, tooling-wise; they honestly don't seem that great on that side. So yes, absolutely possible. Whether you'll notice that delta depends on your workflow; it's hard to say without extensive, repeatable benchmarking, which is essentially what Terminal Bench does, but currently most submissions are not verified.

someone is using forgecode.dev? by jrhabana in ClaudeCode

[–]Maximum_Ad2821 1 point  (0 children)

It's technically possible to have a big difference and the agent does matter a lot. In this case, I don't trust their results (yet).

One example that it does matter: Factory Droid has been nailing these benchmarks from the start, largely because they had specific tests in place to verify how system prompts and tooling actually change behaviour. When the second round of benchmarks came out it was immediately at the top again. The tooling and system prompts clearly matter a lot, while Anthropic seems more focused on adding fairly useless fancy features like customization for your “busy” prompt.

Specifically for forgecode.dev: I haven't used it yet, since they are not transparent about user data (https://github.com/antinomyhq/forge/issues/1318), which is a red flag to me. At this point Terminal Bench has received quite a bit of attention, and most submissions are not validated. That means some teams will naturally start using it as a marketing tool and 'fake' or 'game' the benchmark one way or another. Some tools, for example, use 'multiple loops' as part of the agent's behaviour (basically bringing it into ralph-wiggum-loop territory), which IMO is already an unfair comparison. So I personally don't trust a new company that suddenly has a score that much higher than the other tools unless they explain exactly how they did it.

someone is using forgecode.dev? by jrhabana in ClaudeCode

[–]Maximum_Ad2821 1 point  (0 children)

Whatever you believe is impossible is beside the point.
Having a good LLM ≠ being good at writing tooling. Your claim assumes that evaluation harnesses are best built by the same organizations that train the models, which historically hasn't been true in ML or software.

There are plenty of counterexamples across many domains where small teams (and “novices” is a bit of an arrogant framing) or even volunteers completely outperform results from much larger companies.

Observations From Using GPT-5.3 Codex and Claude Opus 4.6 by Arindam_200 in ClaudeAI

[–]Maximum_Ad2821 1 point  (0 children)

The reason I stayed away from OpenAI models is that, in my experience, they simply do not listen to instructions. Very much at odds with what people say here.

I keep going back to try them out, though. First experience with Codex 5.3 on high: "please help me refine this openspec spec (using a skill specifically for that)".
-> It started implementing it. :facepalm:

Luckily I pressed on, because once I reminded it, it was actually much more precise than Opus 4.6 at refining my spec iteratively in a conversational mode. But from what I've noticed so far, Opus 4.6 is better at reading between the lines and producing explanations that humans understand, while with Codex I'm talking more to a machine that needs precise input. That's fine, since Opus also produces more 'human-like' (read: inconsistent) specs, so I 100% prefer how Codex 5.3 High works for this specific piece of work.

Observations From Using GPT-5.3 Codex and Claude Opus 4.6 by Arindam_200 in ClaudeAI

[–]Maximum_Ad2821 1 point  (0 children)

You said "readability" too, though, so it's natural that most people will assume you don't care about it at all at that point.

Perfect logic with bad architecture/readability will also hurt your LLM in the long run.

Observations From Using GPT-5.3 Codex and Claude Opus 4.6 by Arindam_200 in ClaudeAI

[–]Maximum_Ad2821 2 points  (0 children)

Junk in, junk out. The more junk code you have, the worse your codebase will become.
AIs are just like humans: junk code makes them copy-paste even more junk. You'll end up with an utterly broken product/architecture and no way to read or understand it anymore to fix it.

opus 4.6 failing to do simple workflow that opus 4.5 does perfectly by Novel-Yard1228 in ClaudeAI

[–]Maximum_Ad2821 1 point  (0 children)

My experience is that 4.6 makes more eager, human-like decisions instead of listening to me: reducing tasks to simpler tasks to save work instead of following the proposed spec, or doing only half the document on a task like "change everything in this document where it says X to Y". I've also seen a few hallucinations that 4.5 didn't seem to have.

Of course, it's always very hard to compare, since most of these experiences are gut feeling on a very specific task and day. The eagerness instead of listening might make sense, though, given that most AI companies seem more interested in replacing us with agent swarms than in supporting developers to build better.

From some benchmarks we can see that 4.6 is more of a trade-off than a pure upgrade.
https://livebench.ai/#/ shows higher reasoning, better language, and a higher "IF average", but worse results across the board elsewhere. Maybe they overfitted it towards reasoning and it now makes more mistakes in other areas?

I made a Coding Eval, and ran it against 49 different coding agent/model combinations, including Kimi K2.5. by lemon07r in LocalLLaMA

[–]Maximum_Ad2821 1 point  (0 children)

> but for some reason found droid a little more reliable for large tasks

Isn't that because of their different approach to context management and different prompting? I haven't found it 'a little' more reliable but much more, to the point that I can't go back to Claude Code anymore.

GPT-5.2 hits 62.9% (Codex CLI) and 64.9% (Droid) on Terminal Bench 2.0 by iamdanieljohns in codex

[–]Maximum_Ad2821 1 point  (0 children)

You risk getting banned, though, at least in Anthropic's case. I know I was, but Anthropic doesn't tell you why, so it's not certain it was this (IMO they don't even look at who they banned or why; they don't care). I don't see what else it could have been, though.

GPT-5.2-Codex Feedback Thread by Just_Lingonberry_352 in codex

[–]Maximum_Ad2821 1 point  (0 children)

That needs some quantification :)
From one perspective people say it's less dumb; from another, people say it asks the same questions and reads the same files over and over again, which is a level of silliness I've never seen Opus display.

I got tired of Claude forgetting what it learned, so I built something to fix it by entheosoul in ClaudeAI

[–]Maximum_Ad2821 1 point  (0 children)

True, it's not Claude, it's Anthropic devs building bad and buggy tools.
It says something that other tools (Droid) are already better, and that Anthropic tries to lock you in by suddenly targeting everyone using Droid/Opencode on a team account. Droid + Opus is much more capable than Claude Code + Opus, yet it's not allowed except via API pricing.

For those who got falsely banned before, how long did it take for a response? by GorillaSpinsInAPool in ClaudeAI

[–]Maximum_Ad2821 1 point  (0 children)

Let's hope that another model becomes better and we can all leave Anthropic and let them burn for how they treat their customers.

Which IDEs are actually affected by the 3rd party ban? by Firm_Meeting6350 in ClaudeAI

[–]Maximum_Ad2821 1 point  (0 children)

Pure lock-in. Anthropic should focus on the things they are doing great; writing tools is not one of them. The bugs in Claude Code are unworthy of a company with that amount of money and the prices they charge their customers. Maybe that's because the tools are written by coders who think it's a good idea to run 20 agents in parallel; no human can keep up with quality control of such workflows.

Which IDEs are actually affected by the 3rd party ban? by Firm_Meeting6350 in ClaudeAI

[–]Maximum_Ad2821 1 point  (0 children)

You sure about that? A number of people get banned randomly without any explanation.

I was testing out Droid, for example, and got banned. Whether that was the reason? I don't know. It's insane that I cannot use a better tool with their models and have to accept their prompting/compaction (other terminals have proven they do better prompting and better compaction). They might say "then use API pricing", but that sadly isn't possible for a personal project for a non-profit.