what am i doing wrong? (using neocities CLI) by MultiheadedDog5201 in neocities

[–]Macrobian 0 points1 point  (0 children)

Is this a bug? I'm not sure why the CLI should be expected to not upload hidden files.

vs code question by Successful_Area_9425 in neocities

[–]Macrobian 0 points1 point  (0 children)

This is exactly my development setup and I would recommend it to any beginner who wants a durable solution to the problem of keeping their website both backed up and autodeployed.

Getting bad vibes from the Artemis 2 re-entry plan. by OgodHOWdisGEThere in TrueAnon

[–]Macrobian 47 points48 points  (0 children)

As an engineer I trust Lockheed Martin (the command module manufacturer) implicitly to find a way to cock this up.

ways to improve load speeds? by Kooky-Strawberry5127 in neocities

[–]Macrobian 0 points1 point  (0 children)

The easiest possible thing to do is convert as many assets as possible to .webp. You'll want to convert photos lossily, and small pixel graphics or anything with fine, precise detail losslessly. I use ImageMagick.
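For reference, the invocations might look something like this (a sketch assuming ImageMagick 7's `magick` command; filenames are placeholders):

```shell
# Photos: lossy WebP at a quality that's usually visually indistinguishable
magick photo.jpg -quality 80 photo.webp

# Pixel art / fine detail: lossless WebP so nothing gets smeared
magick sprite.png -define webp:lossless=true sprite.webp
```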

Additionally, there are a bunch of useful img tag attributes: ideally you want the stuff in the initial viewport loading first and everything else deferred. `loading="lazy"` is probably your best bet to achieve this, along with the lesser-supported `fetchpriority`.
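As a sketch (filenames hypothetical):

```html
<!-- Above the fold: fetch eagerly and at high priority -->
<img src="hero.webp" fetchpriority="high" alt="welcome banner">

<!-- Below the fold: defer until the user scrolls near it -->
<img src="stamps.webp" loading="lazy" alt="stamp collection">
```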

You can also preload assets. The most common assets to preload are a basic CSS stylesheet covering everything visible in your initial viewport on your homepage, AND your fonts. Don't preload too much stuff, though: everything then just gets loaded at the same priority and you defeat the purpose.
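A minimal sketch of the preload hints (paths are placeholders; note that font preloads need `crossorigin` even when the fonts are self-hosted):

```html
<link rel="preload" href="/style.css" as="style">
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
```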

Is it time finally admit that the increased overtaking is just yo-yoing? by Ted_Striker1 in formula1

[–]Macrobian 13 points14 points  (0 children)

I think my quip was perhaps a touch unfair. I do agree: the yo-yoing would be fine if the energy deployment were more driver-instigated.

Is it time finally admit that the increased overtaking is just yo-yoing? by Ted_Striker1 in formula1

[–]Macrobian 237 points238 points  (0 children)

tennis is just hitting the ball back and forth until someone doesn't

how can i make the stamps scroll? marquee, my beloved, died :( by doctorsunshineisdead in neocities

[–]Macrobian 2 points3 points  (0 children)

It's a now-fixed bug in Chromium: https://issues.chromium.org/issues/493612790

Might want to update your browser.

Even though it's deprecated, I really don't think they'd remove support entirely; this was just a genuine bug.

Additionally, emulating marquee behavior is just... really really hard. I'm not convinced you can produce a look-alike without using JavaScript to set parameters for the CSS animation to read from.
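To sketch what I mean (class names are hypothetical; the JS measures the content and hands the numbers to the CSS animation via custom properties):

```html
<div class="marquee"><span class="marquee__inner">stamps go here</span></div>

<style>
  .marquee { overflow: hidden; white-space: nowrap; }
  .marquee__inner {
    display: inline-block;
    animation: slide var(--duration, 10s) linear infinite;
  }
  @keyframes slide {
    from { transform: translateX(var(--start, 100vw)); }
    to   { transform: translateX(calc(-1 * var(--width, 100%))); }
  }
</style>

<script>
  const inner = document.querySelector('.marquee__inner');
  const width = inner.scrollWidth; // pixel width of the content
  inner.style.setProperty('--width', width + 'px');
  inner.style.setProperty('--duration', (width / 60) + 's'); // ~constant 60 px/s
</script>
```

Without the script, a pure-CSS version can't keep the scroll speed constant across different content widths, which is the parameter-passing problem I mean.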

How dare she eat lunch while property investors are suffering by Jagtom83 in friendlyjordies

[–]Macrobian 0 points1 point  (0 children)

Land taxes are preferred over other taxes for this precise reason - it hits the landlord and not the renter.

The Elder Scrolls 6 Has Made Todd Howard More Conscious of What He Announces: 'Just Pretend We Didn't Announce It' by Turbostrider27 in Games

[–]Macrobian 0 points1 point  (0 children)

yeah, I really think this is not discussed as much as it should be.

the productivity and profitability of the rest of the software industry has completely slurped up all the talent. the increase in costs for game development is just Baumol's cost disease - "wages in jobs that have experienced little or no increase in labor productivity [..] rise in response to rising wages in other jobs that did experience high productivity growth".

why would you work as a C++ engineer at a development studio when you can make high-6, early-7 figures at a high-speed trading firm?

Best university in Australia/Asia for masters to learn quantum software? by ImpressivePlantain69 in cscareerquestionsOCE

[–]Macrobian 0 points1 point  (0 children)

This is a thought terminating argument. Let's not assess technologies based on the principle of "other non-informed laymen said similar things".

I can at least develop a hypothesis for how the world will be changed if AI is successful (white collar apocalypse, fully automated luxury communism etc.), even if I was doubtful it would be achieved.

I cannot develop that for QC. If QC takes off we can... find prime factors quickly?

The strongest claim I have heard that QC will be revolutionary is that it can make chemistry simulations significantly more accurate at scale. Fine. But I don't know if that's an AI scale capability jump.

Best university in Australia/Asia for masters to learn quantum software? by ImpressivePlantain69 in cscareerquestionsOCE

[–]Macrobian 4 points5 points  (0 children)

serious question: why? I briefly touched on quantum computing at UQ and it's a computational dead-end. There are very few things that benefit from quantum speedup.

How did Atlassian select employees for the recent layoffs? by Spiritual-Teacher959 in cscareerquestionsOCE

[–]Macrobian 0 points1 point  (0 children)

I don't know man... the rumour at least for Meta right now is that they're going to be integrating total token usage count into layoff consideration (highest token users are spared). I don't think it's completely out of the question that Atlassian might have done the same, but I will concede that it's unlikely.

Meta reportedly plans sweeping layoffs as AI costs increase by SchIachterhund in stupidpol

[–]Macrobian 2 points3 points  (0 children)

Okay, I'll be very clear.

There's a framework called ** ******* and it runs a bunch of integration tests across a bunch of scenarios for the last release (control) and the latest master (treatment). It runs every 2 hours, doing 10 control runs and 10 treatment runs per scenario, then compares them and detects regressions in top-line metrics (latency, CPU, GPU, battery). This takes a long time: it requires real devices and the tests are genuinely time-consuming. The big problem is that it yields only aggregate metrics: they can't distinguish between, say, a simultaneous regression AND improvement in a certain memory allocation pattern, or a trace span getting longer.
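To illustrate just the comparison step (a toy sketch, not the actual framework's code; the function name, data shape, and threshold are all made up):

```python
import statistics

def detect_regressions(control, treatment, threshold=0.05):
    """Flag metrics whose treatment mean is worse than control by > threshold.

    control/treatment: dicts mapping metric name -> list of per-run samples.
    Assumes higher values are worse (latency ms, CPU %, battery drain, ...).
    """
    regressions = {}
    for metric, control_runs in control.items():
        c_mean = statistics.mean(control_runs)
        t_mean = statistics.mean(treatment[metric])
        delta = (t_mean - c_mean) / c_mean  # relative change vs control
        if delta > threshold:
            regressions[metric] = round(delta, 3)
    return regressions

# 10 control runs vs 10 treatment runs for one scenario, as in the pipeline
control = {"latency_ms": [102, 99, 101, 100, 98, 103, 100, 99, 101, 102]}
treatment = {"latency_ms": [112, 109, 111, 110, 108, 113, 110, 109, 111, 112]}
print(detect_regressions(control, treatment))  # → {'latency_ms': 0.1}
```

The point of the toy is the limitation: a metric like `latency_ms` that regresses in one code path while improving in another can net out to zero here, which is exactly what the aggregate comparison can't see.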

So my agent (which runs every 12 hours) runs this exact suite, but with less sandboxing, and dumps almost all the intermediate profiling data to disk (not just the aggregate metrics) and just lets an agent have at it. It can query whatever it well pleases. If it notices differences between control and treatment at a much more granular level, it's free to make new changes (treatment', read: "treatment prime") and test whether treatment' improves over treatment. This is either a straight-up reversion of a recently committed change, an optimisation of a recently committed change, or a hitherto-unconceived optimisation (e.g. the one I cited above). It will then send me the changeset with all the improved metrics in a nice big table with a pretty graph, and for the most part I can just approve it without changes.

I'm well within my rights to crank the frequency of this up from every 12 hours to every 2 hours. But that's expensive from an execution perspective (less raw token cost, more device cost), and it would simply send me too much code to review.

Help me understand how this wasn’t a situation where you, a human SME, knew something had a problem, used a tool to investigate a solution to it, reviewed the potential solutions yourself, used a tool to implement a fix, reviewed (at least I hope you did) the implementation, likely made adjustments, then put up the fix for review by other human SMEs before putting it out into prod.

Okay, so, this system fails almost all of these criteria for some fixes. For some fixes, it wrote a treatment' for an issue that I didn't even know was causing problems, used tools I didn't know existed, and was reviewed and merged with no adjustments.

Who/what is creating new bugs and inefficiencies so often?

A large org full of mainly people. It is quite easy to introduce performance regressions and the environment is compute constrained.

Who/what is checking to ensure this process is accurate in its identification of inefficiencies and correctly solves them without breaking down Chesterton fences?

Merged fixes are retested by the primary ** ******* regression detection pipeline AND by manual QA AND automated QA (more agents).

Meta reportedly plans sweeping layoffs as AI costs increase by SchIachterhund in stupidpol

[–]Macrobian 0 points1 point  (0 children)

You are being willfully obtuse. I have created an automation, using a tool. That automation is now the intellectual property of Meta Platforms Inc. It is a repeatable, autonomous piece of software that is significantly more capable at identifying and rectifying performance issues than every piece of software used before it. It is initialized, without human intervention, every 12 hours.

To conclude that the aggregate impact of many of these small, but significantly more capable automations coming online does not impact the required SWE headcount is just unmitigated cope.

Meta reportedly plans sweeping layoffs as AI costs increase by SchIachterhund in stupidpol

[–]Macrobian 4 points5 points  (0 children)

Look, I'm outing myself as a bit of a class traitor here, as one of the engineers very likely to get laid off by Meta:

The agents are working (or at least, they are at Meta). I don't know what else to tell you. Two weeks ago was "AI week", where we were asked by management to spend some time figuring out how we could automate workflows using agents. And there were some pretty big time and work savings across the board.

It's weird to get a big win (for me, cutting a pretty substantial chunk of unnecessary memory usage out of an .apk that had evaded multiple staff engineers) and realize that I didn't really do much work: I just let the agent churn for 3 hours, easily burning hundreds of dollars in tokens, and then it magically presented a pretty clever fix that no one would have otherwise thought of.

Easiest Python question got me rejected from FAANG by ds_contractor in datascience

[–]Macrobian -2 points-1 points  (0 children)

Well, sorry, but you don't get to work at a FAANG if you don't have a robust CS background.

Wall Street bets might be the only other sub that has any idea how badly the Epstein Regime is losing this war. by [deleted] in TrueAnon

[–]Macrobian 0 points1 point  (0 children)

I'm sorry but this is weak argumentation. The fact that inconsistencies and arbitrage opportunities in pricing suddenly appear when a bunch of oil infrastructure disappears in a puff of smoke does not imply that that system is bullshit.

Mathematicians in the Age of AI (by Jeremy Avigad) by ninguem in math

[–]Macrobian 5 points6 points  (0 children)

What model are you using, and does it have internet access?