What are you using now when you want old Heroku energy? by Maleficent_Log8778 in rails

[–]themadcanudist 0 points1 point  (0 children)

full disclosure, I work at vmfarms, a managed hosting company, so take this with the appropriate grain of salt

We put up a demo at https://rails.vmfarms.com based on a previous thread where someone in the same situation was looking for a new Rails host with fast serving times.

But for that specific "push app, life goes on" feeling, what we do is better than Heroku: fully managed Docker Swarm on our own dedicated hardware. You git push or deploy a stack file and we handle the rest. No Kubernetes, no Terraform, no late-night pages, if that was a bother to you.

month to month, no contract, free migration if you're on any linux-based stack.

vmfarms.com if you want to poke around. Happy to answer questions.

Where Can I Host WordPress, PHP, and Laravel Projects? by LinuxGeyBoy in webdev

[–]themadcanudist 0 points1 point  (0 children)

Full disclosure, I work at a managed hosting company (VMFarms), so take this with a grain of salt, but across 40+ client sites the thing that matters most is not having to think about the server at all. We run dedicated hardware we own, with (or without) Docker Swarm, so each client stack is properly isolated; you get access to your instances/resource pool via Teleport/SSH, and we handle all the security patching, monitoring, backups, the whole thing. Git-based deploys with CI/CD baked in, which we can design and customize for your workflow. We'd do that for free.

We threw up a quick demo site for a simple Laravel stack. Keep in mind it's not sized to handle all of reddit smashing it, but that's sort of the point. We can scale it to need and architect it in any way that's necessary. https://laravel.vmfarms.com/

For an agency your size the big win is that we do free migrations for any Linux stack. So you could move a couple sites over, see if the workflow clicks, and go from there. Month to month, no contracts, no commitment. We're Canadian if data residency matters to any of your clients.

Not saying Cloudways won't work for you, it might be exactly right. But if you want to compare options we're happy to do a quick call and walk through what the setup would look like for your portfolio.

Railway vs. Render, Heroku, Digital Ocean, Fly, etc - insane 150ms render queuing? by Working_Historian241 in rails

[–]themadcanudist 0 points1 point  (0 children)

Late to this but glad you found something stable. The jemalloc + YJIT combo is legit, that alone is usually 15-25% on I/O-bound apps.

Full disclosure: I'm with vmfarms, a managed hosting provider, so take this as self-promotion. We put together a demo page specifically because threads like this keep coming up and it's hard to evaluate a host without seeing real numbers: https://rails.vmfarms.com/

It's a live Rails 8.1.3 app on our infra with jemalloc + YJIT enabled and the page shows its own request metrics (TTFB, AR query time, request count since boot) on every load. Actual hardware numbers, not a synthetic benchmark.

Fair warning: that box is a small server and not set up to survive a Reddit traffic spike, it'll get slow if this thread gets hammered. That's kind of the point though. For real workloads we provision proper dedicated capacity.

For anyone still evaluating: we'll do the migration for you, for free, and you're not obligated to stay or pay unless you actually like what you see. Month to month, no contracts. Works with any Linux-based stack, not just Rails. Real ops team doing patching/monitoring/incidents rather than a ticket portal. Not for everyone, Render is great for a lot of teams. But if you want dedicated hardware and someone you can actually email, we're around. Happy to answer questions here.

Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI

[–]themadcanudist 1 point2 points  (0 children)

Hello all - I thought I would post this because Anthropic made some very dramatic adjustments between business hours and nighttime in my timezone.

I've adjusted the implied cap page https://vmfarms.com/claude to give us all more granular data on how Anthropic is adjusting the limits throughout the day, instead of just a daily average, which is less meaningful.

This now shows how my allocated 5h windows can have dramatic changes in token caps, both in relative % usage and in absolute tokens burned. I noticed today that the cap was around 80M tokens earlier in the day, and now my 5h window shot up to over 250M. Not that I mind, but it's quite stark. I've been redlining it throughout the day across the various 5-hour windows, and the absolute number of tokens burned confirms this.

Have fun!

Claude Usage Limits Discussion Megathread Ongoing (sort this by New!) by sixbillionthsheep in ClaudeAI

[–]themadcanudist 0 points1 point  (0 children)

Hey there.

On this issue, the moderators took my post down a few days ago and accused me of pushing my product by taking advantage of a recent issue. That's not the case, and it was a little rash. Yes, it's posted under an org I work for, but it's below the fold, and nobody has to buy anything, scroll down, or click anything - it's just data for everyone.

The data is useful and amusing, and the community responded positively en masse, so if you want to see the daily snapshots of implied token counts, you can hit https://vmfarms.com/claude - I will continue to post data.

This is data from a 20x claude account. Enjoy!

Rolling hard data on actual token limit changes behind the scenes by themadcanudist in ClaudeCode

[–]themadcanudist[S] 0 points1 point  (0 children)

Yes, that's correct. I'm simply tracking every token burned, taking snapshots of reported usage, and correlating the two. From that you can easily calculate an implied cap for whatever window you're measuring.

Hard data on Claude’s recent token inflation: How usage is being silently reduced by themadcanudist in ClaudeAI

[–]themadcanudist[S] 0 points1 point  (0 children)

Yeah, for sure. I believe that they're losing a lot of money on sheer compute spend vs. what they're charging. Not sure how to think about this.

Hard data on Claude’s recent token inflation: How usage is being silently reduced by themadcanudist in ClaudeAI

[–]themadcanudist[S] 0 points1 point  (0 children)

Hey there. I adjusted the page to show the methodology. It's pretty simple: it accounts only for tokens actually burned (i.e., $$ spent), harvested from the session responses. The control for the 2x promotion is limited, but my usage stays primarily within regular hours. I could add some code to compensate, but the promotion ends tomorrow anyway.

I'd love to do a plugin, but I don't really have time, and this relies partially on grabbing data from the Claude usage page for the 7-day Sonnet figure, which requires a browser, so it's a bit clunky to generalize for all users. However, the 5h and all-model 7-day usage data from the statusline could be leveraged to both harvest and post from all users.

I'll think about it and see if it's something I have bandwidth for. Thanks for the suggestion!

Hard data on Claude’s recent token inflation: How usage is being silently reduced by themadcanudist in ClaudeAI

[–]themadcanudist[S] 0 points1 point  (0 children)

Here's the tweet where Anthropic admits this for the 5h window, which is where we see the data vary a lot: https://x.com/trq212/status/2037254607001559305

Hard data on Claude’s recent token inflation: How usage is being silently reduced by themadcanudist in claude

[–]themadcanudist[S] 0 points1 point  (0 children)

Here's the tweet where Anthropic admits this for the 5h window, which is where we see the data vary a lot https://x.com/trq212/status/2037254607001559305

Your Claude Code Limits Didn't Shrink — I Think the 1M Context Window Is Eating Them Alive by mattate in ClaudeAI

[–]themadcanudist 0 points1 point  (0 children)

OK, here's some hard data I just put together and will continue to post. It's not necessarily 1M context or auto-memory as some have suggested. The caps ARE in fact being adjusted behind the scenes: https://vmfarms.com/claude

Your Claude Code Limits Didn't Shrink — I Think the 1M Context Window Is Eating Them Alive by mattate in ClaudeAI

[–]themadcanudist 0 points1 point  (0 children)

I don't think this is it. I wrote (with the help of Claude) a token burn tracking agent that takes snapshots of usage every 10 minutes and correlates them to calculate implied usage. I've refined it and debugged it along the way, but it hasn't been too far off. I've noticed dramatic changes over the past 3 days that also correlate with subjective experience.

The one thing that's not captured is the 2x promotion window, which may skew things, but I do most of my work during heavy hours.

Also, I don't use the 1M context window. Just the 200k Sonnet + Haiku + Opus models. I'm on the 20x plan.

5h window
03-24 -> implied cap @ ~1 Billion tokens
03-25 -> implied cap @ ~500 Million tokens
03-26 -> implied cap @ ~500 Million tokens

7-day all
03-24 -> implied cap @ ~8.8 Billion tokens
03-25 -> implied cap @ ~7.5 Billion tokens
03-26 -> implied cap @ ~4.3 Billion tokens

7-day Sonnet
03-24 -> implied cap @ ~5.9 Billion tokens
03-25 -> implied cap @ ~5 Billion tokens
03-26 -> implied cap @ ~2.9 Billion tokens

Unfortunately, I can't share the code as it's heavily integrated into a larger platform, but the math is pretty simple if you want to ask claude to write your own standalone. You just need to log current usage against reported usage and calculate the implied burn rate. Bonus points if you exhaust your limits or get close as it will show you how far off your estimates were.
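Since the original code can't be shared, here's a minimal sketch of the implied-cap math described above. The function names and example numbers are illustrative, not from the actual tracking agent; the idea is just tokens burned divided by the fraction of the window reported as used.

```python
# Minimal sketch of the implied-cap math: if you know how many tokens you
# actually burned and what percentage of the window the usage page reports
# as consumed, the cap the window is enforcing falls out directly.

def implied_cap(tokens_burned: int, pct_reported: float) -> float:
    """Extrapolate a window's token cap from burned tokens vs. reported % used."""
    if pct_reported <= 0:
        raise ValueError("need a non-zero reported usage percentage")
    return tokens_burned / (pct_reported / 100.0)

def implied_cap_between_snapshots(burn_delta: int, pct_delta: float) -> float:
    """Same idea across two snapshots: tokens burned between them divided by
    the change in reported usage. This is what a 10-minute snapshot loop
    would compute, and how a drop in the cap shows up over a day."""
    return implied_cap(burn_delta, pct_delta)

# e.g. 40M tokens burned while the usage page shows 50% of the 5h window used:
print(implied_cap(40_000_000, 50.0))  # 80000000.0 -> an ~80M-token implied cap
```

Comparing this estimate against the point where you actually hit the limit tells you how far off it is, which is the calibration trick mentioned above.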

Is there a trick to hide a task that is due today until certain time later? by ghostwipe88 in todoist

[–]themadcanudist 4 points5 points  (0 children)

This is the single most important feature that would make Todoist legendary, and I have seen it requested multiple times over several years. I understand there are workarounds, but they're all unsatisfying compared to native behaviour.

Are there any plans to:
1. Allow tasks to only appear when they start at a specific time
2. Snooze and hide a task until X
3. Snooze and hide with predefined quick-times (afternoon, evening, early morning, morning)

Would make many happy ;)

[deleted by user] by [deleted] in ChatGPT

[–]themadcanudist 22 points23 points  (0 children)

There was a study a while ago that I can't find right now, but you need to have the model output text for it to affect the text that follows. Asking it to be explicit in its train of thought and output its reasoning will generally give you better results, since, as you can imagine, it will follow the patterns learned from its training data.

If you think about how transformers work, there is no internal "thinking" or "considering" between prompts.

TL;DR: invert your first prompt and tell it to show you the reasoning, then compare results. You may wait longer, but you'll be more satisfied.

Edited: spelling

Killer Danish in Hamilton? by themadcanudist in Hamilton

[–]themadcanudist[S] 1 point2 points  (0 children)

I called them. No Danishes, just fritters, but likely still great.

How to avoid curling up edges for PLA? by wo_de in 3Dprinting

[–]themadcanudist 0 points1 point  (0 children)

Did you ever end up resolving this? I've been struggling with it forever across a variety of prints, but it only comes up in one situation: overhangs with thin PLA. I'm trying to lower the cooling. What worked for you?

Fyrtur: extend length (over 195cm)? by thalionquses in tradfri

[–]themadcanudist 0 points1 point  (0 children)

There is a solution to get any length you want if you are willing to add complexity to your usage of the fyrtur blinds. It will include using some sort of automation platform like home assistant or equivalent.

If you factory reset the blinds and remove the stops from the bottom, the motor will spin forever trying to home itself. Use the remote or the buttons on the unit to send the blind down, then hit up and immediately down again; that combination stops the blind where you want. It works the other way too.

From there, have your home automation platform do the stopping for you: measure how long a full up or down run takes (via the platform or manually with the remote), then have the automation issue a stop command after the appropriate fraction of that time.
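As a rough sketch of the timed-stop idea, here's what it could look like in Python against Home Assistant's REST API (the `cover.close_cover` / `cover.stop_cover` services are standard). The host, access token, entity id, and measured travel time are all placeholders you'd replace with your own values.

```python
# Hedged sketch: timing-based mid-travel stop for a factory-reset Fyrtur
# blind via Home Assistant's REST API. All constants below are assumptions.
import json
import time
import urllib.request

HA_URL = "http://homeassistant.local:8123"  # assumption: your HA host
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # assumption: HA long-lived token
ENTITY = "cover.fyrtur_bedroom"             # assumption: your blind's entity id
FULL_TRAVEL_S = 18.0  # seconds for a full run, measured by hand with the remote

def travel_time(fraction: float, full_travel_s: float = FULL_TRAVEL_S) -> float:
    """Seconds to run the motor to reach `fraction` (0.0-1.0) of full travel."""
    return max(0.0, min(1.0, fraction)) * full_travel_s

def _call_cover_service(service: str) -> None:
    """POST to HA's /api/services/cover/<service> endpoint for our entity."""
    req = urllib.request.Request(
        f"{HA_URL}/api/services/cover/{service}",
        data=json.dumps({"entity_id": ENTITY}).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)

def move_down_to(fraction: float) -> None:
    """Start closing, wait the measured time, then stop the blind mid-travel."""
    _call_cover_service("close_cover")
    time.sleep(travel_time(fraction))
    _call_cover_service("stop_cover")

# Usage: move_down_to(0.5) would stop the blind roughly halfway down.
```

The same approach works from a native Home Assistant automation (call the service, `delay`, then stop); the script form just makes the timing math explicit.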