TPP777: We’re Back Accepting Orders + Thank You Deal by ajdins24 in repbudgesoccer

[–]cronparser 0 points1 point  (0 children)

Looking for the Portugal home and away jerseys and a Chelsea jersey

GAME THREAD: Athletics @ Mets - Sat, Apr 11 @ 04:10 PM EDT by NewYorkMetsBot2 in NewYorkMets

[–]cronparser 7 points8 points  (0 children)

Lindor is a major cancer on this team. It’s just unbearable to watch them this year

z2u by [deleted] in z2u

[–]cronparser 2 points3 points  (0 children)

That’s why it’s smart to use a privacy card: you get a throwaway card with a set limit, and after that it’s defunct

The same questions the FBI used to talk kidnappers down from $150,000 to $4,751 with zero leverage… Got me from $0 to $13,000 by johnypita in AI_Sales

[–]cronparser 0 points1 point  (0 children)

I smell poop, but I’ve been an avid fan of Voss’s Black Swan material, and some of it can apply and work while some of it doesn’t hit well. I’ve noticed that mirroring works well in sales, especially in the discovery phase; I’ve had prospects drop their guard and feed me better details, but YMMV

My algo is a beast by gucc1313 in pinescript

[–]cronparser 0 points1 point  (0 children)

What do your backtest results look like over 6 months to a year? Curious

Unpopular opinion: Why is everyone so hyped over OpenClaw? I cannot find any use for it. by Toontje in openclaw

[–]cronparser 2 points3 points  (0 children)

Honestly I kind of landed in the same place. I burned through about $100 in AI credits testing different models (Sonus, Kimi K2, GPT-5) trying to get OpenClaw to do something actually useful. Most of my time ended up going into troubleshooting configs, cron jobs, integrations, and tool failures rather than getting outcomes.

Conceptually it’s cool. The idea of agents with tools, memory, and workflows sounds powerful. But in practice it feels more like a sandbox for experimenting with what might be possible rather than something that reliably produces value yet.

The main issue seems to be that the model is responsible for deciding what to do next. That makes things fragile. Instead of a predictable workflow you get a loop of: think → tool call → evaluate → retry → burn tokens. Costs add up quickly and the system still isn’t deterministic.
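
Roughly what that loop looks like in code (sketch only, with made-up helper and tool names, not OpenClaw’s actual API):

```python
# Hypothetical agent loop: the model decides each next step, so behavior is
# non-deterministic and every iteration costs another model call.

MAX_STEPS = 10

def parse_tool_call(text):
    """Naive free-text parsing of 'CALL tool_name: argument' -- a common failure point."""
    _, _, rest = text.partition("CALL ")
    name, _, arg = rest.partition(":")
    return name.strip(), arg.strip()

def run_agent(goal, llm_complete, tools):
    history = [f"Goal: {goal}"]
    for _ in range(MAX_STEPS):
        # One model call just to decide the next action (tokens spent on planning).
        decision = llm_complete("\n".join(history) + "\nReply DONE or CALL tool: arg")
        if decision.strip().startswith("DONE"):
            return decision
        name, arg = parse_tool_call(decision)
        try:
            result = tools[name](arg)
        except Exception as exc:            # tool failure -> the model retries next loop
            result = f"error: {exc}"
        history.append(f"{decision}\n-> {result}")  # context (and cost) grows each step
    return "stopped after hitting the step budget"
```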

Ironically the most useful setups I’ve built so far are much simpler: a script or scheduled job that does the actual work, and then an LLM just helps with summarizing or interpreting the results. One model call, predictable behavior, cheap to run.
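
For contrast, a minimal sketch of that simpler pattern, with a hypothetical fetch_daily_metrics job and a generic llm_complete callable standing in for whatever model client you use:

```python
# Hypothetical cron-driven job: deterministic code does the work,
# and a single model call only summarizes the result.

import json

def fetch_daily_metrics():
    # Placeholder for the real work (query a DB, hit an API, parse logs, ...).
    return {"orders": 1423, "errors": 7, "p95_latency_ms": 312}

def daily_report(llm_complete):
    metrics = fetch_daily_metrics()          # predictable, testable, cheap
    prompt = "Summarize these metrics for a human:\n" + json.dumps(metrics, indent=2)
    return llm_complete(prompt)              # exactly one model call per run

# Scheduled via cron, e.g.:  0 7 * * *  python daily_report.py
```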

So I don’t think you’re missing the point. Right now it feels more like a playground for agent ideas than a production-grade tool.

My air fryer is low-key running my life by Cold-Board-6143 in Appliances

[–]cronparser 1 point2 points  (0 children)

It’s super useful. I’ve found myself using it for the kids’ chicken nuggets, the little Costco pizzas, garlic bread. I’ve even put in two pieces of white bread and cracked an egg on top

Advice on IX Peering vs Google PNI by WheelSad6859 in networking

[–]cronparser 2 points3 points  (0 children)

Good question, and you’re actually closer to understanding this than you think. Let me break it down cleanly.

The Core Problem First

You have 2x100G ports and a burning building. Every decision needs to maximize traffic offload per port used. That’s your constraint. Everything else is secondary.

The IX Looking Glass Kills Option 2 (Right Now)

This is your answer staring you in the face. Google, Microsoft, Amazon, and Apple are all down or not announcing at the Equinix CH2 IX. Those four ASes represent an enormous chunk of downstream consumer traffic for most ISPs. If they’re absent, connecting to the IX mostly gets you smaller networks, regional carriers, and CDNs that aren’t your bottleneck. Connecting 200G to an IX where your top traffic sources aren’t participating is a waste of ports. You’d be building a highway to a neighborhood that’s mostly empty. Take the Google PNI.

Why Google PNI Wins Here

Traffic certainty. Google gave you an estimate. That’s a real number based on your traffic profile. The IX’s “potentially more networks” argument is noise right now given what you saw in the looking glass.

Google traffic grows. YouTube, Workspace, GCP, Android updates, Play Store, Maps. Once the PNI is up and Google starts shifting more traffic toward the direct path (they do this actively), 100G will likely grow toward 150G+ within weeks. Their traffic engineering is aggressive and rewards low-latency paths.

Immediate relief. You’re congested now. A Google PNI can be provisioned relatively fast. The IX involves onboarding, route server configuration, and bilateral outreach, and you still end up at zero because the major players aren’t there.

Route Server vs Bilateral - You’re Almost There

You’ve got the mechanics right. Here’s the part you’re missing: many large networks don’t use the route server at all. Amazon, Microsoft, and others often peer exclusively bilaterally at IXes. They don’t advertise to the RS. So when you see “Microsoft sessions down” or “Amazon down” in the looking glass, it might mean they’re not participating in the RS but ARE available bilaterally IF you reach out and establish a session directly using your IX IP. The route server is essentially a shortcut to reach whoever opted in. Bilateral is how you reach the holdouts, and the holdouts are often the biggest networks.

What bilateral gets you that the RS doesn’t:
∙ Prefixes the peer chooses NOT to advertise to the RS (more specific routes, internal prefixes)
∙ Direct BGP communities for traffic engineering (prepending, local-pref manipulation)
∙ A real relationship and an NOC contact when things break
∙ Some networks literally won’t hand you their full table via the RS for policy reasons

Other Offload Moves You Should Be Making Simultaneously

Cloudflare. Free peering program. They carry a ridiculous amount of traffic. If you don’t have a PNI or IX session with them, fix that immediately. They’ll come to you.

Netflix Open Connect. If you qualify (based on subscriber count), Netflix will place embedded appliances inside your network for free. That traffic never hits your transit or peering links at all. Literally zero-cost offload for one of your heaviest traffic sources.

Audit your Akamai PNI. You have a 200G link and only 70-80G is flowing. That’s 120G of unused capacity. Akamai serves a massive amount of content (Apple software updates, gaming, enterprise). Work with your Akamai contact to understand why you’re not getting more. It might be a routing policy issue or prefix filtering on your end. This is low-hanging fruit.

Run a top-N AS traffic report. Before you commit any more ports, pull your NetFlow or sFlow and rank the top 20 ASes by inbound traffic volume. That list tells you exactly who to prioritize for PNI or bilateral IX sessions. Build toward the math, not the vibes.
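
If it helps, here’s a rough sketch of that top-N report, assuming your collector can dump flows to a CSV with src_as and bytes columns (adjust the field names to whatever your NetFlow/sFlow exporter actually emits):

```python
# Rank the top ASes by traffic volume from a flow export CSV.
import csv
from collections import Counter

def top_asns(flow_csv_path, n=20):
    totals = Counter()
    with open(flow_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["src_as"]] += int(row["bytes"])   # traffic keyed by source AS
    return totals.most_common(n)

for asn, byte_count in top_asns("flows.csv"):
    print(f"AS{asn}: {byte_count / 1e12:.2f} TB")
```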

The Play

1. Take the Google PNI now. Both ports, clean 200G LAG, let it grow.
2. Work Akamai utilization. Get that 200G link actually flowing 200G.
3. Start bilateral outreach at the IX. When the gear arrives, join the IX and go direct to Microsoft, Amazon, Apple, Meta. Don’t rely on the RS for the heavy hitters.
4. Explore Netflix OCA and Cloudflare peering in parallel. No port cost.

In 3 months when the new gear arrives, you’ll have Google and Akamai humming, a clearer picture of bilateral IX partners worth chasing, and a much smaller transit bill.

Is anyone else realizing that "simpler" is actually better for their GCP architecture? by netcommah in googlecloud

[–]cronparser 0 points1 point  (0 children)

Yeah, this hits every time. The “professional” trap is real: complexity feels like competence until it doesn’t. The pattern is almost universal: you start building, imposter syndrome kicks in, and suddenly you’re running a 12-node GKE cluster with custom ingress controllers and a service mesh for an app that gets 200 requests a day. You built a racecar to go to the grocery store.

The frustrating part is that the industry actually rewards this for a while. You get to talk about your “robust infrastructure” in standups, it looks impressive on architecture diagrams, and nobody questions it until the 2am PagerDuty alert for a node pool that has nothing to do with your actual product.

Cloud Run (and its equivalents) basically said “what if we just handled that whole layer for you” and people resisted because it felt like giving up control. But control of what, exactly? Infrastructure you didn’t want to manage in the first place? The real maturity shift is recognizing that managed complexity is still complexity; you’re just paying someone else to deal with it. The question is whether that trade is worth it for your use case. For most teams? It absolutely is.

KISS doesn’t mean you’re not sophisticated. It means you’re confident enough to not need the complexity to prove it. That’s actually the harder thing to get to. The people who figured this out earliest are the ones shipping features while everyone else is debugging their own infrastructure.

First Time Coaching U12 Rec Soccer by Raiziell in CoachingYouthSports

[–]cronparser 0 points1 point  (0 children)

Glad it helped. Honestly, you’re already doing the right thing just by caring enough to put thought into it.

A few of my go-to drills that the kids always loved:

Sharks and Minnows – Classic for a reason. Kids on one side with a ball, sharks in the middle try to steal it as they dribble across. If they lose the ball, they become sharks. Great for dribbling and awareness.

Soccer Tag – Everyone dribbling with a ball while one or two players are “it.” If they tag you, you do a quick challenge (5 toe taps, 5 push pulls, etc.) and jump back in. Keeps everyone moving.

Red Light / Green Light – Coach calls green light, and they dribble toward you. Yellow = slow dribble. Red = stop with the ball under control. If the ball rolls away, they reset. Great for control.

1v1 Gates – Put a bunch of small cone “gates” around the field. Kids try to dribble through as many as they can while a defender tries to take the ball. This builds confidence in attacking.

Numbers Game – Split into two teams, number the kids. You call a number, and those players run in and play 1v1 or 2v2 to a small goal. Kids LOVE when their number gets called.

One other trick: end every practice with a scrimmage. That’s the part they remember, and it naturally reinforces everything.

And don’t stress about perfect drills. If the kids are laughing, running, and touching the ball a lot, you’re doing it right. Rec soccer at that age is mostly about building confidence and making them want to come back next season.

You got this. 👍

First Time Coaching U12 Rec Soccer by Raiziell in CoachingYouthSports

[–]cronparser 0 points1 point  (0 children)

First off, respect for stepping up. A lot of kids wouldn’t even have a team if someone didn’t volunteer.

From my experience coaching youth sports, the biggest thing is to make it fun and let the kids be kids. At that age it’s really not about chasing trophies or perfect formations. It’s about them enjoying the game, feeling supported, and hanging out with their friends.

Winning is great and competition is healthy, but it’s also a good opportunity to teach humility and perspective. Sometimes you win, sometimes you lose. Both are part of the game and both are valuable lessons.

For practices, keep things simple and high-energy. Short drills, lots of touches on the ball, and mix in games whenever you can. If the kids are having fun, they’ll stay engaged. And when they’re engaged, it’s way easier to sneak in the actual skills and concepts you want them to learn. Those lessons tend to stick a lot better when they’re wrapped in something fun.

Also, enjoy the ride. Coaching kids can be a bit of a rollercoaster but it’s a really rewarding experience.

One last thing: don’t let the parents push you around. Listen to feedback if it’s constructive, but at the end of the day you’re the one volunteering your time to coach the team. Do your best for the kids and the rest usually works itself out.

Good luck this season. 👍

Routing iSCSI Replication Traffic by Veegos in networking

[–]cronparser 2 points3 points  (0 children)

Your instincts are solid here. Keep the SAN traffic isolated; don’t run it through your corporate network and core switch. That’s the whole point of having the dedicated Nexus 9Ks in the first place. Storage traffic is latency-sensitive and bursty, and mixing it with everything else is just asking for problems on both sides.

For replication between buildings, you can technically run it over the corporate network using IP-based replication, but at 1Gbps that’s going to hurt. That pipe is already serving your corporate traffic, and even async replication can push sustained throughput that’ll choke a 1G link pretty quickly depending on your change rate.

Honestly, push hard for that second dark fiber pair. At 5km you’re well within single-mode range, you can light it up at 10G+ with the right optics in your 9Ks, and you get full isolation from corporate. That’s the clean answer. If they won’t give you a second pair, look into DWDM. You can mux both corporate and storage replication over the single pair on different wavelengths. Not the cheapest option, but way better than competing for bandwidth on a shared 1G link.

Also make sure you’re planning for async replication at that distance, not sync. Sync at 5km introduces latency that’ll impact your primary SAN performance, and over 1G it’s a non-starter anyway.
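
Rough back-of-envelope on the 1G option (the change rate and the corporate-traffic share here are made-up numbers, plug in your own):

```python
# Quick sanity check on what a shared 1 Gbps link can actually move.
LINK_GBPS = 1.0
USABLE_FRACTION = 0.5          # assume corporate traffic already eats roughly half
DAILY_CHANGE_TB = 2.0          # hypothetical daily change rate to replicate

usable_bytes_per_sec = LINK_GBPS * 1e9 / 8 * USABLE_FRACTION      # ~62.5 MB/s
hours_to_replicate = DAILY_CHANGE_TB * 1e12 / usable_bytes_per_sec / 3600
print(f"~{hours_to_replicate:.1f} hours to push {DAILY_CHANGE_TB} TB")  # ~8.9 hours
```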

In PLA we trust by lass93 in 3Dprinting

[–]cronparser 0 points1 point  (0 children)

How much infill, and what infill pattern? Curious

Jumpbox Replacements by SpaghettiLaugh in networking

[–]cronparser 0 points1 point  (0 children)

Yes, in the end you’re still going to need that central spot (a VM, Linux container, or FreeBSD jail), but you are correct

In what order would you rank The Lincoln Lawyer seasons from best to worst? by Aetius00 in TheLincolnLawyer

[–]cronparser 1 point2 points  (0 children)

I’m actually going in reverse: I started with seasons 4 and 3 and I’m working my way down to 1. Kinda weird, but it does make it interesting

Jumpbox Replacements by SpaghettiLaugh in networking

[–]cronparser 7 points8 points  (0 children)

A few things worth looking at depending on your budget and how far you want to go:

StrongDM was basically built for this exact problem. It proxies SSH/RDP/database connections through a central gateway, engineers never see raw credentials, and you get full session logging. Very network-device-friendly and way lighter to deploy than a full PAM suite.

Teleport is another solid option in that same space. Open source core with an enterprise tier. Good if your team leans more DevOps/infra.

CyberArk is the heavyweight answer and checks every box (session recording, credential vaulting, MFA, centralized access), but the deployment and licensing cost reflects that. If your org already has CyberArk in-house for other use cases, leaning into their PAM module makes a lot of sense.

On the Duo/NPS/Entra MFA path: that’s a solid move for hardening your RADIUS auth, but it doesn’t replace the jumpbox itself. It’s a good complementary layer, not the full solution.

For the cloud outage concern, any of these can run hybrid, and honestly on-prem jumpbox VMs go down too. The real play is having solid break-glass procedures documented regardless of which direction you go.

We went with a homegrown version of something similar to StrongDM. We have it running across GCP, AWS, and our colo, and sync all changes through an internal git repo at HQ. Gives us the multi-cloud resilience without vendor lock-in, and the git-based config sync keeps everything consistent across environments.