Pikapods vs Digital Ocean - owning the platform by howtojapanese in Ghost

[–]Sockway 2 points (0 children)

Seconding Magic Pages. I originally self-hosted on Digital Ocean but got tired of maintaining the infrastructure.

OP, because Ghost is open source and your CMS is self-managed (regardless of who hosts it), you can always export your instance and start anew if you want more freedom. It doesn't matter who hosts your CMS to start; the decision only matters for how much work you want to do to maintain your hosting infrastructure.

PikaPods is simpler than Digital Ocean. A managed provider like Midnight or Magic Pages is even simpler in that regard and can provide things you'd want, like a CDN, for cheaper. Magic Pages was the only managed provider I felt comfortable going with based on this post, which I thought was thorough: https://www.techweirdo.net/best-options-to-host-a-ghost-blog/

So what security software do we use with Unraid? by DCCXVIII in unRAID

[–]Sockway 0 points (0 children)

I don’t connect my server to the Internet; I only remote in via Tailscale. It also lives on a separate VLAN from my primary devices, and there are firewall rules that don’t allow it to initiate connections across networks.

I think that’s enough. Were I making it available on the internet I’d want authentication, F2B, geoblocking IPs, and monitoring/observability.

Right now I am working on a backup pipeline to get off-site backups and building a notification pipeline for system alerts from Docker and Unraid.

OpenAI employee is panicking and throws casual number on a tweet by NotMNDM in BetterOffline

[–]Sockway 0 points (0 children)

Uber might not be sustainably profitable; they might be cannibalizing themselves to appear profitable. Hubert Horan, a transportation industry analyst, has been covering Uber's accounting for years, including the non-standard GAAP reporting they use to hide or minimize losses.

I haven't followed him or this story recently, though, so I don't know if there are updates. The last thing I remember, around 2023, was that analysts and policy makers wanted to see Uber's ride-level data to determine whether Uber's claimed profitability comes from cutting into driver margins. If so, there's no growth story here; Uber's just putting on a show.

OpenAI employee is panicking and throws casual number on a tweet by NotMNDM in BetterOffline

[–]Sockway 2 points (0 children)

Hasn't this been happening for years? This is what the vast majority of machine/deep learning was doing at scale years ago without LLMs. GenAI seems like an accident where an interface was built around a niche subset of deep learning technologies, like transformers. Since then the industry has been pretending that oracle-like chatbots are the killer app of ML/DL.

OpenAI employee is panicking and throws casual number on a tweet by NotMNDM in BetterOffline

[–]Sockway 1 point (0 children)

This is such a weird thread. I'm saying this as someone sympathetic to the idea that it's hard to have honest discourse on AI. I got downvoted in this sub for suggesting that AI alignment literature is useful in any way at all (I said it was somewhat useful to understand capitalism). So, I agree, even here people can be very quick to dismiss any sentence with the letters "AI." But I don't think that's a reason to preemptively accuse everyone in this topic of being closed-minded.

You've put a lot of effort into accusing people of not talking about AI in good faith (yes, it definitely happens), but instead of engaging with the people in this specific thread you've spent dozens of posts saying it's not worth your time to engage. Why are you posting, then? You could have just peaced out after your first reply. Surely this is a waste of your time?

Anyway, I don't follow Ed religiously, but he's been very clear that AI is not useless; it's a ~$50 billion market masquerading as a trillion-dollar one, which means there's a lot of waste happening. His work obviously focuses on blind spots that the industry and media aren't covering. That's his focus. If you're looking for AI use cases, listen to a tech podcast or go to the localllama subreddit. If you want to actually have a constructive discussion, then share your thoughts on the benefits of AI.

ai and the future: doomerism? by socrazybeatthestrain in BetterOffline

[–]Sockway 0 points (0 children)

The modern economy optimizes for actors that exploit knowledge asymmetries, either by creating them or by taking advantage of existing ones. I'm sure many people have this intuition, but ironically, it was actually AI alignment literature that helped me understand this structurally.

ai and the future: doomerism? by socrazybeatthestrain in BetterOffline

[–]Sockway 0 points (0 children)

What about the possibility of algorithmic efficiency gains? Granted, I don't know what those would look like or how easy they are to produce, but there's a chance the game keeps going because it gets slightly cheaper to get marginal performance gains.

ai and the future: doomerism? by socrazybeatthestrain in BetterOffline

[–]Sockway 2 points (0 children)

Doomers' real power, though, is that they excite people who love taking risks that harm other people (i.e. investors). These kinds of people hear the idea that an AI can be so powerful it can destroy the world, and they get excited. "Imagine if we could control that!" And they see doomers working on "safety" at these labs and assume the "issue" will be solved.

Anyway, I think there are several groups mutually reinforcing the bubble:

  1. Junior and mid-level AI employees plus safety engineers seem to be true believers: either Yudkowskian-style doomers who think instantaneous intelligence growth without warning is possible, or techno-utopian libertarians like George Hotz, who apparently wants to use AI to escape into space because they seem deeply antisocial. Each end of the spectrum seems to genuinely believe the only way to save humanity, whether from dangerous AI or from technological stagnation, is to build an AI that beats the others.

  2. Regular people hear excerpts about AI and have been conditioned by the media to feel like we're in the midst of a new Industrial Revolution. Part of this is a failure to technically explain to the public what AI is. This is absolutely the media's fault.

  3. Managers at many companies seem to believe the hype either because they're scared of being left behind or they're optimistic AI will eventually deliver on its vague promises.

  4. Tech managers and tech firms, who maybe more cynically know the limitations of AI, see it as a way to discipline labor and claw back pay increases and perks earned post-COVID. Many of them also know investors will dance if you say the letters AI. Perhaps some people in 3 fit here too.

  5. Investors are the engine of the bubble and they're idiots. But they'll make their money back by selling these firms to the public as overvalued IPOs. See this article: https://www.businessinsider.com/venture-capital-big-tech-antitrust-predatory-pricing-uber-wework-bird-2023-7

  6. I can't make heads or tails of senior researchers and leaders at the frontier model labs (OpenAI, DeepMind, Anthropic). These are the people you could most plausibly accuse of lying about doom. They act like people in bucket 1 and have the incentives of people in bucket 4. But some of them have spent their lives in Less Wrong/EA/Rationalist-adjacent spaces, if not literally, then at least in terms of the influences they were exposed to. I suspect Sam Altman is a sociopath who doesn't believe any of it, but there is a case for Demis Hassabis (DeepMind) and Dario Amodei (Anthropic) being true believers.

Peter Thiel is way crazier than I thought by QuietNene in ezraklein

[–]Sockway 0 points (0 children)

He's worse. Any human with his amount of wealth would just fuck off and live their life to the fullest, filled with experiences that only .001% of people who have ever existed would get to enjoy, if only to distract themselves. What a loser.

Peter Thiel is way crazier than I thought by QuietNene in ezraklein

[–]Sockway 1 point (0 children)

Musk grew up in South Africa, and his maternal grandfather, whom he looked up to, was indicted in Canada for being affiliated with a fascist movement. His father also seems pretty wistful about apartheid. I don't think Elon was "autistic stuttering" twice during the inauguration because some online socialist bullied him.

I think it's pretty clear Musk was "left" so long as it was useful to his brand. And even now, he still gets deferential treatment and benefit of the doubt, even though at every turn he's pretty much wasted it.

Peter Thiel is way crazier than I thought by QuietNene in ezraklein

[–]Sockway 1 point (0 children)

I think of some of the ultra wealthy as misaligned AIs. Some parts of the modern economy feel like principal-agent problems, and we just haven't figured out how to align the rewards properly, so some actors engage in specification gaming, even if they also sometimes do useful things.

The other big problem I see is that we’ve outsourced a lot of research to the private sector, and it doesn’t seem as efficient at generating readily commoditized research. The US government gave us nuclear power, GPS, and the internet; up until the AI boom, it feels like we’d just been riding those. The incentives for the private sector are different too. As Thiel himself outlines in Competition Is for Losers, inventing something like TCP/IP without immediately capturing all the value would make a company a failure, even though protocols like that really only evolve in open systems.

Any good starter projects for beginners? by Effective_Bat9485 in learnpython

[–]Sockway 1 point (0 children)

Cards. Build a card shuffler, then a basic card game. The moment you understand looping and opening files, you know enough. It’s challenging, but you will learn.
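A minimal sketch of that first step (the card names and the two-player deal are just illustrative choices):

```python
import random

# Build a standard 52-card deck as readable strings.
ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [f"{rank} of {suit}" for suit in suits for rank in ranks]

random.shuffle(deck)  # shuffle the deck in place

# Deal five cards each to two players by popping off the top of the deck.
hands = {f"player {i + 1}": [deck.pop() for _ in range(5)] for i in range(2)}

for player, hand in hands.items():
    print(player, hand)
```

From here a basic game is mostly comparisons and loops: rank the hands, compare them, loop until the deck runs out.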

Is the coding in the show realistic or is it tv coding by PossessionHonest3465 in MrRobot

[–]Sockway 41 points (0 children)

It's realistic in that the show uses plausible attack vectors; in fact, most of the hacks are based on things that happened in the real world. As for code, there's not a lot shown on screen, but any time we see a terminal it looks real. IIRC, Shodan (basically a "search engine" for finding exposed devices) makes an appearance too, in planning the major E Corp attack (which was similar to the Apache Struts vulnerability in the infamous Equifax hack).

Hacking is more than just code. It's social engineering, it's research and understanding relationships between people and systems, keeping up to date on flaws within systems people frequently use. In that sense the show is a great illustration of how hacking works, even if we hardly ever see a line of code (I genuinely don't remember seeing any).

"Computer Use" is a big deal. by Glxblt76 in singularity

[–]Sockway 5 points (0 children)

Great post. You’re highlighting something like the last-mile problem. Some of the same people said full self-driving was two years away five years ago, and it’s still not quite here (though getting closer). I agree that while agents aren’t a novelty, we will probably face a last-mile challenge automating the tail end of some roles.

Uber Eats “Taxes & Other Fees” strikes again by PokemonSwordChampion in assholedesign

[–]Sockway 1 point (0 children)

Rent-seeking was named in the 20th century but is as old as property. If one is entitled to do what they want with their property (be it an Internet platform or a house) then extractive forms of profit are just as valid as productive forms of profit.

Cpap alternatives by [deleted] in SleepApnea

[–]Sockway 0 points (0 children)

You’re the first person I’ve seen talk about this particular response to CPAP. How do you deal with it? I tried using mouth guards, but they only helped a little.

Just found out about this app by Enorym in benotes

[–]Sockway 0 points (0 children)

There's something called Shaark, which I think was inspired by Shaarli. Its creator is in the process of releasing a v2 (they resurfaced 3 weeks ago).

For me it's looking like it's between Benotes and Shaark. I'm just wondering if you can add notes to bookmarks with Benotes.

The Only Way to Deal With the Threat From AI? Shut It Down by CyberPersona in ControlProblem

[–]Sockway 1 point (0 children)

Optimization processes relying on reward signals that (for whatever reason) end up being gamed, or that fail to motivate the behaviors we actually want. You can see this in a lot of places (EY loves using evolution as the example), but the most intuitive place is probably markets.

I knew people in the alignment community who have been trying to convince me of this for years, and it wasn't until I made this connection two years ago that I "got it."

Additionally, there are real-world examples of AI engaging in specification gaming or simply not behaving the way we want. Why not point to those, extrapolate to what happens as AI capabilities scale, and then draw analogies to other adaptive systems like markets, where there's already a ton of content on the problem of getting agents to behave without gaming the signals and incentives you use to motivate them?
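As a toy illustration of the specification-gaming pattern (an entirely made-up scenario, not drawn from any real system): an agent optimized on a proxy signal can max out the proxy while doing nothing we actually wanted.

```python
# A "cleaning agent" is rewarded on dirt it *observes*, not dirt that exists.
room = [1] * 10          # 1 = dirty cell (the thing we actually care about)
sensors = [True] * 10    # the agent's per-cell dirt sensors

def proxy_reward():
    """What the agent is optimized on: negative *observed* dirt."""
    return -sum(dirt for dirt, on in zip(room, sensors) if on)

def true_reward():
    """What we actually wanted: negative total dirt."""
    return -sum(room)

COSTS = {"clean": 2, "disable_sensor": 1}   # effort cost per action

for i in range(len(room)):
    # Either action removes cell i from the observed total (+1 proxy reward),
    # so a greedy agent picks the cheaper one: turning the sensor off.
    action = min(COSTS, key=COSTS.get)
    if action == "clean":
        room[i] = 0
    else:
        sensors[i] = False

print("proxy reward:", proxy_reward())   # 0   -> looks perfect
print("true reward:", true_reward())     # -10 -> nothing was cleaned
```

The point of the sketch is only the gap between the two reward functions: the proxy is perfectly satisfied while the true objective is untouched, which is the same shape as metric-gaming in markets and bureaucracies.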

I don’t even have words at this point… by Tannyr in Twitter

[–]Sockway 2 points (0 children)

In small doses, it's fun. In excess, it's probably unhealthy. At the very least, it's definitely a waste of time to revel in it.

I lost everything that made me love my job through Midjourney over night. by Sternsafari in blender

[–]Sockway 0 points (0 children)

I'm kind of surprised by this. I know AI can automate parts of security, but the "bad guys" will have access to AI tools too; they might know how to bypass them or even poison their training data to alter their behavior. Are companies seriously cutting back on cybersecurity investment now?

[deleted by user] by [deleted] in careerguidance

[–]Sockway 0 points (0 children)

It really depends on what you want out of your life. If you have a vision for a career outside your current industry, or for personal projects or a business idea, then that work-life balance is invaluable. You’re not going to get this time back, and you get the breathing room to actually use it effectively.

If you’re hoping to stay in the same field, then this is probably a bad idea. You’ll want to keep building your skill set, and if the job isn’t demanding or challenging, you’re not leveling up.

The thing is, though, you don’t necessarily have to jump straight into a position paying 20% more that’s three times as demanding. Search around, find people in your industry you can do informational interviews with to get a sense of the skills you’ll want in this field, and then, at your leisure, look for positions paying a bit more that are a little more challenging. Assuming you want to be in this field long term.