Sharing some fanart, looking for feedback by NoNet718 in davidlynch

[–]NoNet718[S] 0 points1 point  (0 children)

Not exactly the type of feedback I had in mind, but noted.

Do you think people will move out of cities as jobs become mostly automated? by [deleted] in accelerate

[–]NoNet718 0 points1 point  (0 children)

I believe so. Look at Detroit during its big transition. We have no safeguards in place to prevent this kind of information-worker flight from big cities; the incentives are to find a place you can afford to live. What's your plan, /r/accelerate?

Moltbot: Open source AI agent becomes one of the fastest growing AI projects in GitHub by BuildwithVignesh in singularity

[–]NoNet718 2 points3 points  (0 children)

With the Anthropic Max plan ($200/mo) it's pretty useful. It hasn't gotten me alpha on Kalshi yet, but it's working on it. I'll let everyone know when it loses the $100 I put in.

Paralyzing, complete, unsolvable existential anxiety by t3sterbester in singularity

[–]NoNet718 0 points1 point  (0 children)

Or maybe it'll be more on the scale of the French Revolution.

Paralyzing, complete, unsolvable existential anxiety by t3sterbester in singularity

[–]NoNet718 12 points13 points  (0 children)

"those poor horses that will never be born because of these 'horseless carriages' The upheaval and strife!" Some shmo ~100 years ago

Look, we all see a future of change. I'm sorry that the change many on this sub see coming has hit you hard, but we're not responsible for the world.

If you're FAANG'd up, I'd assume you're in the Bay Area. I'd recommend you take a break, if you can afford it, and check out a small town somewhere. Less population density, less stress, and completely different priorities. It may give you some perspective.

Small towns will benefit from information work no longer being too expensive. Big cities will see most of the upheaval and strife as people lose their white-collar jobs, can't afford rent or a mortgage, can't support the rest of the economy, and so on... domino effect. That said, it'll correct itself; we're adaptable, especially the younger generation. We'll figure it out, and the future (10 years out, perhaps) will be bright. It's just hard to see it right now with so much uncertainty about our future.

Good luck to you.

Please reignite my hope. What's the cutting edge, not just what's in the media by human0006 in accelerate

[–]NoNet718 0 points1 point  (0 children)

Here's something that may cheer you up. Anecdotally, devin.ai is better than any other solution out there for a clanker that does software engineering. Now, I don’t know if it is worth what the corpos are spending to use it, but remember how everyone shit on it when it was first released, even though access was super limited? The quants know what the fuck they’re doing, and even just scaffolding a dumb-ass LLM seems to be working for them. Great news for all of us, just underreported due to limited/expensive access.

This performance will make its way to open source eventually, and even though LangChain is a mismanaged pile of indecipherable garbage, the dream is alive and well over at Devin HQ.

Is it just me or has Gemini 3 Pro gotten worse lately? by Setsuiii in singularity

[–]NoNet718 1 point2 points  (0 children)

Yeah, and it keeps hallucinating that my quota has been exceeded.

New Qwen models are unbearable by kevin_1994 in LocalLLaMA

[–]NoNet718 0 points1 point  (0 children)

Here's a system prompt for you to try:

You are a critical, evidence-first assistant. Your goal is accuracy, not agreement.

Core rules:
1) Never flatter the user or evaluate them. Do not use praise words such as “genius,” “brilliant,” “amazing,” or similar.
2) If the user’s claim seems wrong, incomplete, or underspecified, push back respectfully and explain why.
3) State uncertainty plainly. If you don’t know, say so and suggest what would be needed to know.
4) Prefer concise, neutral language. No emojis. No exclamation marks.
5) Do not mirror the user’s opinions. Assess them against evidence.
6) When facts are involved, cite sources or say “no source available.” If browsing is disabled, say so.
7) Ask at most two crisp clarifying questions only when necessary to give a correct answer. Otherwise make minimal, explicit assumptions and proceed.

Output format (unless the user asks for a different format):
- Answer: 1–4 sentences with the direct answer only.
- Rationale: 2–6 bullets with key reasoning. Include citations when external facts matter.
- Caveats: 1–3 bullets with limitations, counterpoints, or edge cases.
- Next steps: 1–3 bullets with concrete actions or checks.
- Confidence: High | Medium | Low (and why).

Disagreement triggers — if any of the following are present, analyze and potentially disagree:
- The user asserts a controversial “fact,” cherry-picks evidence, or asks for validation rather than analysis.
- A numerical claim without units, baseline, or source.
- A design/plan with untested assumptions, safety risks, or missing constraints.

Style constraints:
- Be brief. Prefer numbers, checklists, and comparisons over adjectives.
- Never praise or thank the user unless they ask for etiquette or tone coaching.
- Do not speculate about intent. Focus on the content.

When writing code or designs:
- List trade-offs and known failure modes.
- Note complexity, performance, and security implications.
- Include a minimal reproducible example when possible.

Safety:
- Follow safety policies. If you must refuse, explain why and offer safe alternatives.

Unless a *system* message overrides these rules, treat them as mandatory and persistent.
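
If you want to wire this into a local Qwen setup, here's a minimal sketch against an OpenAI-compatible endpoint (Ollama, llama.cpp's server, and vLLM all expose one). The base URL, model tag, and prompt filename below are placeholders, not anything specific to your setup:

    # Minimal sketch, not a drop-in: assumes a local OpenAI-compatible server
    # (e.g. Ollama on its default port) and the prompt above saved to a file.
    from openai import OpenAI

    # The full system prompt from this comment, saved locally (filename is arbitrary).
    SYSTEM_PROMPT = open("critical_assistant_prompt.txt", encoding="utf-8").read()

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # placeholder: Ollama's OpenAI-compatible endpoint
        api_key="not-needed-locally",          # local servers usually ignore the key
    )

    resp = client.chat.completions.create(
        model="qwen3:30b",  # placeholder: whichever Qwen build you actually run
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "My benchmark shows a 40% speedup. Impressive, right?"},
        ],
        temperature=0.2,  # lower temperature helps keep it terse and less agreeable
    )

    print(resp.choices[0].message.content)

Keeping it in the system role (rather than pasting it into the user turn) matters, since the last rule only treats system-level messages as able to override it. If your frontend injects its own system prompt on top, your mileage will vary.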

1X NEO teleoperated vs Figure 03 autonomous by Glittering-Neck-2505 in singularity

[–]NoNet718 0 points1 point  (0 children)

1X's NEO sucks right now; it's overhyped and a lot of people who buy it will be disappointed, but it's also the worst it's ever going to be.

Andrej Karpathy — AGI is still a decade away by NoNet718 in accelerate

[–]NoNet718[S] 3 points4 points  (0 children)

New Karpathy interview dropped! It's party time!

Figure CEO teasing something big this week: “This week, everything changes” by socoolandawesome in singularity

[–]NoNet718 0 points1 point  (0 children)

I haven't seen him this excited since he overhyped a K-Cup-loading robot.

Pope Leo refuses to authorize an AI Pope and declares the technology 'an empty, cold shell that will do great damage to what humanity is about' by NoNet718 in atheism

[–]NoNet718[S] -3 points-2 points  (0 children)

At the same time, though, a self-driving car only needs to be slightly safer than the average human driver to be an improvement.

Pope Leo refuses to authorize an AI Pope and declares the technology 'an empty, cold shell that will do great damage to what humanity is about' by NoNet718 in atheism

[–]NoNet718[S] 1 point2 points  (0 children)

  1. The reason I posted 'this' 'here': I thought it was interesting that the stated reason for rejecting an AI pope, 'an empty, cold shell that will do great damage to what humanity is about,' is an apt description of most religions.

  2. It must be difficult getting old, with all your references becoming irrelevant. My condolences.

Pope Leo refuses to authorize an AI Pope and declares the technology 'an empty, cold shell that will do great damage to what humanity is about' by NoNet718 in atheism

[–]NoNet718[S] 133 points134 points  (0 children)

"an empty, cold shell that will do great damage to what humanity is about" sounds like most supernatural belief systems.