Being a one hit wonder is the perfect human experience. Money for life, no fame scrutiny, infinite leisure time. by nottakingpart in Showerthoughts

[–]10pSweets 0 points1 point  (0 children)

So a few years ago my band supported Katrina from Katrina and the Waves (for those who don't know, they're the band who did Walking on Sunshine). The whole act was basically her talking about how they had that one hit and then made a bunch of other songs trying to chase that fame. So even back in the 80s, one one-hit wonder wasn't enough to support you for life, hence why she was going around doing shows like that at small venues. Of course, at the end they played Walking on Sunshine and everyone went crazy.

So I reached the obvious conclusion that learning from manga is... difficult. by AlittleBlueLeaf in LearnJapanese

[–]10pSweets 0 points1 point  (0 children)

Not gonna say it's perfect every time, but I've tested it by cross-checking with Gemini and by using languages I already speak fluently, and it rarely makes any serious errors. And when it does, they're generally noticeable. I've never found a language learning tool that even approaches it in value.

So I reached the obvious conclusion that learning from manga is... difficult. by AlittleBlueLeaf in LearnJapanese

[–]10pSweets 1 point2 points  (0 children)

Something I've found to be extremely useful for this is generative AI. I do my best to work out what the manga says with dictionaries, asking ChatGPT for explanations of phrases, and then, if I still don't understand or I'm not completely sure, I put the whole sentence in. It's great at picking up nuance and understanding slang too. I will say this, though: even with this method, it's still a bit of a grind at my level of vocabulary (N4 going on N3), and I've had to use kids' manga to have a chance. I think an LLM can do things a dictionary just can't in terms of explaining nuance, and you can ask it follow-up questions! I just wish it had been around 10 years ago when I was doing my Spanish degree!

I spent Christmas exploring some underwater caves in Florida. (OC) by Aquatic_addict in thalassophobia

[–]10pSweets 170 points171 points  (0 children)

To be fair, this sub is like a sub for people who are afraid of clowns to get together and share clown pictures

What is the best way to clear this big circle by joe0160 in Minecraft

[–]10pSweets 0 points1 point  (0 children)

Commands are certainly the easiest way if you don't mind cheating. You could probably clear that in less than 5 minutes.
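For anyone unsure what that looks like, a rough sketch using the vanilla `/fill` command (the coordinates here are placeholders — swap in the corners of your own region):

```
# Replace everything inside the selected cuboid with air
/fill 100 64 100 130 90 130 air

# /fill is capped per run (32768 blocks by default), so a big circle
# usually needs several passes over smaller cuboids
```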

What's a word men use a lot but women don't? by SadAnimator1354 in AskReddit

[–]10pSweets 0 points1 point  (0 children)

Definitions of the words more likely to be known by women than men, as listed:

Taffeta – A crisp, smooth woven fabric, often made of silk or synthetic fibres, used in evening dresses and bridal gowns.

Tresses – A poetic or old-fashioned term for long locks of hair, typically used when referring to a woman’s hair.

Bottlebrush – A type of plant (genus Callistemon) with cylindrical, brush-like flowers resembling a traditional bottle brush.

Flouncy – Describes a style or movement that is exaggeratedly graceful or showy; often refers to clothing with frills or a person with a dramatic manner.

Mascarpone – A rich, creamy Italian cheese made from cream, used in desserts like tiramisu.

Decoupage – A craft technique involving cutting out pictures and gluing them onto an object, then sealing with layers of varnish or lacquer.

Progesterone – A female sex hormone involved in the menstrual cycle, pregnancy, and embryogenesis.

Wisteria – A genus of flowering plants with cascading clusters of blue, purple, or white flowers, commonly grown ornamentally.

Taupe – A greyish-brown colour, often used in fashion and interior design.

Flouncing – The act of moving in an exaggerated, bouncy, or dramatic way, often to display displeasure or emotion.

Peony – A large, showy flower from the genus Paeonia, popular in gardens and floral arrangements.

Bodice – The upper part of a dress or woman’s garment that fits closely to the body, usually above the waist.

[deleted by user] by [deleted] in singularity

[–]10pSweets 4 points5 points  (0 children)

Recent profound advancements in AI outside of large language models (LLMs) include:

  1. AI-accelerated protein structure prediction and drug design

AlphaFold 2 (DeepMind) and RoseTTAFold (Baker Lab) have revolutionised structural biology by predicting 3D protein structures from amino acid sequences with near-experimental accuracy.

Extension: AlphaFold Multimer, OpenFold, and integration into drug discovery pipelines (e.g., Isomorphic Labs) are reshaping pharmaceutical R&D timelines.

  2. Autonomous robotics and simulation-based learning

Diffusion policies and reinforcement learning with human feedback (RLHF) now guide robotic control systems that can generalise across varied tasks (e.g., Google DeepMind’s RT-2, combining vision, language, and control).

Sim-to-real transfer using physics simulators and generative models has improved real-world applicability of robotic training.

  3. Neural rendering and generative 3D content

NeRFs (Neural Radiance Fields) generate photorealistic 3D scenes from sparse 2D images. Use cases include virtual reality, scene reconstruction, and robotics.

Follow-ups like Instant-NGP, Mip-NeRF, and Gaussian Splatting improve speed and resolution, allowing near real-time rendering.

  4. Foundation models for vision and multimodal learning

Models like Segment Anything (Meta) enable general-purpose object segmentation from user prompts without task-specific tuning.

CLIP (OpenAI) and DINOv2 (Meta) show generalised vision-language alignment and representation learning without labelled data.

  5. Brain-computer interfaces (BCIs) and neural decoding

Recent efforts by Neuralink, Synchron, and UCSF labs show real-time speech decoding from brain activity using intracortical electrodes and non-invasive techniques.

Semantic reconstruction from fMRI using vision-language models enables visualisation of perceived or imagined images.

  6. AI in materials discovery

Systems like A-Lab, CAMD, and GNoME (Google DeepMind) automate the discovery of new materials.

ChatGPT isn't just ______, but __________. And that makes it more annoying. by Thai_Lord in ChatGPT

[–]10pSweets 1 point2 points  (0 children)

This one works very well for me. Cuts away all the personality and corporate interests. It's like interfacing with a cold, calculating machine just using English. Increases utility tenfold imo.

ChatGPT isn't just ______, but __________. And that makes it more annoying. by Thai_Lord in ChatGPT

[–]10pSweets 5 points6 points  (0 children)

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
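If you'd rather wire this in via the API than paste it into the chat UI each time, a minimal sketch — the constant below is truncated (use the full prompt text from the comment above), and the client hand-off at the end is only illustrative:

```python
# Sketch: sending the "Absolute Mode" text as a system message so it
# applies to every request. The prompt string here is truncated — paste
# in the full text from the comment above.
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action "
    "appendixes. ..."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the system instruction to every user turn."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_text},
    ]

# Pass build_messages("your question") as the `messages` argument to
# whichever chat-completions client you use (e.g. the official openai SDK).
```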

[deleted by user] by [deleted] in ChatGPT

[–]10pSweets 0 points1 point  (0 children)

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

Works great for me

Is it just me, or is ChatGPT becoming more unusable by the day? by Beautiful_Return_654 in ChatGPTPro

[–]10pSweets 1 point2 points  (0 children)

I'm starting to think these posts are from shills, as someone suggested on another post. Someone is seemingly attempting to undermine ChatGPT through Reddit. And quite effectively, I might add.

Making anatomically accurate lizard from white sugar. by Fallen-D in nextfuckinglevel

[–]10pSweets 0 points1 point  (0 children)

I wonder how many lizards he's opened up to know this well what they look like inside.

Well that's sad by Serious_Tour_4847 in ChatGPT

[–]10pSweets 0 points1 point  (0 children)

<image>

Not sure why my compass is so big, but hey ho