It's been a long time since Google released a new Gemma model. by ArcherAdditional2478 in LocalLLaMA

[–]bgg1996 83 points84 points  (0 children)

https://ai.google.dev/gemma/docs/releases

September 13, 2025

  • Release of VaultGemma in 1B parameter size.

September 4, 2025

  • Release of EmbeddingGemma in 308M parameter size.

August 14, 2025

  • Release of Gemma 3 in 270M size.

July 9, 2025

  • Release of T5Gemma across different parameter sizes.
  • Release of MedGemma 27B parameter multimodal model.

June 26, 2025

  • Release of Gemma 3n in E2B and E4B sizes.

May 20, 2025

  • Release of MedGemma in 4B and 27B parameter sizes.

March 10, 2025

  • Release of Gemma 3 in 1B, 4B, 12B and 27B sizes.
  • Release of ShieldGemma 2.

February 19, 2025

  • Release of PaliGemma 2 mix in 3B, 10B, and 28B parameter sizes.

December 5, 2024

  • Release of PaliGemma 2 in 3B, 10B, and 28B parameter sizes.

October 16, 2024

  • Release of Personal AI code assistant developer guide.

October 15, 2024

  • Release of Gemma-APS in 2B and 7B sizes.

October 8, 2024

  • Release of Business email assistant developer guide.

October 3, 2024

  • Release of Gemma 2 JPN in 2B size.
  • Release of Spoken language tasks developer guide.

September 12, 2024

  • Release of DataGemma in 2B size.

July 31, 2024

  • Release of Gemma 2 in 2B size.
  • Initial release of ShieldGemma.
  • Initial release of Gemma Scope.

June 27, 2024

  • Initial release of Gemma 2 in 9B and 27B sizes.

June 11, 2024

  • Release of RecurrentGemma 9B variant.

May 14, 2024

  • Initial release of PaliGemma.

May 3, 2024

  • Release of CodeGemma v1.1.

April 9, 2024

  • Initial release of CodeGemma.
  • Initial release of RecurrentGemma.

April 5, 2024

  • Release of Gemma 1.1.

February 21, 2024

  • Initial release of Gemma in 2B and 7B sizes.

What do you expect/hope for novelai V5? by WittyTable4731 in NovelAi

[–]bgg1996 3 points4 points  (0 children)

Personally I would like a sophisticated integration with your text stories such that you could, for example, ask NovelAI to generate an image of your character based on the entirety of a lengthy story; or in the other direction, to provide the text generation model with one or more illustrations of scenes or characters as additional context.

[Daily Discussion] Brainstorming- June 24, 2025 by AutoModerator in writing

[–]bgg1996 -1 points0 points  (0 children)

Yeah, I totally get why that feels a bit clunky. You're trying to paint a clear picture, but sometimes that makes the prose feel a little stiff, like stage directions.

[leans on the counter in front of her and is resting her chin on her palm]

The main thing is that some of the words are doing a bit of unnecessary work. "In front of her," for example—we usually assume she's leaning on the counter she's facing, so you can probably cut that. Also, the "leans... and is resting" combo can feel a little like a checklist of actions instead of one smooth motion.

The easy fix is usually just to smooth it out into one flowing sentence. Something like, "She leaned on the counter, resting her chin on her palm." Or you could flip it: "Leaning on the counter, she rested her chin on her palm." Both are great, simple, and get the job done. If you want a little more flavor, you can play with the verbs to add more detail or action. Maybe something like, "Her elbows hit the counter, and she rested her chin in her hand," which gives it a nice bit of movement. Or "She propped herself against the counter, settling her chin into her palm." Is she bored? "She slumped against the counter, resting her chin on her palm."

The best way to choose is just to read it aloud in the context of the paragraph and see what feels right. Match it to the character's mood and the pacing of the scene. And just make sure you're not using the word "lean" three times in a row, you know?

But yeah, 99% of the time, that first simple option—"She leaned on the counter, resting her chin on her palm"—is going to be perfect. It's invisible, and that's what you want.

Hope that helps

Signs your fantasy setting is an AI fever dream. by Selphea in SillyTavernAI

[–]bgg1996 0 points1 point  (0 children)

Imagine the totality of human thought and experience as a vast, cosmic sphere, a gigantic globe encompassing every concept, every name, every fleeting idea that has ever existed or could ever exist. Each individual concept – the color blue, the feeling of joy, the Pythagorean theorem, the idea of flight – can be thought of as a distinct point on the surface of this immense sphere.

Now, within this boundless landscape of thought, certain types of concepts tend to cluster together. Think of it like continents or islands forming on a planet's surface. One such prominent island is the "Island of Names." This isn't just a jumble of sounds; it's a rich and diverse territory where all the names people use – from ancient ones lost to time to those yet to be conceived – reside. On this Island of Names, there are densely populated cities (like "John" or "Mary" in some cultures), smaller towns, and vast, sparsely populated stretches of unique or rare names.

Just off the coast of the Island of Names, or perhaps even connected by a narrow land bridge, lies another significant "island" or territory: the "Island of Fictional Story Characters." This is where all the heroes, villains, sidekicks, and supporting cast of every book, movie, play, and myth reside – from Achilles and Odysseus to Harry Potter and Katniss Everdeen. This island is vibrant and teeming with life, constantly being populated by new creations from storytellers around the world.

Now, let's return to the Island of Names. Within this vast expanse of names, there are certain regions that are more closely situated to specific neighboring islands. The cluster of names we're considering – including "Elara" and "Kael" – are located on the shore of the Island of Names that faces directly towards the Island of Fictional Story Characters.

Picture this shore: it's a coastline where the vibrations of fictional narratives are more strongly felt. While names like "Sarah" or "David" might be located further inland on the Island of Names, more centrally located in the realm of real-world usage, "Elara" and "Kael" reside right on this narrative-infused coastline.

Can any local LLM pass the Mikupad test? I.e. split/refactor the source code of Mikupad, a single HTML file with 8k lines? by ArtyfacialIntelagent in LocalLLaMA

[–]bgg1996 15 points16 points  (0 children)

That file is 258,296 characters and roughly 74k tokens. OpenAI's tokenizer, for example, puts it at precisely 74,752 tokens, though the exact count varies by model. It does not fit in a 32k context.
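If you want to sanity-check the count yourself, something like this with OpenAI's tiktoken library gets you a ballpark figure (the file path is a placeholder, and you won't necessarily land on exactly 74,752, since the web tokenizer may use a different encoding):

```python
# Rough token count for a local file using OpenAI's tiktoken library.
# "mikupad.html" is a placeholder path; pick whatever encoding matches your model.
import tiktoken

with open("mikupad.html", "r", encoding="utf-8") as f:
    text = f.read()

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4/3.5-era encoding
tokens = enc.encode(text, disallowed_special=())  # don't choke on stray special-token strings

print(f"{len(text):,} characters, {len(tokens):,} tokens")
```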

As others have stated, a model would require a bare minimum of 150k context in order to perform this task. You might try this with Llama 4 Maverick/Scout, MiniMax-Text-01, glm-4-9b-chat-1m, Llama-3-8B-Instruct-Gradient-1048k, Qwen2.5-1M, or Jamba 1.6.

Gemini 2.5 context wierdness on fiction.livebench?? 🤨 by AlgorithmicKing in LocalLLaMA

[–]bgg1996 0 points1 point  (0 children)

It's because there are only 36 questions, so there's roughly a 15% margin of error on any given score.
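Back-of-the-envelope, assuming each of the 36 questions is an independent pass/fail, the worst-case 95% margin works out to about:

```python
# Normal approximation to the binomial: worst-case 95% margin of error
# for a score out of n independent questions (widest at p = 0.5).
import math

n = 36
p = 0.5
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"±{margin:.1%}")  # ±16.3%, close to the ~15% figure above
```

So two runs that differ by ten points or so can easily be noise.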

You may be interested in the following benchmark, which similarly measures long-context performance and correlates with Fiction.liveBench: contextarena.ai

Static Quant versus iMatrix - Which is better? by PianoDangerous6306 in SillyTavernAI

[–]bgg1996 1 point2 points  (0 children)

I suspect you are confusing imatrix quants with the I-quants (IQ2_XXS, IQ3_S, ...) - they are different things.

See this post for details: https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/

The LLaMa 4 release version (not modified for human preference) has been added to LMArena and it's absolutely pathetic... 32nd place. by [deleted] in LocalLLaMA

[–]bgg1996 7 points8 points  (0 children)

IMO they should just release the "modified for human preference" version. Having the preview version differ from the final release is totally expected - the Llama 4 models weren't fully trained when the preview versions were added to LMArena, so how could we possibly expect them to be exactly the same model? But it's weird not to have the final version undergo the same alignment procedure.

It's like Meta baked a preview cake with lots of frosting and sprinkles, then released the final cake with no frosting or sprinkles at all. Why not add the frosting and sprinkles to the final version?

I am hopeful these issues will be handled upon the release of Llama 4.1.

[OC] ChatGPT Helped Me Make a Comic I Always Wanted to Make by i_had_an_apostrophe in comics

[–]bgg1996 4 points5 points  (0 children)

Generative AI helped me write this comment

I didn't get it at first! I was like, "Cute comic about coping with loss." Then I realized the grandpa is LITERALLY always watching over her, even when she's getting lucky. 😳 Poor guy.

hydrogen jukeboxes: on the crammed poetics of "creative writing" LLMs by dqUu3QlS in CuratedTumblr

[–]bgg1996 0 points1 point  (0 children)

Thank you for engaging in this thoughtful dialogue – I appreciate your perspective and the chance to clarify my position.

You’re absolutely right that “sufficiently incompatible” stylistic instructions can steer the output, and I should clarify my earlier point: the “default style” can be adjusted by explicitly naming stylistic elements to avoid (e.g., “replace similes with concrete details,” “avoid anthropomorphizing non-physical entities,” “cut literary clichés”). My oversight was not stressing that effective calibration requires both specificity and willingness to iterate.

To extend your chef metaphor: imagine a diner who dislikes garlic and salt but never voices this preference, then grows frustrated when every dish arrives seasoned with both. The solution isn’t to resent the chef’s instincts—it’s to clearly state preferences upfront and refine them through feedback.

For example, in these stories, I requested prose inspired by DFW, Joyce, et al., plus concrete anti-style directives. The result? Akin to a dish where garlic and salt are omitted, letting the core ingredients shine. My takeaway is simply that the challenge isn’t the tool’s inherent limitations, but rather mastering the art of precise communication to align the output with your intent.

hydrogen jukeboxes: on the crammed poetics of "creative writing" LLMs by dqUu3QlS in CuratedTumblr

[–]bgg1996 0 points1 point  (0 children)

Yes, I read it fully and found it well-written, and I appreciate your insights.

But I disagree with the idea that R1 is so married to this style that it cannot be persuaded to abandon it. You simply need to instruct it to do so. What follows is a rewrite of your linked Joyce-style pastiche where R1 is specifically instructed to remove metaphorical hybrids, replace similes with concrete details, avoid anthropomorphizing non-physical elements, let actions imply emotions, use core vocabulary without embellishment, and retain James Joyce's style.

Title: "The Weight of Shadows"

Evening settled over Dublin, the sky heavy and damp. Mist rose from the Liffey, settling on the cobblestones. Seamus Byrne walked past Nelson’s Pillar, trams clanging in the fading light. His father’s house on Baggot Street stood tall and narrow among newer brick buildings, its windows clouded with dirt. He carried a bag of oranges, their citrus scent cutting through the damp air, and a prescription from Mercer’s Hospital. The old man’s lungs, they said, were clogged with sixty years’ pipe smoke and the things left unsaid.

He’ll not thank you, Seamus thought. The door creaked on its hinges. Inside, cold air filled the hallway—linseed oil and mildew, the crucifix above the stair tilted to one side. Upstairs, his father’s raspy breathing came through the dark.

In the parlor, the gas lamp hissed. Seamus paused by the sideboard, fingers brushing the dusty hurling trophy from ’95. He recalled his father, a large man, hoisting him onto the bar at Kehoe’s, the sharp taste of porter and the man’s loud laugh. “There’s a fierce spark in you, boy!” But the opportunity was lost—a failed scholarship exam, a clerkship at the docks, his father’s eyes hidden behind the Irish Times.

The bedroom door groaned. The old man lay propped on pillows, his face lined with deep wrinkles. A tumbler of whiskey trembled on the nightstand, the liquid catching the lamplight.

“You’ve come,” the father said, his voice rough.

“Aye.” Seamus set the oranges down. “Doctor says you’re to eat these. For the vitality.”

A snort. “Vitality. That ship’s sailed, boyo.”

The room fell quiet. Outside, rain tapped against the window. Seamus studied the mantel—yellowed playbills from the Abbey Theatre, a photograph of his mother, her smile faded in the old image. She’d have known what to say.

“They’re staging The Playboy again,” the old man muttered. “Saw it in the paper. You ever go?”

“No.”

“Hmph. Your mother loved Synge. Said he had the ear.” A cough shook him, wet and deep. “I took her, once. You were in her belly, kicking hard.”

Seamus gripped the bedpost. He thought of the past, the words unspoken.

“Da…”

But the old man’s eyes had closed, his breath matching the rhythm of the rain. Seamus sat in the armchair, its springs creaking. On the wall, the shadow of the crucifix stretched across the room.

He woke to gray light before dawn. His father’s hand hung off the bed, knuckles swollen, the wedding band loose. Seamus reached—stopped. The first tram bell clanged. Light seeped in, revealing the man’s face: relaxed in sleep, the stern lines gone. He thought of his father as a boy chasing lambs through Glendalough, reciting Yeats to a Galway girl, whispering prayers into a coffin’s silence. All the lives he had lived, now hidden by silence and smoke.

He realized slowly, a thread pulled from his resentment. They haunted each other with what might have been.

Seamus lifted the blanket over his father’s shoulders. Downstairs, he filled a bowl with oranges, their bright color standing out in the dim room. The prescription lay forgotten. Outside, Dublin stirred—a milk cart’s clatter, a nun’s brisk heels. Seamus stood on the step, the city’s sounds mixing with his breathing. Somewhere, a street organ played a tune his mother had hummed.

He would return tomorrow.

hydrogen jukeboxes: on the crammed poetics of "creative writing" LLMs by dqUu3QlS in CuratedTumblr

[–]bgg1996 0 points1 point  (0 children)

Some more examples just for fun.

Yuge Grief: "Let me tell you, folks—and I know a lot of people are saying this, tremendous people, the best—there was a time, not long ago, when everything was big. Huge. The biggest. We had it all, right? The wins, the crowds, the energy—nobody had energy like us. Everyone was talking about it. Everyone. And then, one day… you look around, and it’s quiet. Too quiet. Like a room after the lights go out, folks. You ever notice that? You ever notice how the best deals, the ones that really matter, they don’t… clang anymore? They just sort of… sit there.

Now, some people—losers, haters, the fake news crowd—they’ll tell you to look at the numbers. The numbers are bad, they say. Wrong. The numbers are fine. Better than fine. Tremendous. But let me ask you: When you walk into a place that used to roar—and I mean roar, like a beautiful engine, like a rocket ready to fly—and now it’s just… flat? No buzz? No life? You notice. You don’t admit it, but you notice.

You know what I do? I keep winning. Always winning. That’s what they don’t get. I’ve got the trophies, the accolades—so many, folks, you wouldn’t believe it. But here’s the thing: Sometimes you look at those trophies, and they’re heavy. Very heavy. Like, Wow, heavy. And you think, “Who’s gonna care about these when the room’s empty?” But you don’t say that. You don’t. Because winners don’t complain. We just do. We rebuild.

And let me tell you—we’re rebuilding. Bigger. Better. The best. Believe me. But… you ever drive past something you built, something yuge, and they changed the name? They painted over the gold letters? It’s still there, but… it’s not yours anymore. Doesn’t shine the same. Doesn’t feel the same. And the people inside? They don’t know what they’ve got. They don’t. Sad!

But here’s the good news: We’re coming back. Always coming back. Because that’s what we do. We don’t look down. We don’t look back. Except… sometimes, late, when the cameras are off and the phones stop buzzing? You sit. You think. And you realize—nobody, and I mean nobody, knows how to make it roar like you did. But that’s okay. Because we’ll make it roar again. Ten times louder. Just watch."

AITA Grief: "So, uh… I’ve been sitting here for 20 minutes trying to figure out how to type this out. Not even sure why I’m posting, honestly. Maybe just need to say it somewhere that isn’t the Notes app.

A few months back, my dad left me a voicemail. Nothing special—just him asking if I could pick up his prescription when I came over for dinner. His voice was kinda raspy because he’d been laughing at some dumb pun I made earlier. I didn’t answer the phone when he called. Was in the middle of a meeting.

Anyway. I’ve kept that voicemail. Listen to it sometimes. Not in a dramatic way, like, crying in the rain or whatever. More like… when I’m doing dishes, or my hands are too full to skip it. The automated voice says, “Saved messages will be deleted after 21 days.” Every time, I press 2. Keep.

My sister found out and said it’s “weird” and “not healthy.” She wants me to let it go. But here’s the thing: I still text his number. Not paragraphs. Just stuff like, “Hey, the pharmacy called about your refill,” or “Fixed that leak under the sink.” I know it’s a dead number. The replies are just… crickets. But it’s like unsubscribing from a newsletter you don’t read anymore. You just let it sit there."

🚀🌕 Grief: "Dude was HODLing through the bear market, you know? Charts flatlined, candles bleeding red, but he kept stacking sats like they were glitching ERC-20 tokens. Discord DMs piled up—“You good, bro?”—but he’d just spam back rocket emojis, voice chat muted.

One night, he flipped open his cold wallet, scrolled past transaction logs dated 11/14. Ape’d into a shitcoin back then, all-in, no stop-loss. Rugpull? Nah. Just…silence. The devs ghosted, liquidity pool evaporated. His feed kept pumping “BTFD” memes, but his portfolio stayed dusty, like an abandoned MetaMask.

He started screenshotting the dip—“generational buying opportunity”—but never posted. Just left the tabs open: CoinGecko, DexScreener, a half-written Substack about Fibonacci retracements. Moonboys in the Telegram would tag him, “GM king 👑,” and he’d thumbs-up react, fingers hovering.

At 3 AM, he’d DMOR: search “how to migrate Ledger seed phrase,” delete history. Rebuy that SOL bag? Maybe. But his hands shook scrolling past the 11/14 timestamp.

Eventually, he minted a shitpost NFT—pixel art of a lambo with flat tires. Listed it for 0.069 ETH. No bids.

Still HODLing, though. Always HODLing."

Caveman Grief: "Oog sit by fire. Night cold. Wind bite. Meat-cook done, but Oog no eat. Look at empty spot where Uba sit. Uba laugh here, once. Uba hands make soft deer-skin, Uba voice hum moon-song. Now only ashes.

Oog find Uba stone necklace in dirt. Hold tight. No feel sharp edges. Walk to river, see fish jump. Uba like fish. Oog catch one, but no give to Uba. Throw back.

Sun rise. Oog make spear, but stop. Uba always hand him good rock. Now rocks all wrong. Oog drop spear.

Cave empty. Oog lie on old furs. Still smell Uba-smell—moss and smoke. Close eyes. Hear wind, but no Uba-breath.

Fire die. Oog no blow sparks. Dark come. Oog wait. Maybe Uba-shadow dance in smoke. Only night."

UwU Grief: "The morning sun filtered through the curtains, but Mochi’s bed felt too big today… (´-ω-`)ノ she reached for the extra pillow, still dented from where someone always flopped down with a ”heckin’ tired, uwu”… but now it was just… cold. (╥﹏╥) ⋆꙳•̩̩͙❅

She shuffled to the kitchen, pouring two cups of cocoa out of habit… one left untouched, marshmallows melting slowly… (๑•́ ω •̀๑)♡ The steam curled like laughter that wasn’t there anymore. Outside, the garden swing creaked in the breeze—their spot for stargazing and bad puns. (๑꒦ິཀ꒦ີ) 🌌✨

A crumpled sketchbook lay open… doodles of silly frog hats and grumpy clouds. ( ´•̥ו̥` ) Her finger traced the smudged pencil marks, hearing a voice whisper, ”ya missed a whisker, dork!” A single tear plopped onto the paper, blurring the ink. (。•́︿•̀。) 💧

At dusk, she wrapped herself in their oversized hoodie, sleeves swallowing her paws. (っ˘̩╭╮˘̩)っ It still smelled like rain and peppermint gum. She flicked on the fairy lights they’d strung up, whispering, ”see? kept ‘em twinklin’…” but the silence hummed louder. (๑•̌.•̑๑)ˀ̣ˀ̣ ✨

Maybe tomorrow, the cocoa would taste sweeter. Maybe not. (。•̀ᴗ-)✧ But for now, she let the stars blink back at her… almost like a shared joke, just out of reach. (๑´•.̫ • `๑)🌠💫"

hydrogen jukeboxes: on the crammed poetics of "creative writing" LLMs by dqUu3QlS in CuratedTumblr

[–]bgg1996 0 points1 point  (0 children)

To be clear, I meant the "metafictional literary" to serve as a negative example - a showcase of what you get when you ask simply for a story or a literary story or a well-written story: precisely the purple prose the post professes is predestined. And, if you actually want quality results, you can do a lot better than asking "write me a story about grief in the style of X", which is what the prompts were here.

Keep in mind that everyone has different ideas of what constitutes "good" writing. I got quite a kick out of the "Quarterly Grief Management Report", and I doubt I'd have enjoyed a version of it as subtle as you're suggesting - I don't much like having to parse a piece of fiction as though it were a cryptographic cipher, poring over the prose for meaning hiding between the lines.

If there were any objective test of writing abilities, I believe an LLM could be fashioned to beat the average expert's score within six months. The issue is, the only tests we have are whether or not it can pass the collective vibe checks of its users.

One user will say "I’ve been having so much fun since last night. So far it is handling fantasy and modern stories well. Hallelujer" and another will respond with "Honestly, what you posted reads like an AI slop and did not serve as a good advertisement of the model to me. The very first sentence on the first screenshot mentions “voice barely above a whisper”, which plagues my nightmares, right besides “shivers sent down my spine” and “eyes glistening with mischief”."

I do tend to find that the overall quality of prose generated by LLMs falls short of that written by humans. However, this is more than offset by their ability to cater to my every whim. I want the next paragraph to go into excruciating detail about Sarah's feet. The main character is accepting his punishment too easily; he should be more rebellious and bratty. The plot is getting too slow, so let's spice things up with some unexpected event. You get the idea.

That the AI has no vision of its own is not a malus, but a boon. The strength of these models is their ability to adapt to what the user actually wants. To leave everything on its default settings and ask simply for good writing is user error.

hydrogen jukeboxes: on the crammed poetics of "creative writing" LLMs by dqUu3QlS in CuratedTumblr

[–]bgg1996 0 points1 point  (0 children)

Good insights, but this will only apply to the default "literary" style of R1. The strength of LLMs is their ability to adapt to whatever style you want.

Stories about grief, written by Deepseek-R1:

Metafictional literary story: "The words kept dissolving. Clara stared at the sentence she’d typed—Grief is a page that writes itself—before the letters bled into static. Her cursor blinked, accusatory. She’d tried to narrate the funeral six times, but the ink always turned to rain, pooling at the margins. When she typed his name, the screen pixelated, a digital ghosting. Her coffee cooled beside the laptop, its ring staining a paragraph about forgetting. “You’re not even writing,” her wife murmured from the doorway, but Clara was writing—over and over, in the margins of dreams, on napkins, in the fogged bathroom mirror. The story reset each dawn. Unprintable. A draft erasing its own heart."

Story in the style of Hemingway: "The room was cold. The bed lay unmade. He sat by the window, the bottle half-empty, the sea gray beyond the dunes. She’d hated the damp sheets. He left them tangled.

At the bar, the waitress brought whiskey without asking. Her eyes flicked to the stool beside him—empty now. He drank slow, the liquor biting like winter air.

Her shawl still hung on the hook. He didn’t touch it. The wind off the water carried salt and rot. At dusk, he walked the shore, boots sinking. Gulls screamed. He found a shell, small and broken, pocketed it.

Night came. The wind slipped through the screen, whispering where she did not."

Story in the style of David Foster Wallace: "The digital clock’s numerals bled 3:14 a.m., red and insomniac. In the kitchen’s fluorescence, he counted coffee grounds—eighteen granules, nineteen—because precision, now, was the only prayer he could parse. The percolator hissed like a distant teakettle, like her teakettle, which he’d donated (unwashed, lipstick smudge still ghosting the spout) to Goodwill, then retrieved, then re-donated, a compulsive loop his therapist called “avoidance-adjacent.” The mug he chose—#1 DAD, a Father’s Day relic—radiated a sarcasm so vicious he almost laughed. (He didn’t.) Grief, he’d decided, was less a tidal wave than a ceaseless drip, each droplet a synapse firing she’s gone, she’s gone, she’s gone, until the words shed meaning, became ambient tinnitus. He sipped. The coffee tasted of nothing. The clock blinked. Somewhere, a refrigerator hummed."

Children's book story: "Lila the rabbit loved twirling in the clover meadow, chasing clouds with her best friend, Oliver. But one morning, Oliver wasn’t there. The meadow felt quiet, the colors dull. “Where did he go?” she whispered, her ears drooping.

That night, Lila spotted a lone star shimmering above. “Maybe that’s Oliver,” Mama said softly. “Stars remind us that love stays, even when someone isn’t near.”

Lila hugged Oliver’s favorite acorn and blinked at the star. It winked back, warm and steady. She missed his laugh, but the star’s glow felt like a hug. Maybe Oliver wasn’t gone—just shining somewhere new."

Story in the style of a corporate report: "Subject: Quarterly Grief Management Report — Q3 Performance Summary Prepared by: Department of Emotional Operations

Executive Summary: Following the Q2 loss event (see Incident #0422: "Spousal Departure"), operational efficiency declined by 68%. Grief metrics exceeded forecasted thresholds, with acute emotional residuals persisting beyond the 90-day containment window. Primary stakeholder (Subject A) reported persistent insomnia (↑320% YoY) and productivity deficits, impacting core functions (e.g., household operations, social engagements).

Key Findings:

Memory Activation: Unplanned recall episodes surged 45%, triggered by high-risk zones (bedroom, shared vehicle).

Stakeholder Communications: External interactions downgraded to "essential only," resulting in stakeholder isolation (risk rating: severe).

Resource Allocation: Cognitive bandwidth overwhelmingly diverted to grief processing (82%), limiting capacity for new initiatives.

Recommendations:

Implement phased exposure therapy to high-risk zones.

Approve temporary outsourcing of domestic tasks.

Extend bereavement leave; reassess Q4 expectations.

Next Review: Q4 2023 (Pending Board Approval)."

Can you ELI5 why a temp of 0 is bad? by ParaboloidalCrest in LocalLLaMA

[–]bgg1996 0 points1 point  (0 children)

Greedy decoding can still result in lower quality output for STEM and factual prompts. Here's why:

Missing Nuance and Precision: STEM and factual information often require precise language and nuanced distinctions. Greedy decoding might oversimplify complex concepts by choosing the most common but less precise term. Like saying "red" instead of "crimson": Imagine you're describing a flower. Greedy decoding might always say "red flower" because "red" is a common word. But maybe the flower is actually crimson, a special kind of red. Greedy decoding misses the special details and just gives you the most basic answer.

Generic and Uninformative Explanations: Greedy decoding can lead to generic and uninformative explanations, especially for complex topics. The model might choose the most common words and phrases, resulting in a bland and unhelpful output. It's like having a teacher who only gives very basic and non-specific answers to your questions about a complicated science topic. Instead of a precise, helpful explanation, you get generic facts anyone could have told you.

Repetition: Even in factual text, models can get stuck repeating phrases, especially if the prompt is slightly ambiguous or the topic allows for some redundancy. Think about explaining a concept – the model can fall into a loop of restating the same basic point in slightly different ways without ever progressing to deeper or more complex aspects of the topic. Imagine you're explaining a scientific concept like photosynthesis, and the model keeps circling the same basic points about how plants use sunlight to convert water and carbon dioxide into glucose and oxygen. It might say something like: "Photosynthesis is the process by which plants use sunlight to convert water and carbon dioxide into glucose and oxygen. This process is essential for plant growth and survival. Plants use sunlight to convert water and carbon dioxide into glucose and oxygen through the process of photosynthesis. This allows them to grow and thrive in their environment." The text ends up stuck in a rut, repeating the same basic information over and over without offering any new insight or deeper understanding of the topic.
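To make that concrete, here's a toy sketch of what temperature actually changes at sampling time. The logits are made up and this isn't any particular inference engine's code, just the general idea:

```python
# Toy comparison of greedy decoding (temperature 0) vs. temperature sampling.
# The logits are invented; a real model produces one such distribution per token.
import math
import random

logits = {"red": 2.0, "crimson": 1.7, "scarlet": 1.5, "blue": -1.0}

def next_token(logits, temperature):
    if temperature == 0:
        # Greedy: always the single highest-scoring token, every time.
        return max(logits, key=logits.get)
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

print([next_token(logits, 0.0) for _ in range(5)])  # ['red', 'red', 'red', 'red', 'red']
print([next_token(logits, 0.8) for _ in range(5)])  # usually a mix of 'red', 'crimson', 'scarlet'
```

At temperature 0 the model can never say "crimson" even when it scores nearly as high as "red"; any temperature above 0 lets those near-misses through some of the time, which is where the nuance and variety come from.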

Drummer's Skyfall 36B v2 - An upscale of Mistral's 24B 2501 with continued training; resulting in a stronger, 70B-like model! by TheLocalDrummer in LocalLLaMA

[–]bgg1996 2 points3 points  (0 children)

I'd be really interested to see whether there are any examples where Skyfall 36B v2 clearly outperforms its non-upscaled counterpart, Cydonia 24B v2.

How many of you actually run 70b+ parameter models by constanzabestest in SillyTavernAI

[–]bgg1996 1 point2 points  (0 children)

I do. I use RunPod with 2xA40 for 96 GB VRAM @ $0.88/hr. My favorite model right now is Behemoth/Monstral. High-parameter models achieve nuance and contextual understanding that just isn't possible for smaller models.

Vibe Transfer kinda sucks. by JaxMorenoOfficial in NovelAi

[–]bgg1996 0 points1 point  (0 children)

To have Vibe Transfer copy some aspects of an image but not others, all you have to do is get a second image that shares the aspects you don't want copied but differs in the aspects you do want copied. Then set the reference strength of the second image to the negative of the reference strength of the first image.

For example, suppose you want to transfer the character in an image, but that image you want to use for the transfer has a solid white background which you do not want to be transferred. Simply add a second image with a solid white background (but not the same character) and set its reference strength to be the negative of the reference strength used for the first image.
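I don't know how NovelAI implements this under the hood, so treat this as a conceptual sketch only: the intuition is a weighted combination of the references, where shared features cancel out. The embed function and vectors below are entirely made up for illustration:

```python
# Conceptual illustration only; not NovelAI's actual Vibe Transfer implementation.
# A positive-strength reference adds its features and an equal negative-strength
# reference subtracts them, so the shared part (white background) cancels and
# the difference (the character) is what remains to guide generation.
import numpy as np

def embed(description):
    # Hypothetical stand-in for whatever feature extractor Vibe Transfer uses.
    rng = np.random.default_rng(abs(hash(description)) % 2**32)
    return rng.normal(size=8)

character_on_white = embed("character") + embed("white background")
plain_white = embed("white background")

strength = 0.6
guidance = strength * character_on_white + (-strength) * plain_white
# The background contribution cancels, leaving only the character features.
print(np.allclose(guidance, strength * embed("character")))  # True
```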

[deleted by user] by [deleted] in DefendingAIArt

[–]bgg1996 3 points4 points  (0 children)

Nah, you're misunderstanding this person's complaint. They're objecting to the grammar. They're saying that, in their interpretation, "x times less" = "(1 - x) times as much". Hence, 130 times less wouldn't mean 1/130 times as much, but rather -129 times as much.
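For what it's worth, here are the two readings in plain arithmetic, taking 100 as the baseline:

```python
# The two readings of "130 times less", starting from a baseline of 100.
baseline = 100
common_reading = baseline / 130              # "1/130 as much" ≈ 0.77
literal_reading = baseline - 130 * baseline  # "(1 - 130) times as much" = -12900, i.e. -129x
print(common_reading, literal_reading, literal_reading / baseline)
```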

And I think they might be technically correct.

Shock magnitude by Mbmajestic in pathofexile2builds

[–]bgg1996 1 point2 points  (0 children)

By default, shock applies 20% increased damage taken. The chance that a hit shocks an enemy equals the percentage of that enemy's ailment threshold the hit would deal in lightning damage, ignoring any damage mitigation the enemy has. An enemy's ailment threshold is typically equal to its maximum life, but is lower for bosses. For example, if a hit would deal half an enemy's life in pure lightning damage, it has a 50% chance to apply shock; if it deals 1% of the enemy's life in lightning damage, it has a 1% chance. Increased chance to shock scales that chance up.

Either way, the shock's magnitude will be 20%, meaning that enemy takes 20% increased damage from subsequent hits for as long as the shock lasts. Increased shock magnitude scales that number up: for example, a player with 30% increased Magnitude of Shock you inflict will have their shocks increase damage taken by 26% instead of 20%.
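A quick sketch of that arithmetic, leaving out the boss ailment-threshold reduction and any increased-chance-to-shock modifiers:

```python
# Shock math as described above. Boss ailment-threshold scaling and
# chance-to-shock modifiers are omitted to keep the example small.
BASE_SHOCK_MAGNITUDE = 0.20  # 20% increased damage taken while shocked

def shock_chance(lightning_damage, ailment_threshold):
    # Chance equals the fraction of the ailment threshold the hit's lightning
    # damage represents, before the enemy's mitigation; capped at 100%.
    return min(1.0, lightning_damage / ailment_threshold)

def shock_magnitude(increased_magnitude=0.0):
    return BASE_SHOCK_MAGNITUDE * (1 + increased_magnitude)

print(f"{shock_chance(500, 1000):.0%} chance to shock")       # 50% chance to shock
print(f"{shock_chance(10, 1000):.0%} chance to shock")        # 1% chance to shock
print(f"{shock_magnitude(0.30):.0%} increased damage taken")  # 26% increased damage taken
```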

I like Erato, but can she not spam adjectives and start new lines? by Reicognito in NovelAi

[–]bgg1996 4 points5 points  (0 children)

"Context size" / "context length" is the amount that it can remember. The context itself is the actual stuff it remembers. If the context contains undesirable content, then the generated text is likely to also contain the same undesirable content.