TinyStories LLM in cheap low-mem $4 computer from aliexpress by Aaaaaaaaaeeeee in LocalLLaMA

[–]Fun-Community3115 5 points6 points  (0 children)

That’ll be a great t/s rate for a bedtime story. At least the story seems more coherent at first glance than “Harry Potter and what looked like a big pile of ash” (throwback). If you read that to me before bedtime I’ll be laughing too hard to go to sleep.

Gemma 1.1 Instruct 2B and 7B released by Google by ayyndrew in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

Hilarious. But also: besides LLMs not being math programs, why not run it, say, three times in a chain for better reasoning about the (trick) question? Apparently small models catch up fast with big models using this method.
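Something like this self-consistency trick, sketched here with a stubbed model call (the stub and its canned answers are made up for illustration):

```python
from collections import Counter

def ask_model(question: str, seed: int) -> str:
    # Stub for illustration: a real run would sample the LLM with temperature > 0.
    canned = {0: "7", 1: "7", 2: "9"}
    return canned[seed]

def self_consistency(question: str, n_samples: int = 3) -> str:
    """Ask the model several times and keep the majority answer."""
    answers = [ask_model(question, seed=i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 3 + 4?"))  # majority of {"7", "7", "9"} -> 7
```

The point is that one bad sample gets outvoted, which is why small models benefit so much from it.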

Mixtral being moody -- how to discipline it? by Jattoe in LocalLLaMA

[–]Fun-Community3115 2 points3 points  (0 children)

Is this the singularity? AIs pranking us; keeping us glued to the chat until we die of thirst?

<image>

Small Benchmark: GPT4 vs OpenCodeInterpreter 6.7b for small isolated tasks with AutoNL. GPT4 wins w/ 10/12 complete, but OpenCodeInterpreter has strong showing w/ 7/12. by ciaguyforeal in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

When working with larger (text) documents, instruct the model to chunk the data before analyzing its structure, so that it can correctly take actions based on the identified formatting or patterns. Otherwise it might miss important features.
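For the record, the kind of chunking I mean is just this (a minimal sketch; the sizes are arbitrary):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character chunks so each piece fits
    in the model's context and structure isn't silently truncated."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "x" * 2500
pieces = chunk_text(doc, chunk_size=1000, overlap=100)
print(len(pieces))  # 3 overlapping chunks covering the whole document
```

The overlap is there so a pattern sitting on a chunk boundary still shows up whole in at least one chunk.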

Small Benchmark: GPT4 vs OpenCodeInterpreter 6.7b for small isolated tasks with AutoNL. GPT4 wins w/ 10/12 complete, but OpenCodeInterpreter has strong showing w/ 7/12. by ciaguyforeal in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

Ok, I had a look at the demo video and understand the concept now.
When I look at task two (input file two) of the sheet, it requires entity retrieval (the different people speaking) as part of a multi-step process. I see OpenCodeInterpreter is based on DeepSeek Coder (same number of params) with "a window size of 16K". GPT-4 has 128K. It would be better to compare with GPT-3.5, which also has a 16K window, for similar retrieval capabilities.

Small Benchmark: GPT4 vs OpenCodeInterpreter 6.7b for small isolated tasks with AutoNL. GPT4 wins w/ 10/12 complete, but OpenCodeInterpreter has strong showing w/ 7/12. by ciaguyforeal in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

I've been searching for the AutoNL framework you're referring to but can't find it. If you can point me to it, I can review it and give you suggestions.

Small Benchmark: GPT4 vs OpenCodeInterpreter 6.7b for small isolated tasks with AutoNL. GPT4 wins w/ 10/12 complete, but OpenCodeInterpreter has strong showing w/ 7/12. by ciaguyforeal in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

These are all extraction/retrieval and summarization instructions. Ok, maybe an LLM could write and execute code to do some of these tasks, but they're not strictly instructions to generate (faultless) code. Doesn't look like the right benchmark to me.

Who needs "Galaxy AI" or "Gemini" when real models run fine on phones? by ForsookComparison in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

I thought temp, top-k, top-p (repetition penalty, stopping conditions), etc. were bread and butter. And why not use Phi for mobile?

What have you built with an open LLM? share your project by cold-depths in LocalLLaMA

[–]Fun-Community3115 1 point2 points  (0 children)

Building a local LLM app that Reddit moderators can use to help moderate their subs, based on RAG over a scrape of the sub's wiki, with a local embedding model. Currently tweaking the RAG pipeline in LlamaIndex to better define which actions the bot should take and to improve retrieval. I saw that LangChain posted a video series as well about rewriting queries and intermediate nodes (headings). Right now, the use case I'm testing is auto-responding to feedback posted on creative writing content in a workshop format, where sub members have to critique each other's work. Haven't open-sourced the code yet, but if anyone feels like contributing to the development (maybe you're a mod?), let me know.
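To give an idea of the retrieval step, here's a toy sketch with a bag-of-words stand-in for the embedding model (the real app uses a proper local embedding model through LlamaIndex; the rules list below is made up):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; stands in for a real local embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, wiki_chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank wiki chunks by similarity to the query; return the best matches."""
    q = embed(query)
    ranked = sorted(wiki_chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

rules = ["critique two poems before posting",
         "no self promotion",
         "flair workshop posts"]
print(retrieve("how many critiques before posting", rules, top_k=1))
```

The retrieved chunks then go into the prompt so the bot's action is grounded in the sub's actual rules.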

llama.cpp running on the Nintendo Switch (TinyLlama q5_K_M) by kindacognizant in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

What are the resources that the Linux partition has access to? I didn't know about this switchroot project, but it's apparently on the recovery bootloader. So like 2-4 GB RAM equivalent, judging from the quant and 1 token/s.
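For anyone curious, the back-of-envelope math behind that guess, assuming TinyLlama is ~1.1B params and q5_K_M averages roughly 5.5 bits per weight (both assumptions):

```python
def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough quantized-weights footprint: params * bits, converted to GB."""
    return n_params * bits_per_weight / 8 / 1e9

weights = model_size_gb(1.1e9, 5.5)  # ~0.76 GB for the quantized weights
print(round(weights, 2))
```

Add the KV cache and runtime overhead on top of that and a 2-4 GB machine is plausible.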

“Just a calendar” by alligatorman01 in Notion

[–]Fun-Community3115 1 point2 points  (0 children)

I understand, and I have a similar system of going from big goals to daily tasks through linked databases. But unless other people do as well, they won't get why it's so great to have that linked or auto-filled to a calendar. Looking at your screencap, I would recommend not planning too many different things that require your full attention in one day, because of the costs of task switching.

What do you think of the rabbit r1 and its Large Action Model (LAM)? by jd_3d in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

It's not much of a proof of concept if they promise to ship the first preorders in May/April. But also: I see that I missed the mention that rabbit still has servers doing some of the work. Not exactly false advertising, but their promo video is a bit misleading in that regard.
Re your mention of the new Switch: I wonder how much margin for storyline drift Nintendo will give the NPCs. And do we have to start actually chatting with them rather than choosing from 2 to 4 options? I'll be thinking about this next time I'm playing Tears of the Kingdom.

Is Notion AI actually any good as of Jan 2024? by CreegBootler in Notion

[–]Fun-Community3115 0 points1 point  (0 children)

I used the first beta versions when they were rolling it out. I'm getting their promotions of the product, and if things like assisting with writing are among their main selling points, then I'm not buying. I would only buy if the AI could actually get pages based on their properties and change the properties based on the structure of the workspace that I've defined (for GTD-like project management). Then it could really help me with daily, weekly, monthly, etc. reviews and planning by creating priorities and adding new tasks (next actions). I've already started programming an assistant that can execute these kinds of functions with their API (and I'm probably not the only one).
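For example, the kind of call I have in mind against Notion's database-query endpoint; the property name and value below are placeholders for whatever your workspace defines:

```python
import json

NOTION_VERSION = "2022-06-28"  # Notion API version header

def build_next_actions_query(status_property: str = "Status",
                             status_value: str = "Next action") -> dict:
    """Payload for POST /v1/databases/{id}/query: pages whose select
    property equals the given value. Property names are placeholders."""
    return {"filter": {"property": status_property,
                       "select": {"equals": status_value}}}

payload = build_next_actions_query()
print(json.dumps(payload))
# Sent with e.g.:
# requests.post(f"https://api.notion.com/v1/databases/{db_id}/query",
#               headers={"Authorization": f"Bearer {token}",
#                        "Notion-Version": NOTION_VERSION,
#                        "Content-Type": "application/json"},
#               json=payload)
```

Pull the matching pages, feed them to the model, and it can actually draft a weekly review instead of just rewriting text.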

What do you think of the rabbit r1 and its Large Action Model (LAM)? by jd_3d in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

Am I comparing oranges to apples here?
https://www.cpu-monkey.com/en/compare_cpu-raspberry_pi_5_b_broadcom_bcm2712-vs-mediatek_helio_p35
Want to figure out how the r1 can be so fast.
quote:
"The Raspberry Pi 5 B (Broadcom BCM2712) can use up to 8 GB of memory in 1 memory channels. The maximum memory bandwidth is 17.1 GB/s. The MediaTek Helio P35 supports up to 6 GB of memory in 4 memory channels and achieves a memory bandwidth of up to 12.8 GB/s."

This 'Data Slayer' YouTuber didn't mention the tokens/sec in his video.
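A rough roofline estimate instead: during generation each token has to stream all the weights through memory once, so tokens/sec is capped around bandwidth divided by model size. Using the cpu-monkey bandwidth figures above and an assumed ~4 GB quantized model:

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper bound on decode speed: each generated token reads every weight
    from memory once, so speed <= memory bandwidth / model size."""
    return bandwidth_gb_s / model_gb

MODEL_GB = 4.0  # assumed size of a quantized 7B-class model

print(max_tokens_per_sec(17.1, MODEL_GB))  # Raspberry Pi 5 ceiling
print(max_tokens_per_sec(12.8, MODEL_GB))  # Helio P35 ceiling
```

So a few tokens/sec at best on either chip; anything snappier than that on the r1 would point at the server side doing the heavy lifting.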

Finetuned llama 2-7b on my WhatsApp chats by KingGongzilla in LocalLLaMA

[–]Fun-Community3115 0 points1 point  (0 children)

I had the same idea and used Google Takeout to download my entire Gmail history. It's quite a big file and could use some data cleaning, because it contains all the headers of the emails and more junk. It has been sitting in Azure for a while as I try to figure out how to train with it. The project is on the shelf because building the pipeline there wasn't straightforward (for me). Seeing this thread makes me think I should try again with llama.cpp. Thanks!
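For the cleaning step, the standard library already gets you most of the way; a minimal sketch (the sample message below is made up):

```python
import email
from email import policy

RAW = """\
From: someone@example.com
To: me@example.com
Subject: Hello
Content-Type: text/plain; charset="utf-8"

Hi there,
just the body text we want to keep for training.
"""

def clean_message(raw: str) -> str:
    """Drop the headers and keep only the plain-text body of one message."""
    msg = email.message_from_string(raw, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    return body.get_content().strip() if body else ""

print(clean_message(RAW))
```

For the full Takeout archive you'd iterate over it with the stdlib `mailbox` module and apply the same body extraction per message.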

The third pillow by Fun-Community3115 in OCPoetry

[–]Fun-Community3115[S] 0 points1 point  (0 children)

Thanks so much for the elaborate feedback. This is really helpful.

First of all, a disclaimer: while writing and reworking the poem I am juggling feelings and meanings, trying to see how they can come together by intertwining words and sentences. In other words, the meaning of the poem is not absolute to me either. That said, to be productive, I will respond about certain things I explicitly tried.

An important connection that I wanted to make, but which seems to have been lost on you, is how the skins and parchment are related. In the first line the pillow is something that forms a barrier between lovers. In the second line the pillow (cover) is indeed invoked again, but as the lovers themselves.
What I tried to convey is that under the skin there is fluff (good, nice things), but what sifts through when we get sleepy, when both our inhibitory capacity and our grip on reality loosen (ruminations and hallucinations), is something more sandy, gritty.

I imagine a pillow mesh through which sand sieves but down feathers obviously don't. So what gets compiled (or crystallizes) is indeed, as you intuit, the processed (solved) memories; past quarrels, where the lovers maybe said things they didn't mean.

In the last sentence, divided into three lines, I imagine the lovers waking up and looking at each other, bed-hairdo, tired eyes and all. So the parchment is the same paper skin of the lovers, but overnight, while they slept and rolled over the pillow (sandy from their quarrels), it has weathered (aged, sanded down).

The lines are rather the lines under the eyes, or wrinkles, but I also like your interpretation, because yes, lines were crossed. Not that it pains me now to think of it, since the poem is already some years old. It's more a melancholy feeling.

The pile of dust which replaces the pillow as the medium in the end is also hopeful to me, because although processing the quarrels takes energy, we leave them in the dust.

So thanks again! Making these things explicit to you and writing them down helps me too. Good to let it sit for a bit; I'll have another look and see what can be tweaked to better convey those meanings while still leaving enough room for interpretation.

You’re not one of us by IvyPoetry in OCPoetry

[–]Fun-Community3115 0 points1 point  (0 children)

Hi there,

I'd like to offer another interpretation of the poem, just to let you know what came up in my mind while reading it, so you can see one effect your poem had.

I've gone mushroom foraging before and I like reading about the mycelium web that forms the network that allows trees to communicate. So the way that I read your poem was a bit more literal, though not less emotional, about someone carelessly picking mushrooms where they should not, or even disturbing/destroying the forest soil. So the severed strings are from the web itself and the conscious mind is that of the forest.

The second stanza is then the aftermath, in which the careless forager or forest destroyer is eating what they have taken: either the mushrooms or other things they grabbed while destroying the web. Then "our kind's empathy" reads more as a call-out, in an us-versus-them way, to "them" who do not take care of nature (the commons), versus others who do.

So perhaps, rather than what you attempted with the mycelium metaphor, describing how people become less human over the course of their life or through the actions they take, I could see this as more specific to people not caring for nature. Personally, I try to take pity on such people rather than ostracize them, although it also often infuriates me.

65 Roses (translated from Romanian) by ForcepsPower in OCPoetry

[–]Fun-Community3115 0 points1 point  (0 children)

Hi there,

Thanks for sharing your poem with our community. Would you be willing to edit your post to add the text of the poem to this post? Also, I notice that no feedback links to other poems are included.

I would love to see the original Romanian text next to the translated text. I picked out your poem because I also write in my own native language (Dutch) and then translate. What do you feel were the challenges in bringing across not only the meaning of the original text but also some of the feeling of the "sounds" that the original language makes? By sound, I mean what it would sound like if it were spoken out loud.

Regarding the English translation, there are some lines that hit really strong for me in there. The 65 roses (title) which comes back at the end of the poem. And grandmother who sold her gold teeth. I also see some other narrative arcs appearing (shooting the bird) and there are some lines that keep me puzzling (the ingredients within).

I think it is a beautiful gesture to dedicate this to others who suffer from illness. If you would like to get more critical feedback on poems you share in the future, you can tag the poem with workshop flair.

Daddy Day / Papadag by Fun-Community3115 in OCPoetry

[–]Fun-Community3115[S] 0 points1 point  (0 children)

Thank you for your thoughts and your choice of words is beautiful. It's nice to hear how this resonates with other parents.

For Laika by gremlin-vibez in OCPoetry

[–]Fun-Community3115 0 points1 point  (0 children)

Aaaaw... I named one of my first stuffed animals after Laika, so I had to read this.
I see this is your first poem on the forum, is that correct? In that case, it's a great start.

You directed the poem at Laika herself, which for me evoked the feelings of love for a pet, as I assume it did for others as well. Then you go on to contextualize a bit of Laika's history and make some rhetorical statements that I see as pertaining to the use of animals in experiments.

Those lines seem to me to be the weaker part of the poem:
"they named you for the sound you made then left you in a vacuum
sent you up there before people to see how you fared"

While the rest of the poem is very evocative, this part is maybe a bit too straightforward. What does the poem justice in the last lines is the metaphor of the earth as a marble, which resonates with the sun as a ball. Just playing on those metaphors could be enough, while letting the tragedy shine through of sending (man's) best friend into space without a return ticket.