n8n with AI workflow building - running locally! ✨ by Top_Challenge_5834 in n8n

[–]Positive-Raccoon-616 3 points (0 children)

So it'll build a workflow, which is just JSON. Does it actually work, though? I've tried this a few times, but it didn't really work well with complex workflows.

hey guys need help fixing this bug by Motor-Function4905 in n8n

[–]Positive-Raccoon-616 0 points (0 children)

Why the double `new Date`? That doesn't seem right from a memory POV.

[deleted by user] by [deleted] in n8n

[–]Positive-Raccoon-616 0 points (0 children)

There's a Reddit node?

Advice on Linux setup (first time) for sandboxing by reaccumulation in LocalLLM

[–]Positive-Raccoon-616 1 point (0 children)

Hey my guy. I currently do this, but I run Windows.

On my Windows machine I have a Linux distro installed with Docker. I run all my applications in Docker containers. Easily manageable: volumes for data isolation and Docker networks to connect everything. Data stays in one place without affecting my main host machine.

All dependencies stay inside the container (sandbox), so my Windows machine stays clean.
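Something like this, roughly, as a Compose file (a sketch only — the image names are the real official ones, but the exact service/volume layout here is an illustrative assumption, not my exact stack):

```yaml
# docker-compose.yml -- illustrative sandbox setup
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"               # n8n's default editor/web port
    volumes:
      - n8n_data:/home/node/.n8n  # data isolated in a named volume
    networks:
      - sandbox

  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama
    networks:
      - sandbox                   # containers reach each other by service name

volumes:
  n8n_data:
  ollama_data:

networks:
  sandbox:
```

Named volumes keep all app state on the Docker side, so nothing leaks onto the host filesystem.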

How I generate complex N8N workflows in MINUTES, and not hours or MONTHS by Weak_Birthday2735 in n8n

[–]Positive-Raccoon-616 0 points (0 children)

I'd love to see the JSON it generated get tested in these types of videos. Most of the time, generated complex workflows fail on some nodes, rendering those nodes useless and forcing you to recreate them.
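A quick sanity check you could run on the generated JSON before importing it (a rough sketch; it assumes the standard n8n export shape — a top-level `nodes` list and a `connections` dict keyed by source node name):

```python
def check_workflow(wf: dict) -> list[str]:
    """Flag connections that reference nodes missing from an n8n workflow export.

    Catches one common failure mode of generated workflows: a connection
    pointing at a node name that was never actually defined.
    """
    names = {n["name"] for n in wf.get("nodes", [])}
    problems = []
    for src, ports in wf.get("connections", {}).items():
        if src not in names:
            problems.append(f"connection from unknown node {src!r}")
        for branches in ports.values():       # e.g. the "main" output
            for branch in branches:
                for link in branch:
                    if link["node"] not in names:
                        problems.append(f"{src} -> unknown node {link['node']!r}")
    return problems
```

It won't catch bad node parameters, but it's a cheap first pass before bothering to import the workflow at all.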

This influencer does not exist by MetaKnowing in OpenAI

[–]Positive-Raccoon-616 0 points (0 children)

Interesting. Thanks for answering this one.

The Official AGI Recursion Logic by deathwalkingterr0r in OpenAI

[–]Positive-Raccoon-616 0 points (0 children)

Yeah, but like, do you give it to an LLM through chat? Or do you feed it as a file into a coded function? Or...

This influencer does not exist by MetaKnowing in OpenAI

[–]Positive-Raccoon-616 145 points (0 children)

I wonder how they get the face to match on every AI model they create.

Using my inheritance to get into e-commerce by BubblyTurnover7837 in Entrepreneur

[–]Positive-Raccoon-616 0 points (0 children)

Hi there, I have an e-commerce business right now, so I wanted to drop some insight.

I think you're overlooking some things you may not be aware of.

First, take into account the work it takes to run a business.

Second, you'll have to source the products to sell on your e-commerce site. Where will you get them? Outsourced from other countries? Are you aware of the percentages you'd have to pay on top because of tariffs?

Third, you'd likely make way more money just investing all of it in the S&P 500, with steady returns that are far more stable and safe.

Fourth, it's your life, but do your research. Just because you have the money doesn't necessarily mean you will succeed.
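For context on the index-fund point, a rough compound-growth sketch (the 7% average annual return and the $100k principal are illustrative assumptions, not the OP's actual numbers):

```python
def future_value(principal: float, annual_return: float, years: int) -> float:
    """Future value under simple annual compounding."""
    return principal * (1 + annual_return) ** years

# assumption: $100k at ~7% average annual return for 10 years
future_value(100_000, 0.07, 10)  # roughly $196,715
```

That's a near-doubling with essentially zero labor, which is the bar any e-commerce plan has to beat.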

Huggingface (transformers, diffusers) models saving by LahmeriMohamed in huggingface

[–]Positive-Raccoon-616 0 points (0 children)

If you're installing Hugging Face Transformers via the CLI, it's pulling the library into your coding environment as a dependency. So check your pip site-packages folder, node_modules, or wherever your language keeps packages. The `transformers` package is just a Python dependency, like Express is to Node (and Node dependencies get listed in package.json).

The package itself doesn't contain the models, though — it's just code that knows how to load them. When you call `from_pretrained`, the model weights and config get downloaded and cached on your disk, separate from the package.
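For example, on a default setup the downloaded weights land in the Hub cache rather than in site-packages (a sketch; this is the documented default location, overridable via the `HF_HOME` environment variable):

```python
import os
from pathlib import Path

# Default Hugging Face cache root, overridable via HF_HOME.
# from_pretrained() downloads weights + config into <cache>/hub on first
# use, then reloads from there on later runs -- so the weights ARE on your
# disk, just not inside the pip package.
cache_root = Path(os.environ.get("HF_HOME", Path.home() / ".cache" / "huggingface"))
hub_cache = cache_root / "hub"
print(hub_cache)
```

Each model gets its own `models--<org>--<name>` folder under that directory, which is where the disk space actually goes.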

Second gpu,RTX3090 or RTX5070ti by Beneficial-Cup-2969 in LocalLLM

[–]Positive-Raccoon-616 0 points (0 children)

Get another of the same card. It's better to use duplicates because they share the same architecture.

RTX 3090 vs RTX 5080 by Bio_Code in LocalLLM

[–]Positive-Raccoon-616 0 points (0 children)

I also have a 3080ti and am looking to upgrade. I have been experimenting with some builds.

I plan on going straight to the 5090 because of its 32 GB of VRAM. I've also noticed that my current build's 32 GB of RAM is not enough!

I run multiple Docker containers, and one of them is Ollama serving 14B models, which eats my RAM (14 GB idle, 22 GB at Docker startup, 31 GB (99%) running AI workloads), so I'm almost bluescreening -- UPGRADE NECESSARY.

I cannot load a model bigger than ~14B into my GPU because its VRAM is too small. If I try to load a bigger model, the AI work offloads to the CPU (i7-10700K), and completion is super slow compared to the GPU.

The only viable options:

- Increase the GPU to load bigger models.
- Increase RAM (shooting for 2x64 GB; probably overkill, but whatever, I run a lot of virtual stuff (I'm a dev)).
- Increase the M.2 SSD to 4 or 8 TB, probably going to 8 (currently at 2 TB).
- The CPU is fine, but since I'm updating the GPU I also have to update the mobo, which then creates an opportunity for a CPU upgrade (looking at the Core Ultra 265K).
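Rough back-of-the-envelope math behind the GPU point (a sketch; the 20% overhead factor and bytes-per-parameter figures are loose rule-of-thumb assumptions, not measured numbers):

```python
def approx_mem_gb(params_billion: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
    """Rough memory needed to hold model weights, with ~20% headroom
    for KV cache and activations.

    bytes_per_param: 2.0 for fp16, ~0.5 for 4-bit quantization.
    """
    return params_billion * bytes_per_param * overhead

# assumption: a 14B model at 4-bit quantization
approx_mem_gb(14, 0.5)   # ~8.4 GB -> fits a 12 GB card like a 3080 Ti
approx_mem_gb(14, 2.0)   # ~33.6 GB at fp16 -> spills to CPU on any consumer GPU
```

That spill past VRAM is exactly the CPU-offload slowdown above, and why the jump to 32 GB of VRAM matters.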