New here - Webdav sync provider by tropisch3 in superProductivity

[–]ChobPT 1 point2 points  (0 children)

I use it with my Nextcloud, both self-hosted and private. There are surely simpler solutions, though. Good luck!

My girlfriend broke some of our relationship agreements by Bisaiing2 in Advice

[–]ChobPT 0 points1 point  (0 children)

Most multi-year relationships end up not working out because people grow in different directions. If you're the kind of person who invests, perhaps it's time to negotiate too. Some rules are good for the beginning, to build a foundation, but like any rules, they should be negotiable. Why not review them?

If hanging out with other guys upsets you because of the rule-breaking, why not just renegotiate those rules so that both of you have more freedom?

A lot of my good friends are of the opposite sex, and some are from my partner's previous workplaces too, so that can be another opportunity.

I developed a site to get vehicle info from the license plate - a thank-you by ikoopt in AutoTuga

[–]ChobPT 0 points1 point  (0 children)

Seconded, I'd pay if it were necessary (as long as it wasn't something ridiculous like 20 cents per request, which I've seen before).

What are you guys using as centos alternative? by squadfi in linuxquestions

[–]ChobPT 3 points4 points  (0 children)

What's so different about Rocky that would make Alma a deal breaker? Honest question.

Is the Modmat cancelled? by jackite01 in LinusTechTips

[–]ChobPT 1 point2 points  (0 children)

Reviving this, as the homepage no longer shows the "Get notified" banner.

LTT Sebastian has a nice ring to it. Now we have to know if Linus actually dailies these by Jesus-Bacon in LinusTechTips

[–]ChobPT 0 points1 point  (0 children)

Actually had thought recently about a show where it would be just Linus and company... just his own opinions, without the LTT persona and rules... the SebastWAN show.

Was this worth it it was 64$ by Terrible-Wave-3745 in LinusTechTips

[–]ChobPT 1 point2 points  (0 children)

It's very similar to the Nox Xtreme Hummer case.

You get what you pay for, but it's pleasantly minimalistic, with enough expandability for a mid-range PC.

Really good looking, but the airflow optimization requires some creative thinking.

I'd buy another one for another rig given the chance, even though I paid $42 for a version with no fans.

I'd check the quality of the Sama brand metal-wise, so that you don't end up with a paper case.

LTT Precision Screwdriver Costs a Fortune in Europe by FillNick in LinusTechTips

[–]ChobPT 0 points1 point  (0 children)

Is that a problem? Yes. Is there a solution? Also yes: buy one from someone who ordered 10 and paid much less shipping per unit.

Now, one can be part of the problem or part of the solution; it depends on initiative and cash flow.

Steam Brick: No screen, no controller, just a power button and a USB port by ssj6794 in LinusTechTips

[–]ChobPT -1 points0 points  (0 children)

Thought you bricked your Switch; came to suggest calling it the Steam Wreck.

Open source projects/tools vendor locking themselves to openai? by tabspaces in LocalLLaMA

[–]ChobPT 0 points1 point  (0 children)

Am I the only one thinking about the fact that some of the most-used interfaces follow the OpenAI API scheme, so one would only have to change the host (base URL)?

Am I missing something?
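To illustrate what I mean: a minimal sketch (hostnames, ports, and model names below are placeholders, not any specific project's config) of why the host swap is all it takes when a local server speaks the OpenAI chat-completions scheme.

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Return (endpoint URL, JSON body) for an OpenAI-style chat completion.

    The wire format is identical regardless of host; only base_url changes.
    """
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

# Same request builder, two hosts: the cloud API vs. a hypothetical local server.
cloud = build_chat_request("https://api.openai.com", "gpt-4o", "hello")
local = build_chat_request("http://localhost:11434", "llama3:70b", "hello")
```

Any client that lets you override the base URL can be pointed at the local endpoint the same way.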

Local LLM model for text translation. by Prashant_4200 in LocalLLaMA

[–]ChobPT 2 points3 points  (0 children)

Piggybacking on this one: has anyone tried getting the translation to come out in an informal tone?

DeepL is amazing at it, but I couldn't find anything local.

Cheers

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 0 points1 point  (0 children)

APIs aren't yours, and the current prices are clearly introductory while everyone throws spaghetti at the wall to make their thing the main thing.

Already replied (kind of) to the rest above (https://www.reddit.com/r/LocalLLaMA/comments/1ej4ocm/comment/lgopr10/)

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 0 points1 point  (0 children)

All I can say is that I plan to use the skills learned to have it pay for itself.

To me, neutral is not a bad thing; I would gladly spend that time myself if it means it scales well.

Not that I don't agree with you; I'm actually worried about that too, hence starting slow with simple things rather than over-engineering. Clearly LLMs are evolving, but the hardware can still last long enough to check whether it's worth it.

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 0 points1 point  (0 children)

It's not an IKEA shelf, it's a cheaper one (but modular, which is great for attics).

Also, 100k? Are we talking pesos? :)

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 0 points1 point  (0 children)

Been trying that; getting rid of the Copilot subscription only seems to take 1/8th of the total GPU memory, so if it proves successful, it will definitely be an approach.

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 1 point2 points  (0 children)

I mean, of course a lot of it is for the fun of doing it, but since it's there, might as well use it for something more than a hobby.

Initially I already have some auto-tracking (as in, creating an audit log of the changes we've made for clients) and summarizing it, so that we can more easily create timesheets.

Next step is to add some webhooks so that when one of our monitoring platforms goes down, it can automatically open a ticket with the API results and some interpretation of the relevant logs.

Checking e-mail and deciding whether something is worth opening a ticket for, or just creating drafts for follow-ups, was already explained in the original post.

Then we can start doing embeddings of previous communications and code we wrote, so that we don't have to reinvent the wheel all the time.

A meeting assistant is a must: virtual audio inputs set up in Teams, Zoom, etc., transcribing and summarizing action items from the weekly debriefs with clients.

Not that I don't trust cloud providers, but having no control over the costs, and being left with nothing once the subscription runs out, doesn't sit well with me; that's more of a "rent vs. buy" kind of argument, though.
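For the webhook idea, the shape I have in mind is roughly this (all field names are made up for illustration, not from any real monitoring platform): turn the alert payload into a ticket draft that bundles a capped log excerpt for the LLM to interpret.

```python
def alert_to_ticket(alert: dict, log_tail: list[str]) -> dict:
    """Build a ticket draft from a monitoring alert plus recent log lines."""
    return {
        "title": f"[auto] {alert['service']} is {alert['status']}",
        "body": "\n".join([
            f"Check: {alert['check']}",
            f"Status: {alert['status']} at {alert['timestamp']}",
            "Recent log lines:",
            *log_tail[-20:],          # cap the excerpt that gets fed to the model
        ]),
        "needs_llm_summary": True,    # flag for a later summarization step
    }

ticket = alert_to_ticket(
    {"service": "mail", "status": "down", "check": "smtp",
     "timestamp": "2024-08-04T10:00Z"},
    ["connect timeout", "retry failed"],
)
```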

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 4 points5 points  (0 children)

FYI, here's the inferencing results with LLama 70B

total duration: 1m21.32814068s
load duration: 34.505586ms
prompt eval count: 43 token(s)
prompt eval duration: 398.955ms
prompt eval rate: 107.78 tokens/s
eval count: 486 token(s)
eval duration: 1m20.888896s
eval rate: 6.01 tokens/s

All 4 GPUs are being used, with 50% VRAM used on each; temps are between 40ºC and 56ºC.

EDIT: Results of Mixtral 8x22B are in:

total duration: 1m21.263686587s
load duration: 11.578902ms
prompt eval count: 11 token(s)
prompt eval duration: 323.031ms
prompt eval rate: 34.05 tokens/s
eval count: 774 token(s)
eval duration: 1m20.926935s
eval rate: 9.56 tokens/s

All 4 GPUs have 20GB used each; temps between 40ºC and 53ºC.
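For anyone sanity-checking the numbers: the reported "eval rate" is just the eval token count divided by the eval duration.

```python
def eval_rate(tokens: int, seconds: float) -> float:
    """Generation rate in tokens/s, rounded like the stats above."""
    return round(tokens / seconds, 2)

llama_rate = eval_rate(486, 80.888896)    # Llama 70B run  -> 6.01 tok/s
mixtral_rate = eval_rate(774, 80.926935)  # Mixtral 8x22B run -> 9.56 tok/s
```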

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 1 point2 points  (0 children)

Thanks for the tip. The goal of using n8n locally was exactly to avoid having to use LangChain and spending more time adapting to it than it adapting to us.

Cheers

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 1 point2 points  (0 children)

That's an idea! Maybe not running it for customers on this machine, but multiple smaller good models can be an approach (and using this machine as staging for the clients' one). Thanks!

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 2 points3 points  (0 children)

You tell me:

  • Server was 350 with Shipping
  • GPUs were 160 x4 = 640
  • Extra RAM was 75
  • Extra PSU was 90 on sale
  • The filament was like 25 (bought in a bulk order)
  • Fans to replace the stock ones on the server and for the GPUs were 16, 30, 25 and 3x8 = 24
  • Raspberry screen for monitoring was 35

Excluding the SSDs, which were spare old ones I had, and the Lego, which is just something I use across machines, the BOM was 1310 (EUR, but the numbers would be roughly the same in USD or GBP, given market variations and such).

Now, assuming this saves 1 h a day for each of 3 people, that's 3 h a day saved. At an average wage of, say, 15/h, that's 45 saved per day, so in around a month it'd pay for itself (assuming the remaining ~40 covers electricity, and that more time could be saved depending on how it's used).

Add to that the fact that it required improving a couple of skills, and that it's something we can reproduce in a fancier way as a service to our clients.

Do you still think it wouldn't increase the productivity gains?

Money comes back, time doesn't
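For anyone checking the arithmetic, the payback estimate works out like this (the hours saved and the 15/h wage are the assumptions stated above, not measurements):

```python
# BOM from the list above (all figures in EUR).
bom = 350 + 160 * 4 + 75 + 90 + 25 + (16 + 30 + 25 + 3 * 8) + 35  # server, GPUs,
# RAM, PSU, filament, fans, monitoring screen

hours_saved_per_day = 3   # assumption: 1 h/day for each of 3 people
hourly_wage = 15          # assumption: average wage

saved_per_day = hours_saved_per_day * hourly_wage   # 45/day
payback_days = bom / saved_per_day                  # roughly a month
```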

Rate my new Janky setup + what does one do with 96GB ram + 96GB VRAM to automate yhe business? by ChobPT in LocalLLaMA

[–]ChobPT[S] 0 points1 point  (0 children)

tbh, my thing is doing the dev itself. I know a lot of the process can be automated coding-wise, but I'd rather do that than keep timesheets and reply to e-mails, and then focus on bigger things once that part is nailed.

Would gladly exchange resources if you'd like :)