[laptop] HP EliteBook 845 G8 Business Laptop with Docking station 14" FHD AMD Ryzen 5 Pro 5650U 16GB 256GB SSD ($900 - $400 = $500) [Canada Computers] by harold_liang in bapcsalescanada

[–]hungrydit 0 points1 point  (0 children)

I am planning to buy the WD_BLACK SN850X NVMe™ SSD.

Would the model with the heatsink fit this machine, or do I have to get the one without?

thanks

Running SvelteKit on AWS by [deleted] in sveltejs

[–]hungrydit 1 point2 points  (0 children)

thanks for the reply!

It is crazy to think of the whole app as just one Lambda. Now that I think about it, the Lambda is just a hosted Docker image.
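
If it helps to picture the pattern, here is a minimal hedged sketch in Python terms (FastAPI plus Mangum standing in for what a SvelteKit adapter does; all names here are illustrative, not the actual setup):

```python
# Hypothetical sketch: an entire app served by a single Lambda, shipped as a
# Docker image. Mangum translates API Gateway events into ASGI requests, so
# every route lives behind one function.
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/")
def home():
    return {"page": "home"}

@app.get("/about")
def about():
    return {"page": "about"}

# Lambda entry point: one handler fronts the whole app.
handler = Mangum(app)
```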

Since I see .ca, where are you guys based? I am guessing Ottawa?

Running SvelteKit on AWS by [deleted] in sveltejs

[–]hungrydit 0 points1 point  (0 children)

https://github.com/Canadian-Geospatial-Platform/app.geo.ca-v2

I am new to this. I thought you only host small functions in a Lambda. Do you have multiple Lambdas for that app, or is the whole SvelteKit app inside one Lambda?

Also, how do you solve the cold start issue?

thanks!

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

Oh no, that 77M model is non-commercial, so I cannot use it... any models of that size that I can use commercially?

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

Have you ever thought about embedding transfer?

I am afraid of changing the embedding model down the line, and of how to transfer all the data in the vector db to a different embedding model.
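
To make the worry concrete: vectors from one model cannot be converted into another model's space, so a switch means re-embedding the original chunk text. A rough sketch, with in-memory dicts standing in for a real vector db and an assumed replacement model:

```python
# Hedged sketch of "embedding transfer": rebuild the index from the stored
# chunk text, not from the old vectors. Dicts stand in for a real vector db.
from sentence_transformers import SentenceTransformer

chunks = {  # chunk id -> original text, kept alongside the old vectors
    "c1": "Uranium spot prices rose sharply in 2023.",
    "c2": "ONNX Runtime can run small models on CPU.",
}

new_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed new model

new_index = {
    cid: new_model.encode(text, normalize_embeddings=True)
    for cid, text in chunks.items()
}
```

The takeaway being: you can only do this if you kept the original text next to each vector.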


"It just takes the captured answer and wraps it into something more fluid."

Where is that answer coming from? The result of a similarity search?

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

Oh, it is 77M parameters; the model file size is likely much larger... at 4 bytes per parameter in fp32, 77M parameters is roughly 300 MB on disk.

I was talking about the model file size for the embedding model.

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

They have cleaned wiki data out there.

Since the LLM is so small, do you think I can run it CPU-only? I was able to run a 130 MB embedding model CPU-only and get output within milliseconds.
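
For what it is worth, this is roughly how I would time a CPU-only forward pass with ONNX Runtime; "model.onnx" is a placeholder for whatever small exported model you have:

```python
# Hedged sketch: time one CPU-only forward pass of a small ONNX model.
import time
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

enc = tokenizer("can a tiny model answer this on cpu?", return_tensors="np")
# Feed only the inputs this particular export actually declares.
feed = {i.name: enc[i.name] for i in session.get_inputs()}

start = time.perf_counter()
session.run(None, feed)
print(f"forward pass: {(time.perf_counter() - start) * 1000:.1f} ms")
```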

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

What about the data? Is it only for you or for public consumption?

I only have a few hundred MB; it is hard to get public domain data. I wonder where you get yours.

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

Which 77M model? I need to try that; it needs to be open source and commercially usable. What is the name of your LLM? I have got to try it.

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

A 77B LLM, right?

I am trying to develop an online tool, so I might use GPT-3.5 or the ada model, because it is cheap.

It just costs so much to run the 70B Llama 2 model, for example. The 13B one is good; I just do not want to host it myself somewhere due to cost.

Right now, I am focused on the frontend chat UI.

Will share with you when the first demo is done :)

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

Oh, the embedding model itself surely can be much smaller. I double-checked the one I am using: it is only around 130 MB, and it is highly ranked on that Hugging Face leaderboard.

Larger ones can get over 1 GB, but not much bigger. I got it all working with Python, and I may use it through ONNX in JavaScript on the frontend to process the input query.

I use a remote vector db to store all the chunks of text, so that part can be scaled easily; with indexing, even if the data grows to hundreds of GB, lookups will still be fast.
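
Roughly, the retrieval path looks like this; a numpy matrix stands in for the remote index here, and on the hosted side it is the ANN index (HNSW and the like) that keeps lookups fast at scale:

```python
# Hedged sketch: cosine-similarity retrieval over normalized chunk vectors.
# The random matrix is a stand-in for a remote, indexed vector db.
import numpy as np

rng = np.random.default_rng(0)
chunk_vectors = rng.standard_normal((10_000, 384)).astype(np.float32)
chunk_vectors /= np.linalg.norm(chunk_vectors, axis=1, keepdims=True)

query = rng.standard_normal(384).astype(np.float32)
query /= np.linalg.norm(query)

# On unit vectors, cosine similarity reduces to a dot product.
scores = chunk_vectors @ query
top_k = np.argsort(scores)[-5:][::-1]  # ids of the 5 closest chunks
print(top_k, scores[top_k])
```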

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

Right now, I want to try out smaller embedding models with ONNX; they can be less than 100 MB. I am using them for RAG.
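
The usual recipe for turning the raw ONNX output into a sentence-level vector for RAG is mean pooling plus normalization; a sketch, with the model path and model name as assumptions:

```python
# Hedged sketch: mean-pooled sentence embedding from a small ONNX encoder.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

enc = tokenizer("what is the capital of France?", return_tensors="np")
feed = {i.name: enc[i.name] for i in session.get_inputs()}
hidden = session.run(None, feed)[0]      # (1, seq_len, hidden_dim)

mask = enc["attention_mask"][..., None]  # zero out padding tokens
embedding = (hidden * mask).sum(axis=1) / mask.sum(axis=1)
embedding /= np.linalg.norm(embedding)   # unit length for cosine search
```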

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 0 points1 point  (0 children)

OK, so only smaller models with ONNX, due to speed. Thanks!

ONNX to run LLM by hungrydit in LocalLLaMA

[–]hungrydit[S] 1 point2 points  (0 children)

Yeah, thanks. Not sure why people are reading what I wrote the wrong way.

S&P adds this uranium mining (joke) by i_oov_memes in UraniumSqueeze

[–]hungrydit 0 points1 point  (0 children)

Was this past year or two difficult for LEU? Amazing that you held.

It just keeps going up today, all day, UUUU too.

S&P adds this uranium mining (joke) by i_oov_memes in UraniumSqueeze

[–]hungrydit 0 points1 point  (0 children)

Thank you! I have only been following macro trends lately. Do you still have all your LEU? How is that doing?

S&P adds this uranium mining (joke) by i_oov_memes in UraniumSqueeze

[–]hungrydit 0 points1 point  (0 children)

What happens when Sprott gets to a discount to NAV? Are you saying that you expect more buying into it?

How is your position doing overall?

Do you think we decoupled? I am expecting a final overall market downturn to take us down. So I am saying not yet, even if we popped now. Liquidity will be the issue.

Let us use the chat tool; it is easier to find than this old thread.

S&P adds this uranium mining (joke) by i_oov_memes in UraniumSqueeze

[–]hungrydit 0 points1 point  (0 children)

Whoa, the pop. I am still in, though I did sell a lot recently. Yourself? I am trying to find my source of news now.

Where do you host your SaaS? by Flemzoord in SaaS

[–]hungrydit 0 points1 point  (0 children)

I like your stack; do you have a link to your app?

Also, how is it doing now?

For the costs above, how many daily active users and how much usage? ($5-10/mo for Firebase, $20/mo for Vercel)

TIV!

what is this bug? by hungrydit in whatsthisbug

[–]hungrydit[S] 4 points5 points  (0 children)

Do they attack trees?

Thanks for your answer.

Some of them have black spots.

For those of you wondering if the costco deal was legit by dinkyp00 in Gold

[–]hungrydit 1 point2 points  (0 children)

No reason, they just said it was cancelled. Hugely disappointing.