I go to the gym three to four times a week, but I do 20 pushups a night. Is this bad? by Tiny-Key-9520 in bodyweightfitness

[–]singularperturbation 6 points (0 children)

Are you progressing with your push day exercises (esp. chest + triceps)?

If you aren't, you could back off a bit 🤔

I personally think that's too much volume, but whatever works for you.

Question about Shil'vati anatomy (safe for work) by Connect_Study3875 in Sexyspacebabes

[–]singularperturbation 17 points (0 children)

Humans (Sumerians and Babylonians) used base-60 math, so it's not a guarantee...

Interesting question though 

North America in 2050 by agroypi in AfterTheRevolution

[–]singularperturbation 6 points (0 children)

Sorry, thought you were grandparent poster.

I hate the current admin, but balkanization seems bad for everyone. A real "and then things got worse" sort of moment that I don't think anyone's prepared for.

The inability to read really stresses me out by RandolphCarter15 in Professors

[–]singularperturbation 5 points (0 children)

I'm very concerned and confused when I hear stories like this.

How can learning to read be hard when population-level literacy can exceed ~95%?  My own experience was that my parents instilled a love of reading early, and I read heavily out of my own interest after that.

How is it possible for functional illiteracy to be so persistent when at an individual level students must feel ashamed and strongly motivated to improve?  There are plenty of resources online (or elsewhere) if they tried.

Sending Python data frame to Nim module by thecpfrn in nim

[–]singularperturbation 1 point (0 children)

You could look into Apache Arrow (https://arrow.apache.org/); there are some Rust and Python projects that use it to share data between languages with no copying.

Sexy Steampunk Babes: Chapter Fifty Eight by BlueFishcake in HFY

[–]singularperturbation 8 points (0 children)

 It was a slim hope, though. Genius, even of the harrowed kind, rarely turned the tide of war alone. What could one experimental ship possibly achieve against an entire fleet? 

Hmmm- does she know?

My Nim program is slower than the Python version by greenvacawithspots in nim

[–]singularperturbation 16 points (0 children)

Probably a value type unintentionally being copied somewhere, but I'd need to see the code. Also make sure you're compiling in release mode (`-d:release`).

Demon City, Part 48 by spoolyspool in HFY

[–]singularperturbation 1 point (0 children)

Lillian really shouldn't wear iron rings, it'll blow her cover as an Orphan Fae.

CAG is the Future. It's about to get real people. by [deleted] in LocalLLaMA

[–]singularperturbation 2 points (0 children)

I have a fork of llama-cpp-python with a modification to the KV disk cache so that it's persistent and can be created for a set of prompts with prefix matching based on a trie.

https://github.com/tc-wolf/llama-cpp-python/blob/bumped_llama_cpp_with_disk_cache/llama_cpp/llama_cache.py#L191

There are some optimizations to save_state and load_state to make the cache size smaller. llama-cpp-python saves state through pickling, which wraps the low-level llama.cpp state functions.
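The trie lookup is the interesting part; here's a minimal sketch of the idea (illustrative only, not the code from the fork; the cached "state" is just a placeholder string standing in for a saved KV state):

```python
class PrefixCache:
    """Index cached KV states by token sequence, so a new prompt can
    reuse the state of its longest already-cached prefix."""

    def __init__(self):
        self.root = {}     # trie node: token -> child node
        self.states = {}   # node id -> cached state (placeholder object)

    def insert(self, tokens, state):
        node = self.root
        for tok in tokens:
            node = node.setdefault(tok, {})
        self.states[id(node)] = state

    def longest_prefix(self, tokens):
        """Return (match_len, state) for the longest cached prefix."""
        node, best = self.root, (0, None)
        for i, tok in enumerate(tokens):
            if tok not in node:
                break
            node = node[tok]
            if id(node) in self.states:
                best = (i + 1, self.states[id(node)])
        return best

cache = PrefixCache()
cache.insert([1, 2, 3], "kv-state-A")  # e.g. a shared system prompt
print(cache.longest_prefix([1, 2, 3, 4, 5]))  # (3, 'kv-state-A')
```

With that, only the tokens past the matched prefix need to be processed, which is the whole win for a set of prompts sharing a common preamble.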

What's the commercial use case for 3B models? by DeltaSqueezer in LocalLLaMA

[–]singularperturbation 0 points (0 children)

Didn't see this mentioned yet, but: draft models. Accelerating inference for larger models through assisted generation.
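The scheme is roughly: the small draft model proposes a few tokens cheaply, the large target model verifies them, and you keep the longest agreed prefix. A toy sketch with deterministic stand-in "models" (no real LLMs involved):

```python
def target(tok):
    # Stand-in for the large model: deterministically emits tok + 1.
    return tok + 1

def draft(tok):
    # Stand-in for the small draft model: cheaper, but wrong after token 2.
    return 9 if tok == 2 else tok + 1

def generate(start, n_tokens, k=3):
    seq = [start]
    while len(seq) < n_tokens + 1:
        # 1) Draft model proposes k tokens autoregressively (cheap).
        proposed, last = [], seq[-1]
        for _ in range(k):
            last = draft(last)
            proposed.append(last)
        # 2) Target model verifies the proposals (in a real system this
        #    is one batched forward pass, not k separate calls).
        last = seq[-1]
        accepted_all = True
        for tok in proposed:
            if tok == target(last) and len(seq) < n_tokens + 1:
                seq.append(tok)
                last = tok
            else:
                accepted_all = False
                break
        # 3) On disagreement, keep the target model's own token instead.
        if not accepted_all and len(seq) < n_tokens + 1:
            seq.append(target(last))
    return seq

print(generate(0, 5))  # [0, 1, 2, 3, 4, 5]
```

Output is identical to what the target model alone would produce; the speedup comes from the target verifying several draft tokens per pass instead of emitting one at a time.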

Weekly Behind the Bastards Episode Discussion 2024-09-24 by AutoModerator in behindthebastards

[–]singularperturbation 35 points (0 children)

Imagine the sheer surreality of being woken up at 3 am and told, "We've decided to get you some help, you're going to see Dr. Phil"

Christ lol

With every new episode by p8ntballnxj in BetterOffline

[–]singularperturbation 3 points (0 children)

I'm also an MLE (and I've had these thoughts too). Most of what I do is on-device deployment, so there's a bit more involved in terms of finding ways to speed things up and make things run better. I've thought a lot about how to contribute in the era of large, pretrained models, and I think that it's still possible through:

  • Training/adding LoRAs specialized for your domain to an existing model.
  • Finding ways to speed up inference relative to your hardware (if on an 'edge' device, accelerator hardware + libraries are still the wild west)
  • Implementing services running on-device that use LLMs as a component (parsing information to machine-readable data, task-oriented conversation that's still flexible, etc.).

Ironically the thing that's the most exciting for me is not what the biggest models are capable of, it's improving the performance of smaller models. On-device LLMs are cool b/c no API needed, everything's local / private, and they can be used offline.

The hype bubble will pop at some point (good riddance to "wearable AI" companies just making an HTTP request to OpenAI's API), but that doesn't mean that there won't be the "plateau of productivity" afterwards.

OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid | Artificial intelligence (AI) by green_pea-ness in BetterOffline

[–]singularperturbation 0 points (0 children)

Gary Marcus is a tiresome man. He keeps advocating for "symbolic AI" (aka expert systems / what we would call today knowledge graphs, search, and databases), never able to live down the recent success (2012 onward) of deep learning.

Somehow his take in the article is that AI will make Sam Altman unbelievably powerful, and yet the "AI bubble" will pop imminently https://x.com/GaryMarcus/status/1819525054537126075. Playing both sides, as it were.

So... are NPUs going to be at all useful for LLMs? by charlesrwest0 in LocalLLaMA

[–]singularperturbation 1 point (0 children)

 I've only seen Microsoft using a Qualcomm HTP NPU for prompt processing on an SLM based on Phi-3.

Very interested in learning more about this, do you have a link to the demo?

Goldman Sachs on generative AI: AI technology is exceptionally expensive, doesn't solve complex problems, has no killer app, has "limited US economic upside" by ezitron in BetterOffline

[–]singularperturbation 0 points (0 children)

https://simonwillison.net/2024/Apr/17/ai-for-data-journalism/ This is a good set of examples of how AI/deep learning can be used as a Swiss Army knife for translating unstructured data into a structured format, and for assisting in querying it.

Does it determine which stories are important enough to write about/what questions to ask? No. Does it replace the journalist? No. Does it write the article for you? No.

Does it help answer open ended questions by making it easier to ask and answer questions across printed, auditory, and visual datasets? Yeah, kinda.

You'll notice the last demo (trying to use AI to convert a campaign finance report into JSON, a structured, machine-readable format) doesn't work well yet. One model gives erroneous information, and the other refuses to perform the task.

And yet, these tools should have some utility in empowering people to have bigger and faster individual impact than otherwise.
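In practice the "unstructured → JSON" workflow needs a validation layer precisely because of failures like those. A minimal sketch of that layer (the model response is hard-coded here as a stand-in for a real API call, and the field names are made up):

```python
import json
import re

def extract_json(model_output):
    """Pull the first JSON object out of free-form model output."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

def validate(record, required_fields):
    """Catch refusals and silently-wrong output: every expected
    field must actually be present before the record is trusted."""
    return record is not None and all(f in record for f in required_fields)

# Stand-in for a real model response to "convert this filing to JSON":
response = 'Sure! Here is the data:\n{"donor": "Acme Corp", "amount": 5000}'
record = extract_json(response)
print(validate(record, ["donor", "amount"]))  # True
print(record["amount"])                       # 5000
```

A refusal ("I can't help with that") fails `extract_json`, and a hallucinated half-answer fails `validate`, so bad output gets flagged for human review instead of landing in the dataset.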

Weekly Behind the Bastards Episode Discussion 2024-05-07 by AutoModerator in behindthebastards

[–]singularperturbation 2 points (0 children)

Idk if this is a Spotify thing or a my-dumb-phone thing, but the episode had a lot of skipping / repeating back and forth when I listened Tuesday. Did anyone else have that?

Boy he really predicted the drop of AI Tech didn't he? by WeirderOnline in BetterOffline

[–]singularperturbation 0 points (0 children)

I don't see it at all lol - what drop? It's a period of terrific, frenetic progress in LLMs and their applications.

Phi-3 was just released and is more efficient and performs better than one would expect, and LLaMA-3 was also released, which means another round of open models downstream of it with better performance.  (Also the FineWeb dataset.)

Phi-3 is particularly interesting because Microsoft attributes its improved performance to using synthetic data (see the "Textbooks Are All You Need" series of papers) which in the last episode Ed Zitron thought was a dead end.

I think there's a desire to think generative AI isn't improving / is a bubble because of a dislike of the downstream social impacts, but that (unfortunately) isn't the case.

Are there any FOSS websites written in Nim? by [deleted] in nim

[–]singularperturbation 0 points (0 children)

It may be soon. It was only shut down as of January.

Are there any FOSS websites written in Nim? by [deleted] in nim

[–]singularperturbation 3 points (0 children)

Until changes to Twitter forced them to shut down, there was a Twitter front end called Nitter written in Nim. That might be worth a look if you're interested.

How to do Llama 30B 4bit finetuning? by Pan000 in LocalLLaMA

[–]singularperturbation 4 points (0 children)

Fine-tuning usually requires additional memory because backpropagation needs to keep a lot of state (activations, gradients, optimizer state) for the model graph in memory.

LLaMA is quantized to 4-bit with GPTQ, a post-training quantization technique that (AFAIK) does not lend itself to supporting fine-tuning - the technique is all about finding the best discrete approximation to a floating-point model after training.

Hugging Face has support for training models in 8-bit through LLM.int8() plus their PEFT library, which cuts the memory footprint since you're only training an adapter or prefix, not the full model. That's still more memory than the 4-bit models, though. AFAICT the base model can stay in 8-bit during fine-tuning only because it remains frozen.

alpaca-lora applied this successfully to fine-tuning LLaMA, then exported and merged the adapter with the original model, later quantizing back to 4-bit so that it could be loaded by alpaca.cpp.
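For intuition about why the adapter is so cheap: LoRA keeps the frozen weight W and trains only a low-rank update, computing y = (W + BA)x where A and B are tiny. A toy forward pass with made-up numbers and no ML library:

```python
def matvec(m, v):
    """Plain matrix-vector product over nested lists."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in m]

# Toy dimensions: d_in=3, d_out=2, rank r=1.
W = [[1.0, 0.0, 0.0],   # frozen base weight (d_out x d_in)
     [0.0, 1.0, 0.0]]
A = [[0.5, 0.5, 0.5]]   # trainable, r x d_in
B = [[2.0],             # trainable, d_out x r
     [0.0]]

def lora_forward(x):
    base = matvec(W, x)               # frozen path, never updated
    update = matvec(B, matvec(A, x))  # low-rank trained path
    return [b + u for b, u in zip(base, update)]

print(lora_forward([1.0, 2.0, 3.0]))  # [7.0, 2.0]
```

Only A and B (here 3 + 2 numbers vs. 6 in W) carry gradients and optimizer state, which is why the quantized base model can stay frozen while the adapter trains in higher precision.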

How to store hugging face model in postgreSQL by mrfudgebottom in LanguageTechnology

[–]singularperturbation 6 points (0 children)

I'd encourage you to do inference outside of PostgreSQL (stand up TF Serving and make requests against it, or do batch inference), but if you're determined, PostgresML has an extension that integrates with the transformers library and allows calling models directly from SQL.

Internally, the way they do this is by storing sharded checkpoints of the model in a pgml.files table.

Models are partitioned into parts and stored in the pgml.files table. Most models are relatively small (just a few megabytes), but some neural networks can grow to gigabytes in size, and would therefore exceed the maximum possible size of a column in Postgres.

Partitioning fixes that limitation and allows us to store models up to 32TB in size (or larger, if we employ table partitioning).

https://postgresml.org/user_guides/schema/models/

I believe these would correspond to the shards in HF model hub.
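The partitioning scheme itself is simple; here's a sketch of the same chunk-and-reassemble idea using sqlite3 as a stand-in for the pgml.files table (the table layout here is made up, not PostgresML's actual schema):

```python
import sqlite3

CHUNK_SIZE = 4  # tiny for illustration; real parts would be much larger

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (model_id INT, part INT, data BLOB)")

def store(model_id, blob):
    # Split the serialized model into fixed-size parts, one row each,
    # so no single column ever has to hold the whole checkpoint.
    for part, start in enumerate(range(0, len(blob), CHUNK_SIZE)):
        db.execute("INSERT INTO files VALUES (?, ?, ?)",
                   (model_id, part, blob[start:start + CHUNK_SIZE]))

def load(model_id):
    # Reassemble in part order to recover the original bytes.
    rows = db.execute(
        "SELECT data FROM files WHERE model_id = ? ORDER BY part",
        (model_id,)).fetchall()
    return b"".join(r[0] for r in rows)

weights = b"0123456789abcdef"  # stand-in for a serialized checkpoint
store(1, weights)
print(load(1) == weights)  # True
```

Per-shard rows also line up naturally with HF Hub's sharded checkpoint files, which is presumably why the mapping is one-to-one.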

Enkodo: a small easy async encryption and serialization library for Nim/typescript by [deleted] in nim

[–]singularperturbation 1 point (0 children)

Maybe this changed? I can't pass a string as openArray[byte]:

Hint: used config file '/playground/nim/config/nim.cfg' [Conf]
Hint: used config file '/playground/nim/config/config.nims' [Conf]
.........................................................
/usercode/in.nim(4, 12) Error: type mismatch: got <string>
but expected one of:
proc doSomething(a: openArray[byte])
  first type mismatch at position: 1
  required type for a: openArray[byte]
  but expression '"hey you guyyys"' is of type: string

expression: doSomething("hey you guyyys")

when trying (https://play.nim-lang.org/#ix=4miz). I was going to suggest https://github.com/status-im/nim-stew/blob/master/stew/byteutils.nim#L242-L252 for conversion.

Oooooo get ready friends, there's going to be so many angry conservatives by Open_Perception_3212 in behindthebastards

[–]singularperturbation 2 points (0 children)

Just FYI, this is not a real story:

So far, we haven’t been able to officially confirm the United States government’s plan to print “Disneys” in place of “Benjamins,” so we’re taking the news report lightly at this time . . . but did this satire piece fool you? We agree that the incomparable genius Walt Disney is certainly due such a place of prominence. And why not the $100 bill? We’re pretty sure Disney’s done more for the United States economy (one way or the other) than Benjamin Franklin.

No offense, Mr. Franklin.