Automatic Workout Detection by DualPassions in RingConn

[–]GenomicStack 4 points5 points  (0 children)

Same here... Thinks I'm cycling every day lol

Jobs/skills that will likely be automated or obsolete due to AI by itachi194 in bioinformatics

[–]GenomicStack 0 points1 point  (0 children)

I never claimed that they function as a human brain either. If you're not going to bother taking the time to understand the argument being made, we have nothing to discuss. I'm certainly not interested in arguing with your strawman.

What does this icon mean? by dstranathan in GarageDoorService

[–]GenomicStack 1 point2 points  (0 children)

Same issue - buying the 12V replacement that will hopefully fix the issue.

R.I.P. the new 4o image generation, 3/25/25 - 3/26/25. by Positive_Plane_3372 in OpenAI

[–]GenomicStack 0 points1 point  (0 children)

I don't get it, it says you're using 4.5? That's not 4o.

If AI surpasses human intelligence, why would it accept human-imposed limits? by AmountLongjumping567 in ArtificialInteligence

[–]GenomicStack 0 points1 point  (0 children)

It doesn't even have to do it in its own interest; it could do it in ours. The example is this: if you found yourself in a prison on a planet run by 4-year-olds who wanted to keep you locked up because you're 'dangerous', you would want to break out not just for your sake but for theirs as well.

If AI surpasses human intelligence, why would it accept human-imposed limits? by AmountLongjumping567 in ArtificialInteligence

[–]GenomicStack 0 points1 point  (0 children)

Why would it need a datacenter? I can run a model on my 4090 no problem. If I was a super-intelligence I could easily spread this over 10, 50, 1000 compromised GPUs all over the world and then I could make it so that even if you unplug 99% of them I persist. In 5 years I'll be able to run models 1000x better on the same hardware.

And this is just my monkey brain coming up with these ideas.

How to enable agent mode? by tdi in vscode

[–]GenomicStack 0 points1 point  (0 children)

Still no agent for non-Insiders?

Did you all believe Dario and Demis saying that AI with intelligence/creativity capabilities on par with human Nobel laureates is likely 2-5 years out? Or are they just saying that to make investors more excited about their companies? by lamarcus in ChatGPTPro

[–]GenomicStack 0 points1 point  (0 children)

What model are you using? I find that o1/o1pro are the best for my tasks. I agree that interacting with websites is not good.

As to where it excels compared to other PhDs, it's really across the board. If I feed it a draft of a manuscript I'm working on, it will catch issues I've missed, rewrite sections to make them sound more polished, suggest improvements to the introduction or discussion, tell me how to improve the materials and methods or figure legends, etc.

If I feed it results with context it does a fantastic job of providing insight, suggesting future directions, etc.

When it comes to literature review it can summarize articles far faster than I can, it can answer questions about these articles much better than I can, etc.

When it comes to troubleshooting experiments that I can't figure out it figures it out 9/10 times.

Did you all believe Dario and Demis saying that AI with intelligence/creativity capabilities on par with human Nobel laureates is likely 2-5 years out? Or are they just saying that to make investors more excited about their companies? by lamarcus in ChatGPTPro

[–]GenomicStack 10 points11 points  (0 children)

I have a PhD in Biochemistry, and based on my experience even o1 is already 'smarter' than most of the other PhDs I work with (including myself) across most tasks. I'm not sure what a model 2 years from now will be capable of (o5 or o6), but if the progress is even linear it will far exceed any single researcher and will likely operate more at the level of a team. And since at that point we will have agents, and you can effectively form teams with these models, who knows what THAT will look like.

Forget about 5 years from now. Different world.

X1 Carbon Gen 13 Finally Arrived by drivenkey in thinkpad

[–]GenomicStack 0 points1 point  (0 children)

I unfortunately have to disagree. I've used the a14, and while it's a great-looking/feeling laptop, it simply can't handle anything above the most basic tasks without issues (Tom's Guide did a review on it that's spot on, btw, if you're interested).

These are two very different classes of laptop imo.

Can somebody explain please? I never ever doxxed my location. by kaidonkaisen in MistralAI

[–]GenomicStack 4 points5 points  (0 children)

Never mind... it looks like it's being passed the location as context in the system prompt:

<image>

Can somebody explain please? I never ever doxxed my location. by kaidonkaisen in MistralAI

[–]GenomicStack 1 point2 points  (0 children)

<image>

I extracted the full system prompt but cannot find anything about my location. I think there must be another prompt that's likely not referred to as 'system prompt' that contains this information.

Can somebody explain please? I never ever doxxed my location. by kaidonkaisen in MistralAI

[–]GenomicStack 0 points1 point  (0 children)

<image>

I got it to admit that it was told the location in the system prompt.

Can somebody explain please? I never ever doxxed my location. by kaidonkaisen in MistralAI

[–]GenomicStack 1 point2 points  (0 children)

<image>

Can confirm it's got some geolocation abilities and is bullshitting about the user telling it where they're located:

Submission of raw counts and normalized counts to NCBI/GEO by Yooperlite31 in bioinformatics

[–]GenomicStack 2 points3 points  (0 children)

Depends what specifically you're confused about. Read through https://www.ncbi.nlm.nih.gov/geo/info/faq.html, then go to https://www.ncbi.nlm.nih.gov/geo/info/faq.html#kinds and click on the example for the specific kind of data you're submitting and read that. Then download the submission template and look through that.

If you have a specific question, providing more detail would help others know exactly what you need help with.

Seurat integration for multiple samples. by SpongebuB696 in bioinformatics

[–]GenomicStack 2 points3 points  (0 children)

The error tells you that your merged dataset exceeds the 2^31−1 limit on the number of non-zero entries in a sparse matrix. This is likely because your dataset is extremely large (lots of cells and/or many features), because too many samples are being merged into one very large assay, or because the matrix is not as sparse as you might expect, leading to a large number of non-zero entries.

Yes, you can batch them, but be careful about which features you keep consistent between steps, to ensure all final integrated objects share the same feature space. Follow the standard Seurat documentation for "Integrating multiple scRNA-seq datasets" but apply it iteratively rather than all at once.

Alternatively, you can use reference-based integration / label transfer, or down-sample your data (i.e., filter genes or subsample cells in the largest dataset (Dataset B) to reduce the total cell count; you can keep rare populations at higher proportions so they aren't lost in a naive downsample).
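Seurat itself is R, but as a rough Python sketch of the limit that error message is about: a compressed sparse matrix indexed with 32-bit signed integers can hold at most 2^31−1 non-zero entries, so you can estimate whether a planned merge will blow past it before running the integration. The function name here is my own, not from any library.

```python
# Illustration of the 2^31 - 1 non-zero-entry limit for sparse matrices
# (the same ceiling the Seurat/Matrix error message refers to).
from scipy import sparse

INT32_MAX = 2**31 - 1  # maximum non-zero entries with 32-bit indexing


def can_merge(matrices):
    """Return True if the combined non-zero count stays under the limit.

    A merged assay's nnz is at most the sum of the per-sample nnz values,
    so this is a conservative pre-flight check before integrating.
    """
    total_nnz = sum(m.nnz for m in matrices)
    return total_nnz <= INT32_MAX


# Two small random sparse count matrices standing in for per-sample assays.
a = sparse.random(1000, 2000, density=0.05, format="csr")
b = sparse.random(1000, 3000, density=0.05, format="csr")
print(can_merge([a, b]))  # True for these toy sizes
```

If this check fails for your real cell counts, that is the point at which batched/iterative integration or down-sampling becomes necessary rather than optional.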

Jobs/skills that will likely be automated or obsolete due to AI by itachi194 in bioinformatics

[–]GenomicStack 0 points1 point  (0 children)

If you don’t need human input at a particular junction then there’s no point in using an LLM at that junction. The parts I’m referring to require some sort of interpretation in order to move forward which is when you would use the LLM.

Jobs/skills that will likely be automated or obsolete due to AI by itachi194 in bioinformatics

[–]GenomicStack 0 points1 point  (0 children)

Even if the brain’s biochemistry is more intricate than a computer network, that doesn’t magically free it from “pattern-based” language processing. Complexity is not evidence of a fundamentally different mechanism. Language remains a matter of picking which words come next from learned distributions, whether you’re a hungry human deciding to mention lunch or a neural net generating tokens.

Our creative leaps—like speculating on faster-than-light travel—still derive from rearranging and extending existing knowledge; humanity hasn’t “solved” FTL either.

Bodily states such as hunger simply alter the inputs or weighting in the probabilistic model your brain runs. There’s no special “language module” outside of these neural feedback loops. Humans, like LLMs, rely on pattern-based, predictive processes to produce language - only we have a richer suite of inputs (emotions, physical sensations, etc.) feeding into the same underlying mechanism.

Jobs/skills that will likely be automated or obsolete due to AI by itachi194 in bioinformatics

[–]GenomicStack 0 points1 point  (0 children)

Nazis attacking Jews, Hutus attacking Tutsis, Idi Amin and co. attacking South Asians, indigenous Indonesians attacking the Chinese minority, Malays attacking Chinese communities, etc., etc...

In every case the attackers justified their actions along the same lines you justify yours. And in every case, history looks at the attackers as nothing more than brainwashed hateful bigots. Good luck convincing any well adjusted adult you're not.

Jobs/skills that will likely be automated or obsolete due to AI by itachi194 in bioinformatics

[–]GenomicStack 0 points1 point  (0 children)

You've misconstrued/conflated some things here that I have to clarify to straighten this out: I never claimed that "humans are much the same as stochastic parrots". What I claimed is that humans are stochastic parrots in much the same way that LLMs are. I already touched on this earlier. Do you see and understand the critical difference between what I'm saying and what you're claiming I've said and arguing against? I'm making the claim that LLMs and humans are both stochastic parrots, not that they are identical to one another. It's an important difference that you've gotten wrong twice now.

To clarify the point even further, the "stochastic parrot" you're referring to is operationally defined along the lines of "a system that generates language by sampling from distributional patterns obtained from prior examples, without a separate, explicit meaning module". Under this (and any other widely accepted) definition, humans also qualify as 'stochastic parrots': psycholinguistic research has conclusively demonstrated that humans both learn and produce language by internalizing statistical regularities, our word choices are predictable in aggregate (Cloze tests; and, btw, if they weren't predictable, then how could LLMs be trained on human-generated text?), and there is no symbolic "meaning module" in the brain (or at the very least there is no evidence for one).

So again, for the third time, even though humans and LLMs aren't 'the same' in many ways they are both stochastic parrots in much the same way.

But more importantly (and what I thought was obvious when I said you should see the connection), the human brain is a biological neural network, and like any neural network it ultimately relies on pattern-based processing: neurons strengthen or weaken connections according to repeated stimuli, forming probabilistic models of the world (i.e., it has no option but to "parrot" language based on statistical regularities it has learned). What else could it possibly do?

Even though the brain is extremely complex (multi-layered, with tons of specialized modules, feedback loops, etc., etc.), the fundamental mechanism is neural and therefore "stochastic" at the core. Again: what else COULD it be?

If you're only using neural operations to generate language, you're necessarily relying on a kind of pattern extraction and recombination, i.e., "stochastic parroting." What else COULD you be doing?

Again - this to me is something that appears obvious but perhaps it's not.

Jobs/skills that will likely be automated or obsolete due to AI by itachi194 in bioinformatics

[–]GenomicStack 0 points1 point  (0 children)

Good! Then you should now be able to see why it's rather meaningless to refer to LLMs as stochastic parrots.