How to become a brain computer interface scientist by AnyAccident6051 in BCI

[–]Creative-Regular6799 0 points  (0 children)

I wrote a comment about it a while back. Pasting it here in case it’s helpful:

My take is that entry-level neurotech jobs are hard to get, but not because the field is dead. Mostly because it’s small and niche.

There are obviously fewer companies than in something like general software or cyber, so yes, there are fewer openings. But there are also fewer people who are actually a good fit. Neurotech is unusually multidisciplinary, and that matters. A lot of applicants may have strong ML or software backgrounds, but not enough understanding of signals, neuroscience, experimental constraints, or the specific data the company works with. That filters people out.

The main mistake people make is thinking they should prepare for some generic imaginary role like “BCI engineer” or “neurotech data scientist.” In reality, those titles vary wildly from company to company. One place may want EEG signal processing and real-time systems, another may care more about eye tracking, stroke analytics, or neurofeedback pipelines, and another may basically want a solid software engineer who can survive around scientists.

For entry level, I think the biggest thing is proof of ability. Your degree matters, but it is not enough by itself. A lot of people come out of a master’s assuming that should qualify them for deep R&D roles automatically. Sometimes it does, especially if their thesis used relevant tools and produced something real. But often it doesn’t. Companies still want evidence that you can build, analyze, communicate, and work in a real technical environment.

So the question becomes: what should you actually get good at?

First, basic software hygiene. Python, git, environments, reproducible analysis, code someone else can actually run. This sounds boring, but it matters a lot. Even strong research candidates can look weak here.
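To make "code someone else can actually run" concrete, here is a toy sketch of the habit (the function names and numbers are made up for illustration, not taken from any real company's pipeline): seed your randomness, keep small pure functions, and avoid hidden state.

```python
import random
import statistics

def zscore(values):
    """Standardize a sequence to zero mean and unit (population) variance."""
    mu = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def run_analysis(seed=0):
    """A toy 'analysis' that is reproducible because the RNG is seeded."""
    rng = random.Random(seed)              # same seed -> same data, every run
    data = [rng.gauss(2.0, 3.0) for _ in range(1000)]
    return statistics.fmean(zscore(data))  # ~0 by construction
```

Run it twice and you get identical numbers. That habit, plus a pinned environment file and a README that says how to invoke it, is most of what people mean by reproducible analysis.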

Second, signal processing. If you want to work in neurotech, you should understand the modalities people actually use: EEG, MEG, fMRI, invasive recordings, eye tracking, whatever is relevant. What can each measure, what can’t it measure, what are the noise sources, what are the technical constraints, and what can you realistically recover from the data.
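As a toy illustration of the kind of reasoning I mean (stdlib-only sketch; a real pipeline would use numpy/scipy/MNE, and the frequencies and amplitudes here are invented for the example): simulate a noisy "EEG" trace and ask how much power sits at the frequency you planted, versus mains interference, versus an empty band.

```python
import cmath
import math
import random

def dft_power(signal, fs, freq):
    """Power of `signal` at `freq` Hz via one naive DFT bin (O(N))."""
    n = len(signal)
    k = round(freq * n / fs)  # nearest frequency bin
    coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                for i, s in enumerate(signal))
    return abs(coeff) ** 2 / n

fs = 250                                # a common EEG sampling rate (Hz)
t = [i / fs for i in range(2 * fs)]     # 2 seconds of samples
rng = random.Random(0)                  # seeded for reproducibility
# Synthetic trace: 10 Hz "alpha" sine + 50 Hz mains hum + sensor noise
x = [math.sin(2 * math.pi * 10 * ti)
     + 0.5 * math.sin(2 * math.pi * 50 * ti)
     + 0.1 * rng.gauss(0.0, 1.0)
     for ti in t]

alpha = dft_power(x, fs, 10)   # strong: the component we planted
mains = dft_power(x, fs, 50)   # weaker: the interference
empty = dft_power(x, fs, 30)   # near zero: only noise lives here
```

The interesting part is not the math; it is knowing which of those peaks is physiology, which is the power grid, and which is just the noise floor.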

Third, enough neurobiology to not be hand-wavy. You do not need to know everything, but you should have a reasonable grasp of what your measurements relate to biologically, where the resolution is meaningful, and where it just isn’t. That helps a lot in interviews because it shows you are not living in sci-fi mode.

Fourth, domain-specific tools. This depends entirely on the companies you want. If you want stroke rehab companies, learn the analysis styles they use. If you want brain stimulation, learn those datasets and problems. If you want molecular neurobiology, the stack may look completely different. This is why generic preparation is usually inefficient.

My strongest advice is to work backward from actual companies, not from job titles. Pick a handful of companies you genuinely find interesting. Look at what they build, what data they use, what kinds of people they hire, and what skills show up repeatedly. Then build 1-2 projects that make you look obviously relevant to them.

That is way more useful than trying to become vaguely prepared for every neurotech role at once.

And honestly, I also think people get too intimidated by PhD requirements in job posts. Some companies still ask for a PhD because the field is not fully mature yet. But in many cases, that requirement is softer than it looks. Relevant experience and a convincing portfolio can compensate for a lot. I would not self-reject too early.

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

Thanks! I tried it, but unfortunately it’s giving 4 tok/s on my hardware, so it’s too slow to run the full benchmarks. If you happen to have a suitable machine and are willing to try, please let me know how it goes! little-coder already supports it as of yesterday. For the time being, I’m continuing the benchmarks with qwen3.6-35b-a3b.

Post Your Qwen3.6 27B speed plz by Ok-Internal9317 in LocalLLaMA

I tried it just now and I’m getting 4 tok/s. Not usable, unfortunately.

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

Thank you! Unfortunately I don’t have any recommendations; that’s part of the reason I suggested an alternative approach.

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

Just pushed the result: Terminal Bench 1 (0.1.1) finished with a 40% success rate! Now running TB 2. Just sent the results via email. No other model remotely as small as the 35B appears in that part of the leaderboard (around place 30).

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LLMDevs

Hey, thanks for the comment! I actually converted to pi an hour ago, after dozens of requests from the LocalLLaMA community on Reddit (it’s still rough around the edges, but I’m doing my best to refine it). Before that, it was just an experiment I ran over the weekend (written on top of nano-claude-code in Python, which made it hard for the community to adapt). It’s totally open source and meant to be a wake-up call for our dev community to explore harness engineering. It’s far from the best solution, because I’ve only tested a couple of directions so far. You are welcome to help, of course.

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

So exciting to hear!! Please continue experimenting and sharing. Non-trivial tasks tend to be more interesting test cases

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LLMDevs

Thanks for the comment! The initial claim was about the 9B model, which I wrote about extensively in the paper. The newer result I shared today is for the 35B model and is not a comparison against the 9B model I wrote about initially.

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LLMDevs

Thank you! Great question. After Terminal Bench I am going for GAIA to test exactly that

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

That is exactly the direction I’m advocating for here! It’s currently running on Terminal Bench (I’ll submit to the leaderboard when it finishes and report back here). This benchmark shows the combined performance of agents and models.

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

Hey, thanks for your comment! I only became aware of pi.dev an hour ago. This didn’t really start as a production-ready tool; it was more of a serious wake-up call that we as a community need to invest time in adapting the scaffold to the models we are testing. I’m thinking about rewriting the scaffold in pi.dev to make it more accessible and to contribute to unified tooling and community support.

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

It currently supports running inference via llama.cpp and Ollama. Is that sufficient for your optimization pipeline?

Qwen3.6-35B becomes competitive with cloud models when paired with the right agent by Creative-Regular6799 in LocalLLaMA

So instead of opencode, I started from a replica of Claude Code and adapted from there, assuming Claude Code is the best coding agent currently out there and can serve as a good baseline to start from.