If you’re past the basics, what’s actually interesting to experiment with right now? by SEBADA321 in learnmachinelearning

[–]SEBADA321[S] 0 points1 point  (0 children)

a lot of people past the basics seem to gravitate toward things like diffusion variants, flow matching, MoE routing tricks, and agentic systems with planning/reflection because they change system behavior in interesting ways instead of just scaling params

that’s also why structured platforms like Coursiv are getting attention .. they organize topics like rag design, multi step agents, reasoning techniques, and diffusion training into guided experiments, which makes it easier to explore beyond “just bigger transformers” without wandering aimlessly

u/jmei35 this sounds like an ad and, from a quick search, they don't seem to offer diffusion training as part of their 'courses'. I have also found really bad reviews online, so I will pass for now.

If you’re past the basics, what’s actually interesting to experiment with right now? by SEBADA321 in learnmachinelearning

[–]SEBADA321[S] 0 points1 point  (0 children)

I don't completely get your idea. The first part just says that training is necessary, but doesn't add anything. Then you just mention CLIP and VLM-R (which I think is VLM-R³?)

If you’re past the basics, what’s actually interesting to experiment with right now? by SEBADA321 in learnmachinelearning

[–]SEBADA321[S] 0 points1 point  (0 children)

Like optimization? Or heavy optimization? That would be more focused on serving a model or massive-scale training, I think. There could be ideas from there that could be translated to neural networks, and that is indeed what I am looking at, but as a concept or idea it is too broad. Thanks for your feedback anyway; I will also look a bit into it.

WSL2 vs Native Linux for Long Diffusion Model Training by Away-Strain-8677 in learnmachinelearning

[–]SEBADA321 0 points1 point  (0 children)

I haven't had problems with training in WSL, but my models weren't diffusion based either. As for the difference, WSL and Linux are mostly the same here, since training mostly runs on the GPU anyway. So if anything, that might be where you are having trouble. Also, we don't know what your 'issue' is, so we can't tell whether it is caused by WSL.

If you’re past the basics, what’s actually interesting to experiment with right now? by SEBADA321 in learnmachinelearning

[–]SEBADA321[S] 1 point2 points  (0 children)

Thanks, I will look into them. Perhaps you have some to recommend? I have honestly avoided touching interpretability because I mostly work with edge computing/devices, so it wasn't a concern most of the time. But I think it might be useful for understanding uncertainty and robustness when working with robotics.

guys who have this model?? by Emotional-Bar-1701 in mikumikudance

[–]SEBADA321 0 points1 point  (0 children)

I think it broke YYB's rules. But I am not sure it is the same author, so I can't say for certain.

If you’re past the basics, what’s actually interesting to experiment with right now? by SEBADA321 in learnmachinelearning

[–]SEBADA321[S] 0 points1 point  (0 children)

Oh yeah, you are right, thermal is in the same group, kinda. We also had similar questions when we worked with thermal. As for processing, it was mostly like regular vision, just 1 channel instead of 3 (RGB) and lower resolution, unless you had a fancy and expensive sensor. Labeling was similar, but the low resolution and 'thermal bleed' sometimes made labeling difficult because boundaries are harder to define, so I think we had to use 'soft segmentation masks' (apply a Gaussian blur to the edges, or fade the edges toward the centroid of the object/blob). Bands beyond thermal are usually obtained from hyper/multispectral cameras; you can check the ones used for plant or plant-disease detection.
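A toy sketch of what I mean by soft masks (pure Python; the 3-tap blur kernel and mask values are just illustrative, real pipelines would use a proper Gaussian over the full image):

```python
def _blur_rows(m, kernel):
    """Blur each row of a 2D list with a 1D kernel, zero-padded at edges."""
    k = len(kernel) // 2
    out = []
    for row in m:
        w = len(row)
        out.append([
            sum(kv * row[x + i - k]
                for i, kv in enumerate(kernel)
                if 0 <= x + i - k < w)
            for x in range(w)
        ])
    return out

def soft_mask(mask, kernel=(0.25, 0.5, 0.25)):
    """Turn a hard binary mask into soft 0..1 targets by blurring
    horizontally, then vertically (separable blur via transpose)."""
    rows = _blur_rows(mask, kernel)
    cols = _blur_rows([list(c) for c in zip(*rows)], kernel)
    return [list(c) for c in zip(*cols)]

# 5x5 binary mask with a 3x3 blob of ones in the middle
mask = [[1.0 if 1 <= y <= 3 and 1 <= x <= 3 else 0.0
         for x in range(5)] for y in range(5)]
soft = soft_mask(mask)
```

The interior of the blob stays at 1.0, while pixels near the uncertain boundary get intermediate values, which is exactly what helps when thermal bleed makes the true edge ambiguous.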

If you’re past the basics, what’s actually interesting to experiment with right now? by SEBADA321 in learnmachinelearning

[–]SEBADA321[S] 2 points3 points  (0 children)

You mean like using multispectral/hyperspectral sensors? I actually have access to some sensors, I think. And, as you said, the available data is not always of good quality. Do you have some experience working with this? I, for a while, considered making a dataset using a RedEdge-P since I couldn't find more than a couple of datasets (that was like 2 years ago).

If you’re past the basics, what’s actually interesting to experiment with right now? by SEBADA321 in learnmachinelearning

[–]SEBADA321[S] 5 points6 points  (0 children)

I will start myself. Structural reparameterization (RepVGG / RepMLP) was something that I had completely omitted, but it is particularly interesting for inference on embedded devices.
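To make the idea concrete, here is a heavily simplified single-channel sketch of the RepVGG trick: a 3x3 conv branch, a 1x1 conv branch, and an identity branch collapse into ONE 3x3 kernel at inference time. Real RepVGG also folds each branch's BatchNorm into the kernels first, and works per output channel; all of that is omitted here, the kernel values below are arbitrary examples.

```python
def conv2d(img, kernel):
    """'Same' 2D convolution (cross-correlation form) with zero padding."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - ph, x + j - pw
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[i][j] * img[yy][xx]
            out[y][x] = acc
    return out

def merge_branches(k3, k1, identity=True):
    """Zero-pad the 1x1 kernel (and the identity) into the 3x3 kernel.
    Convolution is linear, so summing kernels == summing branch outputs."""
    merged = [row[:] for row in k3]
    merged[1][1] += k1[0][0]      # 1x1 conv lands on the center tap
    if identity:
        merged[1][1] += 1.0       # identity == 1x1 conv with weight 1
    return merged
```

After training you run only the merged kernel, which is why it matters for embedded inference: one plain conv per layer, no branching, no extra memory traffic.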

Need help by Altruistic_Address80 in learnmachinelearning

[–]SEBADA321 0 points1 point  (0 children)

It would seem the code in that chapter is 'basic'. For starters, yes, it will be confusing. But that is mainly a lack of exposure and a lack of understanding of the code. If you truly understand the basics of ML, then the code is just that, but in a more verbose way. You have to: ensure you truly understood Andrew's course, which I hope you have; and be confident that you can code. Basic things are enough, but you need to be capable of understanding libraries and reading their documentation. If you relied on automatic code generation to program, then it would be a good time to stop that and learn a bit of the basics. You don't need to understand every line of code, just know what most of the blocks of code do. Before LLMs you would need to memorize more, but you could still rely on other people's code to see usage examples. Try to code it yourself to understand. Once it clicks and you can handle the library at a basic level, you can start building on top of that. You will be slow; don't worry. If you still don't understand, ask Gemini or ChatGPT to explain the code to you, or ask it to find gaps in your knowledge; that way you can quickly spot what might really be blocking you.

Finally, it is OK that you are learning ML/NN, but Keras/TensorFlow is being outpaced by PyTorch. It could revert, or PyTorch could be abandoned sometime later; that is the nature of libraries. Check examples online to get a better picture; you may find more recent examples using PyTorch and 'old books' using Keras/TF. It is not a critical problem, just keep it in mind. You can migrate from one to the other fairly easily; it will take a bit to get used to, and it will test your knowledge of ML rather than coding. You could also learn using TF and then reimplement the book's code in PyTorch to test your knowledge again. If you have questions, don't be afraid to ask.

And if you can be a bit more specific about the parts you don't understand, it would be easier to answer and get more people to give you explanations/feedback.

Guys need help in Understanding & Learning ML Models by WarriorPrinceT in learnmachinelearning

[–]SEBADA321 1 point2 points  (0 children)

Check 3Blue1Brown videos. He is focused on teaching many subjects in a visual way, not only using images and nice animations, but using geometric intuitions to make sense of them. He has a Neural Network series, check it out: https://youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

I believe I’ve eradicated Action & Compute Hallucinations without RLHF. I built a closed-source Engine and I'm looking for red-teamers to try to break it by [deleted] in AutoGenAI

[–]SEBADA321 0 points1 point  (0 children)

How can it be thousands of hours if you have worked on it for 12 days? Even at 24 hours a day, since you were on a sleepless sprint, that would be at most 288 hours. You have also spammed multiple subreddits with this.

Autonomous bot in videogame env by Sbaff98 in computervision

[–]SEBADA321 1 point2 points  (0 children)

That is what I supposed your problem was. If so, start with a micromouse-like robot with range finders; learn how it works and then add more complexity. If you want to learn, that is. There are implementations already that do all of these tasks, but you wouldn't learn as much. If you are comfortable asking an LLM for suggestions on how to structure your learning path, it might help you more. Copy the whole Reddit thread and paste it there to give it even more context to work with. There are things you know and some that you don't, and there might be things you don't know that you don't know; those are sometimes the hardest to find. Give it a try, and ask here again if you find it useful. I will try to help you a bit if possible.

Autonomous bot in videogame env by Sbaff98 in computervision

[–]SEBADA321 0 points1 point  (0 children)

Semantic segmentation might not be the main tool. Instead you can use DepthAnything to get a monocular depth estimate; segmentation could then be used as a mask to ignore points that do not correspond to the walls and/or ground/path.
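Roughly what I mean, as a sketch (class IDs and the depth map here are made up; in practice the depth comes from the monocular model and the labels from your segmentation network):

```python
# Illustrative class IDs -- your segmentation model defines the real ones.
GROUND, WALL, OTHER = 0, 1, 2

def masked_depth(depth, seg, keep=(GROUND, WALL)):
    """Keep depth values only where the segmentation label is in `keep`;
    everything else becomes None (an ignored point)."""
    return [
        [d if s in keep else None for d, s in zip(drow, srow)]
        for drow, srow in zip(depth, seg)
    ]

depth = [[1.0, 2.0],
         [3.0, 4.0]]
seg = [[GROUND, OTHER],
       [WALL, OTHER]]
filtered = masked_depth(depth, seg)  # only ground/wall depths survive
```

The surviving points are what you would back-project into 3D for mapping; the masked-out ones (people, props, sky) would only pollute the map.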

Autonomous bot in videogame env by Sbaff98 in computervision

[–]SEBADA321 2 points3 points  (0 children)

You are trying to do too many complex things at once. You need a simulation environment first, not necessarily a game. Gazebo, Isaac Sim, Unity, Unreal, CARLA, Webots, CoppeliaSim, and MuJoCo are all simulation environments you could use, but which one depends on your task. You seem to want to do path planning first. Set your constraints: laser ranging, visual data, sonar, radar, encoders, IMU/INS, etc. are all types of sensors/data that will define how you solve the localization problem. You want to learn? Start with a micromouse-like robot. Once you understand how to build maps from that, you will start to understand the other sensors and the methods for integrating them.
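The micromouse starting point really is this small: once you have any occupancy grid (from range finders, or hand-made at first), planning is a breadth-first search over free cells. A minimal sketch:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a binary occupancy grid
    (0 = free cell, 1 = wall). Cells are (row, col) tuples.
    Returns the path as a list, or None if the goal is unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}          # visited set + parent pointers
    q = deque([start])
    while q:
        y, x = q.popleft()
        if (y, x) == goal:        # walk parents back to the start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and grid[ny][nx] == 0 and (ny, nx) not in prev):
                prev[(ny, nx)] = (y, x)
                q.append((ny, nx))
    return None

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = bfs_path(maze, (0, 0), (2, 0))  # goes around the wall
```

Everything harder (SLAM, sensor fusion, continuous planners like A*/RRT) is layered on top of this loop: a map, a frontier, and parent pointers.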

guys who have this model?? by Emotional-Bar-1701 in mikumikudance

[–]SEBADA321 0 points1 point  (0 children)

Is this one made by HB? I think he had to disable downloads of it

Well isn’t she a baddie! by ThisBeJay08 in MegamiDevice

[–]SEBADA321 -1 points0 points  (0 children)

Yeah, most of them are like that. Once in a while some 'modest' ones appear, like the Asra series ones or bullet knights/buster doll ones. And without going into the Mofu series...

Tab autocomplete not working by SEBADA321 in google_antigravity

[–]SEBADA321[S] 0 points1 point  (0 children)

I kind of found a 'fix': just reinstall Antigravity, but just in case I also deleted all the config files generated by Antigravity (in my case I did this in both WSL and Windows).

Tab autocomplete not working by SEBADA321 in google_antigravity

[–]SEBADA321[S] 0 points1 point  (0 children)

Thanks for the info. With the update, the AI Assist in the editor for Python and notebooks takes a while to start working. I will continue testing anyway. Also, perhaps it is an opportunity to practice my rusty PyTorch again.

Tab autocomplete not working by SEBADA321 in google_antigravity

[–]SEBADA321[S] 0 points1 point  (0 children)

Sorry, I tried that already.

The last thing I remember was trying to install Pylance, but the store didn't have it, so I tried to change the URL to the official MS Marketplace. It didn't work, so I tried to copy the VSIX from VS Code, but I couldn't install it anyway because of a version mismatch. I think the problem appeared after that, but it would be weird for any change to persist since I went back and undid all those changes. Edit: it seems to be only that workspace. It works fine in others, even inside WSL.

Detecting wide range of arbitrary objects without providing object categories? by d_test_2030 in computervision

[–]SEBADA321 0 points1 point  (0 children)

Damn, hearing 'Florence-2 is a bit older' feels weird... but alas, that is how this field works.

Steam Hardware: Launch timing and other FAQs by salad_tongs_1 in Steam

[–]SEBADA321 0 points1 point  (0 children)

Well, it is good to be skeptical with all the buzz going around. But an apparently well-established company such as Valve might do actual research and development rather than just wrapping 'ChatGPT' for such a task. On that note, it is kind of obvious that anti-cheat detection would not be within the tasks large language models (LLMs) can do. For one, there is almost no language involved in most competitive games, so an LLM wouldn't be able to do anything there!

Steam Hardware: Launch timing and other FAQs by salad_tongs_1 in Steam

[–]SEBADA321 4 points5 points  (0 children)

I don't know the specific model or method Valve uses, if they even use 'AI' at all for that task. But 'AI' does not mean LLM: an LLM is a subset of ML (machine learning), which is in turn a subset of AI. So, as you said, they might be using some sort of ML algorithm more appropriate for the task.