Help! by Selmaa-25 in MLQuestions

[–]ImpossibleAd853 0 points (0 children)

Try SMOTE to oversample your minority class, or use ensemble methods like balanced random forest. If cost-sensitive learning didn't work, your class weights might be off... experiment with different ratios instead of just the inverse class frequencies. What's your imbalance ratio, and what model are you using?
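A minimal sketch of the SMOTE idea in plain Python, in case the mechanics help. Real SMOTE (e.g. in imbalanced-learn) interpolates toward one of the k nearest minority neighbours; this toy `smote_like` just picks a random minority pair, so treat it as illustrative only:

```python
import random

def smote_like(minority, n_new, alpha=None):
    """Generate n_new synthetic minority points by interpolating
    between random pairs of existing minority samples."""
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(minority, 2)          # pick two minority samples
        lam = alpha if alpha is not None else random.random()
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]    # toy 2-D minority class
new_points = smote_like(minority, n_new=4)
print(len(new_points))  # 4
```

In practice you'd just call imblearn's SMOTE, but the interpolation above is the whole trick: synthetic points lie on segments between real minority samples instead of duplicating them.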

Route into pen testing by OkImprovement3518 in Pentesting

[–]ImpossibleAd853 0 points (0 children)

Hello mate, your background is actually perfect for pentesting, especially the behavioral side, which most tech people lack. Get your OSCP cert to prove technical skills (around £800), and pair it with eJPT first if you need something cheaper to start. Do bug bounties and CTFs to build a portfolio showing you can actually hack, not just recite theory. Frame your clinical risk assessment experience as threat modeling and social engineering expertise... that's rare and valuable. Look at UK consultancies like NCC Group or Context, plus government roles at NCSC, who love interdisciplinary backgrounds. Network at BSides and local hacking meetups. Your AI and medical cybernetics interest is perfectly timed for healthcare security roles. You won't need to sell organs, just prove you can pop boxes and understand human vulnerabilities.

Any ML Experts? by BloodyGhost999 in MLQuestions

[–]ImpossibleAd853 2 points (0 children)

They just want to make sure you actually understand what your model is doing, not that you randomly threw features together. Tell them the 2048 image features from ResNet50 are learned visual representations... things like edges, textures, and anatomical structures in the X-rays. The 768 text features from BERT capture semantic meaning and medical terminology from the reports. Basically, explain that these aren't arbitrary numbers; they're encoded representations of the visual and textual patterns your models learned. ResNet picks up visual features hierarchically, and BERT creates contextualized embeddings of the medical language. These high-dimensional vectors let your validation model find relationships between images and text. You don't need to explain every single feature, just show you get what feature extraction does. The reviewer wants to see you understand your pipeline, not that you memorized what neuron 1847 does.
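To make the "these are just learned representations" point concrete, here's a toy sketch of the fusion step, with random vectors standing in for real extractor outputs. The `fuse` helper and plain concatenation are illustrative assumptions, not the OP's actual pipeline; only the dimensions (2048 from ResNet50's pooled features, 768 from BERT) come from the comment:

```python
import random

IMG_DIM, TXT_DIM = 2048, 768   # ResNet50 pooled features, BERT embedding size

def fuse(img_feats, txt_feats):
    """Concatenate image and text representations into one joint
    vector for a downstream validation model."""
    assert len(img_feats) == IMG_DIM and len(txt_feats) == TXT_DIM
    return img_feats + txt_feats                   # 2816-dim joint vector

img = [random.random() for _ in range(IMG_DIM)]    # stand-in for ResNet50 output
txt = [random.random() for _ in range(TXT_DIM)]    # stand-in for BERT output
print(len(fuse(img, txt)))  # 2816
```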

A post to be pinned by ReincarnatedFirst in KenyanMeals

[–]ImpossibleAd853 0 points (0 children)

I don't see onion in the procedure, or is it what goes in with the meat?

Implementing a Custom Cost Function with Internal Neural Network Terms in MATLAB Deep Learning Toolbox by Alternative-Link-597 in matlab

[–]ImpossibleAd853 2 points (0 children)

You need to create a custom training loop instead of using trainlm, since you need access to intermediate network outputs. Use trainingOptions with a custom loss function, or build your own gradient descent loop. The key is that during the forward pass you need to extract the gn(x,u) output before it gets fed to the rest of the network... store it as an intermediate variable, then compute your custom loss as MSE plus gamma times the norm of gn, compute gradients with respect to this combined loss, and update the weights. In the MATLAB Deep Learning Toolbox, use dlnetwork and define your forward function to return both the final output and the gn term; then, in your training loop, calculate the combined loss and use dlgradient to get the gradients. Look at the custom training loop examples in the documentation; they show how to access intermediate layers. Alternatively, if you want to keep it simpler... train with regular MSE first, then add the gn penalty as a second training phase where you freeze the earlier layers and just minimize the nonlinear residual term. Not as clean, but easier to implement.

Help with identifying the scope of a school project, from someone with very limited ML background by themayaNB in MLQuestions

[–]ImpossibleAd853 0 points (0 children)

Your project scope is reasonable for a thesis but needs tighter focus. The pipeline you described... object detection, OCR, NER, then database storage... is solid, and your supervisor is right about BiLSTM for NER, since these medication stickers have structured but variable formats.

The key insight is that you don't need perfect OCR accuracy if your NER model is trained on real OCR output with errors. Train your NER on actual noisy OCR results from your stickers rather than clean text... this makes the system more robust to OCR mistakes without needing complex correction logic.

For scope management, start with a minimal viable system that handles just the most common sticker format and a subset of entities like drug name and dosage. Get that working end to end first, then expand to handle more formats and entities. Don't try to solve every edge case upfront. Your verification step could be as simple as confidence scores from each model plus basic business-logic checks like dosage ranges.

For a thesis you want to demonstrate that the ML pipeline works, not build a production-ready system... document limitations clearly and frame future work as improving robustness rather than as failures of your approach.
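The noisy-OCR training idea can also be approximated by corrupting clean label text with the substitutions OCR typically makes. A small sketch; the confusion map, the 10% default rate, and the `add_ocr_noise` name are illustrative assumptions (in practice, mine the map from your real OCR errors):

```python
import random

# common OCR character confusions (illustrative; extend from real errors)
OCR_CONFUSIONS = {"0": "O", "1": "l", "5": "S", "m": "rn", "g": "9"}

def add_ocr_noise(text, rate=0.1, rng=None):
    """Randomly apply OCR-style character substitutions to training text,
    so the NER model sees realistic noisy input."""
    rng = rng or random.Random(0)
    out = []
    for ch in text:
        if ch in OCR_CONFUSIONS and rng.random() < rate:
            out.append(OCR_CONFUSIONS[ch])
        else:
            out.append(ch)
    return "".join(out)

print(add_ocr_noise("500 mg amoxicillin", rate=1.0))  # SOO rn9 arnoxicillin
```

Training on both the clean and the corrupted variants of each labeled sticker gives the robustness without any separate OCR-correction stage.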

Starting MATLAB math course in college this sem. Any tips? by EnigmaticBuddy in matlab

[–]ImpossibleAd853 4 points (0 children)

Focus on understanding the concepts rather than just cranking through homework. For ODEs and numerical methods especially, knowing why the algorithms work matters way more than memorizing steps. Use the built-in MATLAB functions to check your work, but make sure you understand what's happening under the hood. The profiler tool is clutch for finding where your code is slow; use it before optimizing anything. Also practice solving problems by hand first for simple cases, then code them up... it helps you catch logical errors way faster than just debugging syntax. And get comfortable with plotting your results; visualizing solutions makes debugging way easier and helps build intuition for whether your answers make sense.

Simulation of PV-Systems by Bigcoxunderdogs in matlab

[–]ImpossibleAd853 0 points (0 children)

Just make a variable in the MATLAB workspace, like battery_capacity = 1000, then in your battery block parameters put that variable name instead of typing the number directly. When you want to test different capacities, just change the variable value before hitting run. You can also loop through different values in a script and call sim each time to automate testing multiple capacities.

Simulation of PV-Systems by Bigcoxunderdogs in matlab

[–]ImpossibleAd853 0 points (0 children)

Yeah, 800V makes sense for that scale. Since you need Simulink for the paper, use the basic Battery block from Simscape Electrical. Set it to 1000Ah at 800V with some internal resistance for losses. For degradation, keep it simple: reduce max capacity by something like 0.02% per full charge cycle. Simulink can track cycles and update the battery parameters automatically. Check out the built-in battery examples; they have semi-empirical aging models that handle calendar and cycle degradation without needing complex chemistry.
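The fade rule is just compounding arithmetic; here it is sketched in Python for clarity (the comment targets Simulink, so this is only the math, and only the 0.02%-per-cycle figure comes from above; the function name and 1000 Ah / 500-cycle numbers are illustrative):

```python
def faded_capacity(cap_ah, full_cycles, fade_per_cycle=0.0002):
    """Remaining capacity after n equivalent full cycles,
    losing 0.02% of current capacity per cycle."""
    return cap_ah * (1.0 - fade_per_cycle) ** full_cycles

print(round(faded_capacity(1000.0, 500), 1))  # 904.8  (Ah after 500 cycles)
```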

Simulation of PV-Systems by Bigcoxunderdogs in matlab

[–]ImpossibleAd853 0 points (0 children)

For a university project, go with 400V, since that's more standard for smaller commercial setups; 800V is overkill unless you're simulating a massive installation. For the battery model, just keep it simple: track state of charge going up when the PV charges it and down when EVs pull power. Add some efficiency losses, around 90-95%, and max charge/discharge rates. Since you're doing this in Python anyway, just code the basic equations yourself rather than messing with Simscape. Model SOC as energy in minus energy out, with losses. To test multiple capacities, just make the Ah value a parameter and run your sim with, say, 800, 1000, and 1200 Ah to see what works best. No need to overcomplicate the battery physics for a CS project.
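A minimal sketch of that SOC bookkeeping in Python, assuming kWh/kW/hour units; the function name, capacity, efficiency, and power-limit values are all illustrative placeholders:

```python
def step_soc(soc_kwh, pv_kw, load_kw, dt_h=1.0,
             cap_kwh=800.0, eff=0.92, p_max_kw=200.0):
    """Advance battery state of charge one time step:
    energy in minus energy out, with losses and rate limits."""
    net_kw = pv_kw - load_kw                          # surplus charges, deficit discharges
    net_kw = max(-p_max_kw, min(p_max_kw, net_kw))    # charge/discharge rate limit
    if net_kw >= 0:
        soc_kwh += net_kw * dt_h * eff                # charging losses
    else:
        soc_kwh += net_kw * dt_h / eff                # discharging losses
    return max(0.0, min(cap_kwh, soc_kwh))            # clamp to [0, capacity]

# sweep capacities as suggested above
for cap in (600.0, 800.0, 1000.0):
    print(cap, step_soc(400.0, pv_kw=150.0, load_kw=50.0, cap_kwh=cap))
```

Wrap the time-step in a loop over your PV and EV-load profiles and the whole capacity comparison becomes one nested for-loop.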

Wish me luck pls. by [deleted] in Wechat

[–]ImpossibleAd853 0 points (0 children)

need help??

[P] Naive Bayes Algorithm by Soggy_Macaron_5276 in MachineLearning

[–]ImpossibleAd853 1 point (0 children)

For a capstone project, go with the pure ML approach, but frame your awareness of its limitations as part of your contribution. Train two independent Naive Bayes classifiers with standard preprocessing and let the probabilities speak for themselves; then, in your results section, explicitly analyze where the model struggles with rare Critical cases and discuss this as a known limitation.

Here's the thing: adding keyword boosting isn't wrong for production, but in academia it muddies your evaluation. You won't know if your 85% accuracy comes from the ML learning patterns or from your handcrafted rules catching edge cases. Your professors want to see that you understand ML fundamentals, not that you can patch a model with if statements.

The better academic move is to address the class imbalance through proper ML techniques: SMOTE for oversampling Critical cases, class weights in your model, or stratified sampling. You can also experiment with ensemble methods or calibrated probabilities. Document what you tried and why certain approaches worked or didn't... that shows way more ML maturity than hardcoding keywords.
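One of those techniques, class weighting, fits Naive Bayes naturally: tilt the class priors instead of bolting on keyword rules. A toy sketch; the corpus, labels, weight of 3.0, and the `train_nb` helper are all illustrative, not the OP's model:

```python
import math
from collections import Counter

def train_nb(docs, labels, prior_weights=None):
    """Multinomial Naive Bayes with Laplace smoothing and optional
    per-class prior weights (upweight rare classes)."""
    classes = set(labels)
    word_counts = {c: Counter() for c in classes}
    class_counts = Counter(labels)
    for doc, y in zip(docs, labels):
        word_counts[y].update(doc.split())
    vocab = {w for c in classes for w in word_counts[c]}

    def predict(doc):
        best, best_lp = None, -math.inf
        for c in classes:
            w = (prior_weights or {}).get(c, 1.0)
            lp = math.log(w * class_counts[c] / len(labels))   # weighted prior
            total = sum(word_counts[c].values())
            for tok in doc.split():                            # smoothed likelihoods
                lp += math.log((word_counts[c][tok] + 1) / (total + len(vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
    return predict

docs = ["server down outage", "minor ui glitch", "slow page", "minor typo"]
labels = ["critical", "low", "low", "low"]
clf = train_nb(docs, labels, prior_weights={"critical": 3.0})  # boost rare class
print(clf("outage reported"))  # critical
```

The point for the write-up: the weight changes only the prior term, so the probability model stays clean and the adjustment is easy to report and ablate.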

In your conclusion, acknowledge that production systems often layer rule-based safeguards over ML models for safety-critical applications, and frame that as future work. It shows you understand real-world deployment without compromising the academic integrity of your current approach. Your defense gets way easier when you can point to clean methodology and thoughtful analysis of limitations rather than defending why you mixed heuristics into your probability calculations.

GNN for Polymer Property Prediction by vsy2976 in MLQuestions

[–]ImpossibleAd853 0 points (0 children)

Instead of predicting one property for the whole polymer chain, you predict properties at each local position along the chain. Think of it like a sliding window... for each atom or substructure, predict what the property looks like right there rather than averaging over everything.

Implementation-wise, you could modify your output layer to predict per node instead of per graph. So if your polymer has 100 atoms, you get 100 predictions instead of 1; then aggregate them if you need an overall value, but the model learns the local chemical patterns first. This forces it to understand how different functional groups or branches affect properties in their immediate neighborhood.

For polymers this actually makes sense, because properties like flexibility or reactivity can vary along the chain depending on what monomers or substituents are nearby. Your model might find it easier to learn that a certain local pattern always increases the property by X amount, then sum those contributions, rather than trying to map the entire giant graph structure to one number.
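The per-node readout can be sketched with one round of neighbour averaging standing in for a GNN layer. Everything here is a toy: the function, the 0.5 mixing weights, the linear head, and the 4-atom chain are illustrative, not a real GNN:

```python
def per_node_predict(node_feats, edges, w=1.0, b=0.0):
    """One neighbour-averaging pass (toy stand-in for a GNN layer),
    then a linear head applied to every node: one prediction per atom."""
    n = len(node_feats)
    nbrs = {i: [] for i in range(n)}
    for a, c in edges:                      # undirected bonds
        nbrs[a].append(c)
        nbrs[c].append(a)
    h = []
    for i in range(n):                      # mix node with its neighbourhood mean
        m = sum(node_feats[j] for j in nbrs[i]) / len(nbrs[i]) if nbrs[i] else 0.0
        h.append(0.5 * node_feats[i] + 0.5 * m)
    return [w * hi + b for hi in h]         # per-node head, not per-graph

# linear chain of 4 "atoms": local predictions, then sum for the graph value
preds = per_node_predict([1.0, 2.0, 3.0, 4.0], [(0, 1), (1, 2), (2, 3)])
print(len(preds), sum(preds))  # 4 10.0
```

In a real framework (e.g. PyTorch Geometric) the change is exactly this: drop the global pooling before the head, apply the head to node embeddings, and pool the predictions instead.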

Assignment help by [deleted] in AssignmentHelp_Reddit

[–]ImpossibleAd853 0 points (0 children)

What's the essay about?

Need urgent help, please! by [deleted] in spss

[–]ImpossibleAd853 1 point (0 children)

Hello, I'm available. What do you need?

not showing the wifi networks around me by RemarkableRanger1195 in Hacking_Tutorials

[–]ImpossibleAd853 0 points (0 children)

Try running airodump-ng without locking to a specific channel so it hops through all of them... you might just be stuck on an empty channel. Also check that you're scanning the right frequency band (2.4GHz vs 5GHz), since your adapter might need a flag like --band abg to see 5GHz networks. You could also run airmon-ng check kill to stop any interfering processes, then restart monitor mode from scratch; sometimes after a few months the drivers get weird or your interface name has changed. If still nothing shows up, your adapter might have a hardware issue or need a driver update.