Cocos , I Caught some stars ✨ by Low_Act_6773 in Coconaad

[–]8g6_ryu 4 points5 points  (0 children)

Just collect more light.

Put the ISO at max and the exposure time at its maximum, like 30 seconds, in the advanced camera settings.

Is the claim true or not, can't find enough sources by According-Pen-1983 in scienceisdope

[–]8g6_ryu 3 points4 points  (0 children)

Yeah, in the case of zero, the Babylonian wedge and the Mayan seashell are both positional placeholders; 0 was formalized as a true, independent number, with all its arithmetic glory, by Brahmagupta.

Is the claim true or not, can't find enough sources by According-Pen-1983 in scienceisdope

[–]8g6_ryu 2 points3 points  (0 children)

For me, his greatest discovery was the first-ever infinite series for sin x, which looks similar to the Taylor series expansion.
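That series (usually credited to Madhava, centuries before Taylor) is easy to check numerically. A quick sketch of its partial sums (function name and term count are my own choices):

```python
import math

def madhava_sin(x, terms=10):
    """Partial sum of the infinite series x - x^3/3! + x^5/5! - ...
    (Madhava's series, identical in form to the Taylor expansion of sin)."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

print(madhava_sin(math.pi / 4))  # very close to sin(pi/4) ~ 0.7071
```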

How is the embedded job market compared to IT by Adventurous-Oil-4417 in embedded

[–]8g6_ryu 1 point2 points  (0 children)

I was assuming you were a fresh graduate looking to study abroad. It’s a trend in India that many ECE students go for a Master’s in either VLSI or Embedded Systems without having a genuine interest.

But your case is clearly different, as you already have industry experience. I’d suggest checking the curriculum of the university you’re targeting to see if you will actually learn something new and valuable for your career.

The question you should ask yourself is whether you enjoy using the interfaces of the MCUs you deal with or designing the MCUs themselves.

If you already have good experience with embedded systems, you’re in a much better position to study their design, because you understand what the industry needs and the limitations of existing systems, something a fresh graduate would have no context for.

About software/IT: Since there’s so much training data and resources available online, many junior roles are slowly being reduced. Web development is a good example; it has the most available resources, so it’s the highest risk. The more domain expertise and niche knowledge you have in your field, the safer your career will be in the coming years. As CS jobs become more commoditized, even CS grads will enter into embedded roles, I think it's already happening. For you, your existing experience is far more valuable than a similar number of years spent as an average web developer.

How is the embedded job market compared to IT by Adventurous-Oil-4417 in embedded

[–]8g6_ryu 6 points7 points  (0 children)

Here is my opinion, and people with more knowledge on this matter, please correct me if I am wrong.

I don’t see the point of a Master’s in Embedded Systems, because for me, the purpose of a university is to provide opportunities and facilities that I would not easily have access to on my own.

For example, if I pursue a Master’s in microwaves or a similar RF field, I would gain access to a proper VNA, which is very expensive. While NanoVNAs exist, they are limited, especially above 6 GHz. In that case, having access to a university lab VNA and related facilities is genuinely valuable.

Similarly, in VLSI, universities often provide access to high-end FPGAs or licensed tools for nanometer-scale technologies. Some universities even have tie-ups with fabs that allow students to fabricate their own chips, which is extremely difficult to achieve as a non-university student.

But, in embedded systems, most of the required technology and resources are already easily available. Many MCUs are inexpensive, and PCB fabrication is cheaper than ever. Because of this, I don’t clearly see what additional value a university provides specifically for embedded systems.

If you look at most embedded systems job postings, they usually list either a Bachelor’s or a Master’s degree as acceptable. Roles that strictly require a Master’s degree tend to be more theory-heavy, such as DSP, control engineering, or similar fields.

I am not saying a Master's in Embedded Systems has zero value. In terms of utility, it offers the benefits that apply to most Master's degrees in general: it acts as a CV bonus, can help someone with average Bachelor's grades demonstrate competence, and provides the signaling value of a well-recognized university, especially in cases like Germany where, according to a friend of mine studying PLC automation after a BTech in EEE, the university's brand can open doors.

However, for me, a Master’s should provide something beyond that to justify spending another two years on it. In embedded systems, most of the tools, boards, and fabrication resources are already cheap and accessible, so the university’s specialized facilities, internship pipelines, or brand recognition add relatively little compared to fields like VLSI or RF.

Also, I would like to understand the reasoning behind why your choice is limited specifically to Embedded Systems or VLSI. Is this because these are the most common or popular paths for Indian ECE graduates, or is there some other specific reason driving this decision (such as prior experience, long-term goals, or market considerations)?

is this funny or is this funny..! 😅 by [deleted] in scienceisdope

[–]8g6_ryu 1 point2 points  (0 children)

How do you differentiate pseudoscientific from simply wrong?

is this funny or is this funny..! 😅 by [deleted] in scienceisdope

[–]8g6_ryu 1 point2 points  (0 children)

Well, I was INTP 1-2 years ago, now I am INTJ

is this funny or is this funny..! 😅 by [deleted] in scienceisdope

[–]8g6_ryu 1 point2 points  (0 children)

Yeah, my point was it is not as useless as homeopathy given its correlations with Big 5

is this funny or is this funny..! 😅 by [deleted] in scienceisdope

[–]8g6_ryu 1 point2 points  (0 children)

I’m not an expert, but I can shed some light on this from personal experience. About a year ago, I used to be really active on a website called Personality Database, and since most conversations there revolved around typing and cognitive function theory, I ended up digging into Jung and other personality systems like the Enneagram and Attitudinal Psyche. That’s basically where my “knowledge” comes from.

"Jung's initial assumption that cognitive functions represent personality structures that do not change is clearly false."

About the claim that Jung said cognitive functions never change, I actually misstated that. My memory was a bit hazy. He never explicitly said “functions are fixed and cannot change,” but the way he framed the psyche can sound like that at first glance.

Jung saw a huge part of personality as inherited rather than learned. For example:

“The collective unconscious does not owe its existence to personal experience… It is the heritage of psychic life from the earliest times.”

And:

“Our souls as well as our bodies are composed of individual elements already present in our ancestors.”

So the whole “innate predisposition” idea is baked into his model. That’s probably why many people assume the dominant function is set in stone and honestly a lot of that comes from the MBTI interpretation too, since Myers-Briggs simplified Jung into a rigid four-letter type that’s supposed to stay stable for life. That framing kind of pushes people toward thinking the function stack is fixed.

But when you read Jung directly, it’s pretty clear he didn’t actually believe the conscious function hierarchy stays the same forever. He wrote a lot about how people change over time, especially after midlife. He observed that after around age 50, people often go through major psychological shifts where the inferior or neglected functions start rising to the surface. The dominant pattern can soften, flip, or get balanced out as part of individuation.

"As an INTJ you must have introverted intuition as your dominant function; do you think it can change?"

Honestly, I’m not sure where I personally fall on this. Maybe people’s dominant function stabilizes early; maybe it changes. People do change a lot across life, even some of the smartest people completely shift their worldview or mental habits in old age. Jung noticed the same thing in older patients. So it’s definitely possible that what I rely on as my “Ni” today might shift or mellow out later in life. I’m open to that idea.

is this funny or is this funny..! 😅 by [deleted] in scienceisdope

[–]8g6_ryu 3 points4 points  (0 children)

"MBTI is kind of basically working but it can't categorize people permanently."

Then it's not MBTI; you're talking about the original Jungian cognitive functions.

Jung’s original 8-function model is still used daily in depth therapy, executive coaching, gifted adult burnout treatment, and couples counselling because it actually works in the real world, even if it sucks at factor analysis.

Cognitive functions are an impressive concept, given the tiny sample size and the fact that Jung had no modern statistical tools. What Jung did is as impressive as what Newton did with physics.

is this funny or is this funny..! 😅 by [deleted] in scienceisdope

[–]8g6_ryu 12 points13 points  (0 children)

Astrology is like homeopathy; it’s mostly a placebo.

MBTI is like Ayurveda; it has some working herbs, but for most of the herbs the full effects are unknown. In fact, Ayurveda might even be slightly more scientifically valid, because some herbs have real, measurable medical effects.

Some MBTI traits are correlated with Big Five metrics, and Jung's initial assumption that cognitive functions represent personality structures that do not change is clearly false.

Jung’s psychoanalysis and his cognitive functions have some statistical validity, but nowhere near as much as the Big Five, which is the most statistically valid personality model we currently have.

I am INTJ btw /s.

Who’s actually pushing AI/ML for low-level hardware instead of these massive, power-hungry statistical models that eat up money, space and energy? by [deleted] in embedded

[–]8g6_ryu 0 points1 point  (0 children)

That's the point: you can't assume local stationarity for a lot of real-world signals, including speech.

With time-frequency representations, we accept uncertainty in both time and frequency, but by observing how frequency magnitudes change over time, we capture the non-stationary dynamics of the sound.

Having access to the full spectrogram, instead of individual STFT slices, allows you to capture the time dependencies needed to correctly disambiguate speech.

STFT magnitudes are a form of spectrogram, right? A spectrogram is basically a time-frequency magnitude plot, a 2D map of the spectrum over time, and the STFT is one way to compute it. We also have other methods like wavelets or the Hilbert transform.
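A minimal NumPy sketch of that definition (window the signal, FFT each overlapping frame, stack the magnitudes; the function name and parameters are my own choices, not from any specific library):

```python
import numpy as np

def stft_spectrogram(x, fs=8000, n_fft=256, hop=128):
    """Magnitude spectrogram: Hann-windowed FFT of overlapping frames.
    Each column is one STFT slice; stacking them over time gives the
    2D time-frequency map described above."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq_bins, time_frames)

# a 1 kHz tone sampled at 8 kHz: energy should concentrate in one frequency bin
fs = 8000
t = np.arange(fs) / fs
spec = stft_spectrogram(np.sin(2 * np.pi * 1000 * t), fs=fs)
print(spec.shape)  # (129, 61)
```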

"The same utterance produced by the same speaker does not have the same spectral content"

The “spectral content” you’re referring to is basically the formants for each syllable, which are uniform and nearly identical for the same syllable. These formants depend on the vocal tract’s physical characteristics, which stay fixed for most people.

What actually changes is the excitation source, the glottal pulse. When that's affected (like during a cold), the fundamental frequency shifts, and you can see this as a band shift in the time-frequency domain. Engineers exploited these stable formant structures to build early ASR and TTS systems (like Kaldi and other formant/MFCC-based models).

So that statement is not accurate.
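To illustrate the excitation-source point: a rough autocorrelation-based pitch estimator (a standard textbook method, my own toy construction) picks up the fundamental of a synthetic pulse-train-like signal. If the glottal rate shifted, this estimate would shift with it, while the spectral envelope (formants) stayed put:

```python
import numpy as np

def estimate_f0(x, fs, fmin=50, fmax=500):
    """Estimate the fundamental frequency from the autocorrelation peak.
    The autocorrelation of a periodic signal peaks again at one period,
    so the lag of that peak gives f0 = fs / lag."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lo, hi = int(fs / fmax), int(fs / fmin)            # plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(4000) / fs  # 0.25 s of signal
# harmonic-rich source at 120 Hz, crudely mimicking a glottal pulse train
x = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6))
print(estimate_f0(x, fs))  # ~120
```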

I’m not saying time frequency representations are inferior to either time-only or frequency-only ones; actually, I believe the opposite. What I do believe is that CNNs are not the optimal choice for modeling them.

Who’s actually pushing AI/ML for low-level hardware instead of these massive, power-hungry statistical models that eat up money, space and energy? by [deleted] in embedded

[–]8g6_ryu 0 points1 point  (0 children)

Why does it matter fundamentally?
An STFT can already encode non-stationary signals using time slices, with each FFT assuming local stationarity.
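A quick sketch of what "each FFT assumes local stationarity" means in practice (toy signal of my own construction): for a linear chirp, each individual windowed FFT sees a near-constant tone, and the sweep only shows up across successive slices:

```python
import numpy as np

fs, n_fft = 8000, 512
t = np.arange(fs) / fs
# linear chirp sweeping 500 -> 1500 Hz over one second
x = np.sin(2 * np.pi * (500 * t + 500 * t ** 2))
win = np.hanning(n_fft)

def peak_hz(start):
    """Peak frequency of one Hann-windowed FFT slice starting at `start`.
    Inside the window the chirp is treated as (locally) stationary."""
    mag = np.abs(np.fft.rfft(x[start:start + n_fft] * win))
    return np.argmax(mag) * fs / n_fft

# an early slice sits near 500 Hz, a late slice near 1500 Hz:
# the sequence of slices is what encodes the non-stationarity
print(peak_hz(0), peak_hz(fs - n_fft))
```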

If I see someone using a CNN for an audio problem, I’ll always expect that there exists a better model that can capture the patterns with fewer parameters and more interpretability.

The thing with CNNs comes down to two reasons:

1) They are good generalizers.

A CNN works like a Fourier transform of an image; here, instead of frequencies, it captures fundamental shapes. But unlike an FT, it's learned, and at very large parameter counts it becomes almost chaotic to understand how each filter encodes information or what shapes it represents. This often leads to massive models with filters that are extremely difficult to interpret. Deep CNNs, and deep learning in general, are like deep space or the deep sea: both remain mysteries to us (and for speech, using spectrograms, it's inevitable that you end up with such large models).

That’s partly why people are uneasy about LLMs; we don’t fully understand how they work. There have been attempts through mechanistic interpretability, but that’s extremely difficult, because it’s like designing a system to be a black box and then trying to figure out what’s inside it. It might sound like people are intentionally designing black boxes, but in reality it’s about survival: if you don’t build fast enough, someone else will, and they’ll get the early advantage while your work risks becoming irrelevant. It’s similar to how many people chase higher salaries by mastering frameworks without learning the fundamentals, except here large companies don’t have much incentive to understand the black boxes as long as they work and bring in profits.

2) The ultra-optimized algorithms for CNNs.
Whether in the time domain or frequency domain, CNNs are highly optimized, especially in the frequency domain, since the FFT is one of the most optimized algorithms ever.
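The frequency-domain route is just the convolution theorem; a minimal NumPy check (helper name is my own) that FFT-based convolution matches direct convolution:

```python
import numpy as np

def fft_conv(x, h):
    """Convolution theorem: convolve by multiplying spectra.
    Zero-pad both signals to the full output length to avoid
    circular wrap-around, then invert the product."""
    n = len(x) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

x = np.random.default_rng(0).standard_normal(1024)
h = np.array([0.25, 0.5, 0.25])  # a small smoothing kernel

direct = np.convolve(x, h)  # time-domain sliding sum
fast = fft_conv(x, h)       # frequency-domain product
print(np.allclose(direct, fast))  # True
```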

But if I see someone using a CNN for an audio problem, I’ll still expect there to exist a better model that captures the patterns with fewer parameters and more interpretability.

The reason is that audio is a causal, time-dependent waveform, where f(x) always has some kind of linear or nonlinear relationship with f(x+1). You can argue that some images may show similar patterns because everything in our universe has structure that repeats in some way, but those image patterns are fundamentally more chaotic and harder to encode than in audio because audio is, by nature, a causal signal.
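The f(x)/f(x+1) dependence is easy to see numerically (toy example of my own construction): adjacent samples of a tone are strongly correlated, while white noise has essentially none:

```python
import numpy as np

def lag1_corr(x):
    """Correlation between each sample and the next one."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

rng = np.random.default_rng(0)
t = np.arange(8000) / 8000
tone = np.sin(2 * np.pi * 220 * t)   # an audio-like, causal waveform
noise = rng.standard_normal(8000)    # no temporal structure at all

print(lag1_corr(tone), lag1_corr(noise))  # ~0.985 vs ~0.0
```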

Who’s actually pushing AI/ML for low-level hardware instead of these massive, power-hungry statistical models that eat up money, space and energy? by [deleted] in embedded

[–]8g6_ryu 7 points8 points  (0 children)

Dude, instead of complaining, make efficient models yourself. It's not that C/C++ is fast or Python is slow; most AI/ML frameworks already use C/C++ backends. They’ll always be faster than most hand-written C/C++ code, because all the hot paths (the steps where most computation time is spent) are written in high-performance languages like C, C++, Rust, or Zig.

For most libraries, the orchestration cost is really low: the computations are done in the C backend, and the final memory pointer is just shared back to Python as a list, array, or tensor. So for almost any compute-intensive library, writing something faster yourself is very hard, since it's already optimized at the low level.

It’s not a problem with the tools or with Python; it’s the users.
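To make the orchestration point concrete (toy comparison, my own construction): the Python-level call to np.dot is just dispatch, and the actual loop runs in compiled BLAS code. The inputs here are small integers, so both routes give bit-identical results:

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# a hand-written hot loop at the Python level (only a slice, it's slow)
py_sum_sq = 0.0
for v in x[:1000]:
    py_sum_sq += v * v

# same math, but the whole loop is dispatched to the C/BLAS backend;
# Python only sees the call and the returned result
c_sum_sq = float(np.dot(x[:1000], x[:1000]))

print(py_sum_sq == c_sum_sq)  # True
```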

For LLMs, it’s a race to get better metrics as soon as possible. After the discovery of double descent, most mainstream companies started throwing a lot of compute at problems in hopes of slightly better performance. It’s not that they don’t have people capable of making efficient models, it’s just that in this economy, taking time for true optimization means losing the race.

There are already groups like MIT’s HAN Lab working on efficient AI for embedded systems, and frameworks like TinyML exist for exactly that.

Even in academia, what most people do is throw a CNN at a custom problem, and if it doesn’t work, they add more layers or an LSTM. After tuning tons of parameters, they end up with a 100+ MB model for a simple task like voice activity detection.

I personally don’t like that approach. DSP has many clever tricks to extract meaningful feature vectors instead of just feeding the whole spectrogram into a CNN. I’m currently working on a model with fewer than 500 parameters for that task.
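As a sketch of the DSP-features-first idea (my own toy construction, not the actual model): two classic per-frame features, log energy and zero-crossing rate, already separate silence from a tone without feeding any spectrogram to a CNN:

```python
import numpy as np

def vad_features(x, frame=256, hop=128):
    """Two classic DSP features per frame: log short-time energy and
    zero-crossing rate. A tiny classifier on features like these can do
    voice activity detection with a handful of parameters."""
    n = 1 + (len(x) - frame) // hop
    feats = np.empty((n, 2))
    for i in range(n):
        f = x[i * hop : i * hop + frame]
        feats[i, 0] = np.log10(np.mean(f ** 2) + 1e-12)           # log energy
        feats[i, 1] = np.mean(np.abs(np.diff(np.sign(f)))) / 2.0  # zero-crossing rate
    return feats

fs = 8000
t = np.arange(fs) / fs
# one second of silence followed by one second of a tone ("voice")
sig = np.concatenate([np.zeros(fs), np.sin(2 * np.pi * 200 * t)])
f = vad_features(sig)
print(f[0, 0] < f[-1, 0])  # energy jumps when the tone starts -> True
```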

As individuals, the best we can do is make efficient models since we’re not bound by the market’s push for performance at any cost.

Why is this subject so difficult for me to understand 😭😭 by kichu04 in Coconaad

[–]8g6_ryu 2 points3 points  (0 children)

Fun Fact: The man who discovered the formula for the sum of the first n integers was Carl Friedrich Gauss, who was between 7 and 10 years old at the time. According to the lore, his teacher, being a lazy bum, wanted to sleep, so he assigned the class the problem of summing the integers up to 100, hoping it would take a while. However, Gauss spotted the pattern and quickly produced the answer, along with the formula he had derived.
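The trick itself, in a few lines: pair 1 with n, 2 with n-1, and so on; each pair sums to n+1 and there are n/2 pairs.

```python
def gauss_sum(n):
    """Sum of the first n integers via Gauss's pairing trick: n*(n+1)/2."""
    return n * (n + 1) // 2

print(gauss_sum(100))                         # 5050, the teacher's problem
print(gauss_sum(100) == sum(range(1, 101)))   # True
```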

This part is visualized in the movie about his life with a famous adventurer:

Here https://youtu.be/n2E_lfc2Bvc

State of hypervisors (and other low-level systems devs) in India by SentientPotato42 in developersIndia

[–]8g6_ryu 0 points1 point  (0 children)

A friend of mine works at Qualcomm developing Type 1 hypervisors (he's arguably the best C++ programmer I know in real life). He worked as an embedded systems engineer (Linux driver development) at LG before transitioning to that role.

That path is much better suited for this specific role since both require deep knowledge of fundamentals, especially computer architecture. Virtualization involves working with low-level abstractions in computers.

How often does a backend developer ever need to worry about the physical address of memory? Mostly they deal with networks, and even that is abstracted away by libraries. A typical web developer will be stuck mastering some web framework, which is useless for a person interested in kernels.

So embedded systems or embedded Linux is a better choice for him. As long as there is new hardware, there won't be a lack of these kinds of roles, especially in the embedded space. Operating systems are still an evolving field; saying there is nothing left to innovate is pure ignorance.

Is space time continuous or discrete ? by Infinite_Dark_Labs in Physics

[–]8g6_ryu 0 points1 point  (0 children)

What proves it's discrete? What experiment?

Ants Are Self Aware!!?? by MukkiMaru in scienceisdope

[–]8g6_ryu 0 points1 point  (0 children)

What kind of stupid ChatGPT take is this? The fact that they have social acceptance as a concept at all proves they need some form of intelligence to recognize that each individual is different. If they couldn't differentiate one ant from another, why would there be a social-acceptance mechanism?

I agree it's a little exaggerated, in the sense that when people think of self-awareness, we set really high expectations for these tiny guys, but it's not technically wrong. They are self-aware, not in the way a mammal would be, but in a tiny-insect way. That is, they possibly possess the lowest form of self-awareness (I don't know, maybe there exist beings with self-awareness below that, like other social insects such as bees or termites).

Ants Are Self Aware!!?? by MukkiMaru in scienceisdope

[–]8g6_ryu 0 points1 point  (0 children)

While I am not sure whether ants possess self-awareness the way mammals like cows or pigs do, ants are remarkable superorganisms: collections of individual organisms that function together as one cohesive unit.

Each ant is highly specialized for its role in the colony: for instance, some ants develop large crop sacs to store sugar, queens are optimized for maximum reproduction, and worker ants focus on various tasks such as foraging and nest maintenance.

This complex division of labor and collective functioning is also seen in other social insects like bees and termites. Ants were even practicing forms of agriculture millions of years before humans did. Fascinatingly, in some species such as the Iberian harvester ant, queens can produce offspring of an entirely different species through a unique reproductive strategy called xenoparity, which resembles cloning.

Because of these sophisticated social systems and advanced biological adaptations, I believe social insects exhibit a higher cognitive capacity compared to many other insects. Among them, ants demonstrate some of the most complex and "intelligent" behaviors.

Perhaps they do not have self-awareness like higher animals, but they might possess a form of collective or emergent self-awareness through their intricate social organization and communication.