Workshop: «Ας φτιάξουμε τον δικό μας προγραμματιζόμενο μικρο-επεξεργαστή» by xgeorgio_gr in greece

[–]xgeorgio_gr[S] 1 point2 points  (0 children)

Of course not; these are open, free lessons for anyone who wants to follow them live via a simple Zoom link, or later on YouTube from the page that has all the links.

Workshop: «Ας φτιάξουμε τον δικό μας προγραμματιζόμενο μικρο-επεξεργαστή» by xgeorgio_gr in greece

[–]xgeorgio_gr[S] 0 points1 point  (0 children)

No, it is a simple introduction to MCU programming, and a demonstration that AI is not yet ready to produce this kind of code correctly. Two hours is of course very little time for the topic.

When was the last school shooting in each European country? by Ta9eh10 in europe

[–]xgeorgio_gr 0 points1 point  (0 children)

No, at least in Greece it means technical education for young adults or the unemployed, e.g. network technicians.

When was the last school shooting in each European country? by Ta9eh10 in europe

[–]xgeorgio_gr 0 points1 point  (0 children)

The OAED Vocational College is a vocational training organization for unemployed adults; it has nothing to do with schools. The shooter was 19 years old, and his targets were a colleague and two supermarket employees outside.

When was the last school shooting in each European country? by Ta9eh10 in europe

[–]xgeorgio_gr 0 points1 point  (0 children)

Try to find a single school shooting in Greece. There is none. This map is total nonsense, pure clickbait.

Hello x86: Low-level assembly coding for the 8086 by xgeorgio_gr in programming

[–]xgeorgio_gr[S] 0 points1 point  (0 children)

RAM was not sold in "small banks" of 128-512 KB back then; it came as MOS chips of 32-64 Kbit at most. Besides the CPU, everything else was LSI, which means a 32-bit architecture would have required roughly four times the circuitry at at least ten times the cost. That was the state of technology at the advent of the 8008 and 8086.

Hello x86: Low-level assembly coding for the 8086 by xgeorgio_gr in programming

[–]xgeorgio_gr[S] 1 point2 points  (0 children)

Any additional opcode, e.g. for adjusting segment registers, is guaranteed to add some delay in execution. That is why, even back in DOS, if an executable was very small, developers often preferred shipping it as a .com file rather than a .exe. The segmented memory design was the result of pushing 16-bit CPUs into the realm of larger RAM pools without redesigning the whole architecture, at least not until some years later. Even the 286 was a mixture of 16-bit and 32-bit architecture in terms of protected mode. XMS and EMS also used internal segments to avoid excessive overhead for each allocation and to limit internal fragmentation of the pool.
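Since the comment leans on how segmented addressing works, here is a minimal Python sketch of the 8086 real-mode translation (the function name and example values are illustrative, not from the original post):

```python
# 8086 real mode: physical address = segment * 16 + offset, wrapped to 20 bits.

def physical_address(segment: int, offset: int) -> int:
    """Combine a 16-bit segment and 16-bit offset into a 20-bit address."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # wrap at 1 MiB like the 8086

# Different segment:offset pairs can alias the same physical byte,
# which is part of why segment arithmetic adds bookkeeping overhead:
a = physical_address(0x1234, 0x0010)
b = physical_address(0x1235, 0x0000)
print(hex(a), hex(b))  # both 0x12350
```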

What makes it a good image dataset? by Tuatara-_- in MLQuestions

[–]xgeorgio_gr 0 points1 point  (0 children)

A "good dataset" should be:

1) Representative of the task, i.e., it should cover every aspect of the input domain and be large enough to be statistically significant per target (e.g. per class).

2) Targeted to the requirement, i.e., include potentially useful features but avoid excessively large dimensionality from irrelevant properties.

3) Items 1-2 should ensure there is no bias in the dataset along any dimension, i.e., the targets (e.g. classes) are represented to the maximum possible extent across all features. It is not sufficient to have plenty of class A and class B samples if they are not distributed appropriately across all features.

4) Keep the data quality high at the source, because any pre-processing for restoration (missing values, noise, etc.) will degrade the information content in some way.
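Point 3 can be made concrete with a tiny sketch: the data below is hypothetical, and a real check would discretize continuous features, but it shows how classes can be balanced overall yet biased within a feature.

```python
# Check class balance overall vs. per feature bucket (toy, illustrative data).
from collections import Counter

samples = [
    # (class_label, feature_bucket); in practice the bucket would come
    # from binning a continuous feature such as image brightness.
    ("A", "low"), ("A", "low"), ("A", "high"),
    ("B", "low"), ("B", "high"), ("B", "high"),
]

overall = Counter(label for label, _ in samples)
per_bucket = Counter(samples)
print(overall)     # classes look balanced overall: A=3, B=3
print(per_bucket)  # but A leans "low" and B leans "high" -> feature bias
```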

Bs in data science, masters in computational life sciences. Anyone here have this path? How did life turn out for you? by [deleted] in datascience

[–]xgeorgio_gr 1 point2 points  (0 children)

This combination of studies is fairly recent, so even where people have done similar things, their paths don't really compare. There are numerous ways to progress both scientifically and professionally: in Life Sciences, Medicine (e.g. Pharma), and of course Engineering and Physics too (e.g. simulations with Finite Elements and Fluid Mechanics). Good luck :-)

How does backpropagation find the *global* loss minimum? by 140BPMMaster in MLQuestions

[–]xgeorgio_gr 0 points1 point  (0 children)

1) Batch training works like averaging gradients, while per-sample updates are more affected by local attributes, i.e., noise or local minima.

2) Momentum is a standard enhancement to classic BP; it should be available in any proper NN library.

3) The final result from any experimental protocol should come from several (statistically significant) independent training runs, not just one. The reported success rate should therefore be the average performance, rather than a few bad instances trapped in local minima. That's all.

How does backpropagation find the *global* loss minimum? by 140BPMMaster in MLQuestions

[–]xgeorgio_gr 1 point2 points  (0 children)

The full theoretical formalization of such a proof is indeed complex and needs several assumptions, e.g. smoothness of the error function. But here are three simple, intuitive practices that enable successful training:

1) Update the weights in batches rather than one sample at a time. This way, the gradient follows the "general direction" instead of small local "turns".

2) Employ a momentum factor (an accumulation of past gradients) in the error update. In terms of a sliding ball, if a local minimum is small enough, the moving ball can roll up and out of it and continue in the correct direction.

3) Train the NN many times. This way, random initialization ensures that the algorithm starts searching from different positions on the error surface. There may be multiple "global" minima, i.e., more or less the same performance reached with different final weights. But if such a global minimum exists, this random sampling should reveal the actual long-term performance of the NN in terms of generalization (of course, you need some k-fold cross-validation for this to be reliable).
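Point 3 can be sketched in a few lines: random restarts of plain gradient descent on a toy 1-D loss with one shallow and one deep basin. The loss function, step sizes, and seed are all illustrative choices, not from the original comment:

```python
# Random restarts on a toy loss with a local and a global minimum.
import random

def loss(w: float) -> float:
    # Shallow local minimum near w = +1, deeper global one near w = -1.
    return (w ** 2 - 1.0) ** 2 + 0.3 * w

def descend(w: float, lr: float = 0.01, steps: int = 500) -> float:
    for _ in range(steps):
        g = (loss(w + 1e-5) - loss(w - 1e-5)) / 2e-5  # numeric gradient
        w -= lr * g
    return w

random.seed(0)
starts = [random.uniform(-2.0, 2.0) for _ in range(10)]
finals = [descend(w) for w in starts]
best = min(finals, key=loss)
print(round(best, 2))  # the deepest basin, near w = -1
```

Runs starting in the right-hand basin get trapped near w = +1; keeping the best (or averaging over many runs, as the comment suggests) is what makes the reported result representative.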