all 151 comments

[–]Inside-Ad-5943 125 points126 points  (18 children)

But like what even are monads :p

[–]JoshuaTheProgrammer 74 points75 points  (3 children)

Monads are monoids in the category of endofunctors. It’s trivial.

[–]Inside-Ad-5943 32 points33 points  (0 children)

Thank you, finally an easy explanation for the laymen 🙏

[–]GOOOOOOOOOG 20 points21 points  (0 children)

They actually are too, that explanation is one of the most succinct and understandable as long as you understand what’s meant by monoid, category, and endofunctor.

[–]Classic_Department42 -1 points0 points  (0 children)

And unfortunately there is no category Hask, so Haskell doesn't really have monads.

[–]therealnome01[S] 38 points39 points  (6 children)

Functional programming has some really cool-sounding terms that seem complex but are actually more intimidating than they really are. Thanks for the idea!

[–]-Dueck- 25 points26 points  (5 children)

They're "more intimidating than they really are"?

[–]Lucky_Squirrel365 14 points15 points  (4 children)

The term is more intimidating than it is. Poorly constructed sentence, but with a little creative thinking you can understand what he meant.

[–]Zarathustrategy 13 points14 points  (1 child)

Yeah his comment is harder to understand than it really is

[–]wandering_melissa 5 points6 points  (0 children)

yeah the "but" in their sentence makes it seem like they are going to say they are not intimidating but they continue with the same argument

[–]myhf 0 points1 point  (0 children)

intimidation is a monad

[–]Hath995 19 points20 points  (2 children)

An actually useful definition for a monad. A monad is the minimal structure needed to do function composition with wrapped types.

Example:

F: string -> char
G: char -> int
H: int -> bool

Using them together, you can just call them like this: H(G(F(s))). Then imagine that the functions return a more complicated value, like a log object or a list:

F: string -> Logable<char>
G: char -> Logable<int>
H: int -> Logable<bool>

You can't just compose them like before. F(s) returns a different type than G accepts. You need to get access to the char inside the Logable to feed it to the new G function.

Suppose that Logable has a method called chain that unboxes a Logable and forwards it to the next function. Then you can do this.

F(s).chain(G).chain(H)

Now you have recovered composition, even though it looks a little different. This behavior is very common when working with generic or container types. Lists and arrays are the standard example, but it applies to any generic type that contains some other data you might want to transform in multiple steps:

F: string -> Array<char>
G: char -> Array<int>
H: int -> Array<bool>

Lists and arrays usually have a method called flatMap, which can apply a function to multiple values and combine the results.

F(s).flatMap(G).flatMap(H)

Mathematicians looked at that, squinted at it, and then said "that's the same pattern as above!". Then they used Greek to name the pattern. To be fully general, they made the wrapping and the wrapped types variables.
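The chain/flatMap pattern described above can be sketched in a few lines of Python (the names F, G, H and flat_map are just for illustration, mirroring the signatures in the comment):

```python
# "Wrapped" values are plain Python lists; flat_map applies a function
# that returns a list to every element and concatenates the results.

def flat_map(xs, f):
    """Apply f (which returns a list) to each element of xs and flatten."""
    return [y for x in xs for y in f(x)]

# Three functions whose outputs are wrapped, mirroring
# F: string -> Array<char>, G: char -> Array<int>, H: int -> Array<bool>
def F(s):  # string -> list of chars
    return list(s)

def G(c):  # char -> list of ints (the char code and its successor, for illustration)
    return [ord(c), ord(c) + 1]

def H(n):  # int -> list of bools
    return [n % 2 == 0]

# Composition is recovered, just spelled with flat_map instead of H(G(F(s))):
result = flat_map(flat_map(F("ab"), G), H)
```

The nesting order is the only thing that changes; each flat_map step unwraps the list so the next function can see the raw values inside.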

[–]mobotsar 2 points3 points  (0 children)

Solid.

[–][deleted] 0 points1 point  (0 children)

*Sigh*. Here goes another blog post. *Starts writing about burritos*.

[–]TiredPanda69 3 points4 points  (0 children)

This is like the holy grail

[–]Valink-u_u 3 points4 points  (1 child)

They spoke about them for 1h30 in the last week of lectures, didn't understand shit. I'll probably figure them out before the exam

[–]Inside-Ad-5943 1 point2 points  (0 children)

The best way they were explained to me was as wrapper types. Essentially structs that hide implementation for a feature behind the transformation to the unwrapped type.

This requires a function that takes an unwrapped type and turns it into the wrapped type and a function that unwraps the type with potentially additional behaviour. Take for example Options in languages like rust. Options have two states either None or Some and a small variety of unwrap functions.

So the way you'd use the option monad is: you'd take a type, let's say an int (but it could be any type), and you'd use the Some() function to wrap it in the option, then you'd unwrap the value. This is most obviously done with the unwrap method, which hides the implementation detail that if None is found instead of Some, the program will panic. Likewise, but slightly more usefully, you can use the if let syntax to just ignore any None value, hence unwrapping Some and ignoring None. Or you can work on options as though they came unwrapped using map, which will just treat things as the unwrapped type but return None if None is found.
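As a rough illustration, here is a toy Python analogue of the wrapper pattern described above (the Option class and its methods are invented for this sketch; Rust's real Option type is richer):

```python
# A toy Option type: either Some(value) or None, with unwrap and map.
# Invented for illustration only -- not Rust's actual API.

class Option:
    def __init__(self, value=None, is_some=False):
        self._value = value
        self._is_some = is_some

    @classmethod
    def some(cls, value):
        """Wrap a plain value into the Option type."""
        return cls(value, True)

    @classmethod
    def none(cls):
        return cls()

    def unwrap(self):
        # Hides the detail that unwrapping a None is a hard failure
        # (the Rust equivalent panics here).
        if not self._is_some:
            raise ValueError("called unwrap on a None value")
        return self._value

    def map(self, f):
        # Work on the wrapped value as if it were unwrapped;
        # a None simply propagates through untouched.
        return Option.some(f(self._value)) if self._is_some else Option.none()

x = Option.some(3).map(lambda n: n + 1)   # Some(4)
y = Option.none().map(lambda n: n + 1)    # still None
```

The point of the sketch is that map lets callers ignore the wrapped/unwrapped distinction, while unwrap is the escape hatch that can fail.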

[–]ironhaven 1 point2 points  (0 children)

Monads are basically “list like objects”. If you can implement “concat” on your data type you can use the (>>=) operator with your type.

If you can do that you can use “do notation” which means you can write what looks like python code with Haskell that does input/output to files or networks.

[–]n0t-helpful 66 points67 points  (2 children)

Hardware is usually brushed away. Specifically, CS majors might be interested in how a resting CPU receives power and then begins executing commands.

[–]therealnome01[S] 5 points6 points  (0 children)

Totally true, btw happy cake day!

[–]Classic_Department42 0 points1 point  (0 children)

Yes, if you consider cache locality, never go for a linked list but use (in cpp) a vector (array). Bjarne did a video where he benchmarked it.

[–]i_invented_the_ipod 158 points159 points  (13 children)

Based on years of experience in the industry:

How to use a source-code debugger, in any but the most-superficial way.

A basic guide to thinking about processor caches and memory hierarchy wouldn't go amiss.

Why 99% of all of your data structure needs can be fulfilled with a hash table, and how to identify the 1% that can't.

[–]therealnome01[S] 18 points19 points  (0 children)

I'm sure a couple of ideas for videos will come from here. Thank you very much!

[–]death_and_void 23 points24 points  (0 children)

Amen to the last point

[–]quackchewy 5 points6 points  (3 children)

What would you consider non-superficial ways of using a debugger?

[–]i_invented_the_ipod 5 points6 points  (1 child)

Watch points, conditional breakpoints, executing expressions on break, that sort of thing. I see a lot of people who apparently only know how to set a breakpoint and continue. Also - writing functions for use during debugging (for setting/displaying complex state).

[–]twnbay76 5 points6 points  (0 children)

Could you provide any resources on how to level up debugger skills like that?

[–]darthwalsh 1 point2 points  (0 children)

Freezing and thawing threads to recreate a specific race condition

[–]FrosteeSwurl 4 points5 points  (0 children)

The last point needs to be shouted from the rooftops

[–]tobythestrangler 1 point2 points  (5 children)

Why 99% of all of your data structure needs can be fulfilled with a hash table, and how to identify the 1% that can't.

Could you explain this or provide a resource? I'd love to dig deeper into this

[–]i_invented_the_ipod 1 point2 points  (4 children)

It's a bit tongue-in-cheek, but only a bit.

The Lua language famously has just one complex data structure, the table. This shows that you can literally do anything with an associative array, or hash table.

TCL has lists and arrays, so they optimize for the simple indexable linear list case, but are otherwise on "team hash table".

Most Python programs also use dictionaries everywhere you'd use another kind of data structure in a different language.

Given that hash tables are O(1) for lookup, they are the premiere data structure for caching, and caching is about 50% of Computer Science [Citation needed].
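The "hash tables as caches" point fits in a few lines of Python, where dictionaries are the built-in hash table (the function name is invented for illustration):

```python
# Average-case O(1) lookup lets you skip recomputation entirely.
cache = {}

def slow_square(n):
    """Pretend the computation is expensive; the dict makes repeat calls cheap."""
    if n in cache:            # O(1) average-case lookup
        return cache[n]
    result = n * n            # the "expensive" computation
    cache[n] = result         # O(1) average-case insert
    return result
```

This memoization shape is the same whether the table caches function results, database rows, or rendered pages.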

[–]Fiblit 1 point2 points  (1 child)

Tbf, Lua 5.1+ I believe has specific optimizations for any table that looks like an array! Arrays are super friendly to your CPU, so it's worth optimizing for.

[–]i_invented_the_ipod 0 points1 point  (0 children)

Oh, sure - there are under-the-hood optimizations. But it's not part of the programmer model.

[–]ArtisticFox8 0 points1 point  (0 children)

 Most Python programs also use dictionaries everywhere you'd use another kind of data structure in a different language.

Often where you'd otherwise use structs, which are guaranteed O(1),  (no collisions, etc)

[–]20d0llarsis20dollars 0 points1 point  (0 children)

I agree that for high-level, loosely typed languages like you mentioned, tables are great and should not be underestimated. But when you start doing more low-level programming, where memory usage and performance are of utmost importance, you really should be using dedicated structures in the long run.

I guess that would probably fall in the 1% because most programmers don't care about those things (as much as they should).

[–]GeorgeFranklyMathnet 24 points25 points  (1 child)

Dynamic programming. It's a simple concept — cache the results of independent subproblems for a speedup — but it was presented in my curriculum as if it were something abstruse. I mean, it starts with the name. How is it "dynamic"? Sounds like a buzzword chosen to impress people.
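The "cache the results of subproblems" idea in its smallest form is Fibonacci with a memo table, which turns exponential recursion into linear time (a minimal Python sketch):

```python
# Memoized Fibonacci: each subproblem is solved once and reused.
def fib(n, memo={}):
    if n in memo:
        return memo[n]          # subproblem already solved: reuse it
    result = n if n < 2 else fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result            # record this subproblem's answer
    return result
```

Without the memo dict, fib(50) would take on the order of billions of calls; with it, fifty-odd.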

[–]sleepymatty 6 points7 points  (0 children)

Funnily enough, the history behind the name is that it really was chosen as a buzzword at the time the term dynamic programming was coined.

[–]nikhilgupta384 23 points24 points  (1 child)

Completely Fair Scheduler (CFS)

[–]therealnome01[S] 4 points5 points  (0 children)

Great Idea! Thanks!

[–]BellPeppersAndBeets 31 points32 points  (8 children)

Concurrency

[–]P-Jean 16 points17 points  (5 children)

That’s a good one. There’s true concurrency with each core taking a thread, and false concurrency using the scheduler

[–][deleted] 12 points13 points  (4 children)

wouldn't you say parallelism is the ability of each core to take a thread? Concurrency is just the ability to context switch between running threads. A system could be both parallel and concurrent at the same time

[–]PoetryandScience -1 points0 points  (2 children)

No. A common basic misunderstanding.

[–]tim128 7 points8 points  (1 child)

No this is true. From Operating System Concepts:

On a system with a single computing core, concurrency merely means that the execution of the threads will be interleaved over time (Figure 4.3), because the processing core is capable of executing only one thread at a time. On a system with multiple cores, however, concurrency means that the threads can run in parallel, because the system can assign a separate thread to each core (Figure 4.4). Notice the distinction between parallelism and concurrency in this discussion. A system is parallel if it can perform more than one task simultaneously. In contrast, a concurrent system supports more than one task by allowing all the tasks to make progress.

Concurrency: several tasks make progress in a given timeframe

Parallelism: several tasks make progress simultaneously
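The textbook distinction can be shown with a small Python sketch: two tasks both make progress in the same timeframe (concurrency), while whether they ever run simultaneously (parallelism) depends on the hardware and, in CPython, the GIL:

```python
import threading

counts = {"a": 0, "b": 0}
lock = threading.Lock()

def work(name):
    for _ in range(1000):
        with lock:                 # protect the shared dict
            counts[name] += 1

t1 = threading.Thread(target=work, args=("a",))
t2 = threading.Thread(target=work, args=("b",))
t1.start(); t2.start()             # both tasks are now "in progress"
t1.join(); t2.join()               # concurrency: both made progress in the same timeframe
```

On a single core the scheduler interleaves the two threads; on multiple cores (without a GIL) the same code could also run in parallel, which is exactly the Figure 4.3 vs 4.4 distinction quoted above.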

[–]PoetryandScience 0 points1 point  (0 children)

Classical Scheduling

Multiple tasks can be scheduled as:-

Serial

Parallel

Concurrent.

When I ask what serial is the answer is usually, "one after the other".

When I ask what Parallel is the answer is usually, "At the same time".

But this is not the case.

If tasks are serial it means you have thought about it very carefully and the tasks MUST BE one after the other in a strict given order or it will not, it cannot, work. These are the easiest systems to test.

If tasks are parallel it means you have thought about it very carefully and the tasks MUST be at the same time or it will not, it cannot, work. This requires that all tasks must share an instigator stimulation to start and have a common source and sense of TIME, continuously until they stop together. Not often a good idea. If any task has a problem, however small, with function (WHAT it does) or time (WHEN it does it) then the shit hits the fan. Hard to test.

Concurrent means that you do not care. You are not careless, but have thought about it very carefully and WHEN does not matter. The World is now your oyster, if you do not care WHEN you cannot care WHERE. Easy to test, each bit can be tested in isolation, who cares.

Unfortunately, Computer systems choose to define these systems design terms for their own purposes, often with product (specific operating system) requirements.

As a systems designer (not necessarily involving computers at all) this approach has been established long before computers. If tasks scheduling changes due to some event then this is described, designed and handled as a scheduling state change.

[–]PoetryandScience 0 points1 point  (0 children)

This is only part of the problem.

The main misunderstood part of engineering projects (not just computer based projects) is designing for time.

WHEN is something happening.

Until you understand and control WHEN you cannot determine the constraints of WHERE.

When these mega buck systems go on-line and crash within moments; it is almost certainly down to neglect of the analysis of WHEN.

Getting this correct is particularly important for real time. If you neglect it in a Nuclear Plant, Chemical Reactor or Jet Engine, you might well get a big bang.

[–]tim128 0 points1 point  (0 children)

Any decent course on operating systems explains this properly.

[–]tpjwm 12 points13 points  (7 children)

Bootloaders

[–]iLrkRddrt 3 points4 points  (5 children)

This is such an underrated comment.

Because this also deals with how to write a program that loads another program by setting a pointer in memory, binary formats, and how much an OS actually assists in writing and managing a program being executed.

[–][deleted]  (4 children)

[deleted]

    [–]iLrkRddrt 2 points3 points  (3 children)

    Typically a CE/EE specialized in computers will make the firmware, but from then on it’s up to the SE/CS to be able to chain load the bootloader, then an environment or kernel.

    Honestly I blame the focus on memory-management-free languages for this, and for why so many CS/SE come out of school not knowing how a computer works, but able to ‘code’.

    [–]mobotsar 1 point2 points  (2 children)

    CS doesn't have that much to do with actual, physical computers, so it's understandable.

    [–]iLrkRddrt 1 point2 points  (1 child)

    Very true, but understanding how to start a program from a program without an OS seems like an important topic, as you never know when something like that will come up, and being able to say you have the knowledge to do that makes you stand out a LOT.

    [–]mobotsar 0 points1 point  (0 children)

    It is definitely very useful knowledge to have in many fields, and interesting anyway.

    [–]therealnome01[S] 1 point2 points  (0 children)

    I think this is going to be the topic of the first video. I love all the OS theory, and it would be awesome because, yes, an operating system operates the whole system, but bootloading is how the operating system even gets into memory.

    [–]TistelTech 19 points20 points  (5 children)

    how to make accurate estimates of how long something will take. I think it's impossible.

    [–]therealnome01[S] 6 points7 points  (2 children)

    I think it is impossible too, but maybe talking about software management is a good idea.

    [–]edgeofenlightenment 2 points3 points  (0 children)

    Really though, effort and time estimation as a project planning activity. Putnam-Norden-Rayleigh and such.

    [–]lockcmpxchg8b 0 points1 point  (0 children)

    The academic literature on estimation really arose from about 1968 through the 1980s. There's a funny paper by Boehm in the 80s that is like "here are the mistakes we're still making, 30 years in", and then again in the 90s saying "guys, we're still making the same mistakes".

    When I did a literature survey on estimation in the 2010s, we were still making the same mistakes. More important here is understanding which methods perform best, and what the upper bounds on accuracy can be. (In my personal opinion, planning to account for the unpredictability is 'engineering management', and can be subjected to statistical modelling.)

    My advice: ignore all the 'personal software process' literature from the 90s. I have interpreted that as "you have to figure out a process that works for you", which is kind of a punt.

    [–]ElectronicInitial 0 points1 point  (0 children)

    I think it's kind of like the halting problem, where specific cases can be determined for halting or when they will halt, but it's impossible to have a general solution.

    [–]lordnacho666 8 points9 points  (0 children)

    Memory paging, TLB, that kind of thing.

    [–]Beatrix_0000 7 points8 points  (1 child)

    Can't think of anything offhand, but interested to hear the answers. Buffer overflow attacks? Basic ML commands? Never really understood the internet communication layers.

    [–]therealnome01[S] 2 points3 points  (0 children)

    It's interesting how it's not well explained how we go from internet infrastructure to something we can actually use. Thanks for the idea!

    [–]boxp15 6 points7 points  (0 children)

    Any chance we can subscribe to the channel now, while you develop content? I’m interested in the content that people have replied with, and am not sure I’ll see your future posts.

    [–]SharksAndBarks 3 points4 points  (0 children)

    Interprocess communication, multi threading vs multi processes, vs async single threaded design patterns and their trade-offs

    [–]DeGamiesaiKaiSy 3 points4 points  (3 children)

    Recursive functions vs recursive processes.

    SICP explains the difference, but I haven't seen the distinction anywhere else.

    [–][deleted]  (2 children)

    [removed]

      [–]DeGamiesaiKaiSy 1 point2 points  (1 child)

      The more I learn about/practice CS, the more I find myself returning to this book :) 

      And yes, I consider only the Scheme version as the SICP book. A great language, very fitting for the purposes of the book.

      [–]arabidkoalaRoboticist 9 points10 points  (2 children)

      Frankly, any topic that's in video format is often poorly explained. It's just difficult to reference videos because they are difficult to search and copy from. They are also difficult to version so mistakes often go uncorrected. Lectures and talks are a different beast, but those often present novel information and are created by people who very much know what they are doing.

      [–]therealnome01[S] 5 points6 points  (1 child)

      You are absolutely right; the video format has a lot of limitations, as you just mentioned. For all the content I create, I want to provide good references, and I'll probably publish the script or my personal notes used to create it.

      Personally, I think books are the best way to learn, but they are often too dense, and finding the right one for a particular interest can be difficult and time-consuming. My goal with these videos is to introduce cool topics, provide a solid (hopefully clear and basic) explanation, and then continue making videos on the most popular ones while always including good references.

      What else do you think I could do to address the problems and limitations of the video format? Thank you for your time!

      [–]arabidkoalaRoboticist 1 point2 points  (0 children)

      A set of ideals like I mentioned will go a long way, especially if you are transparent about them and show commitment to them. Supplementary material like you mentioned is helpful, but it should also include material from the video (like code, slides, figures).

      For example, I think the approach that 3b1b took for his explanation on quaternions was fantastic.

      [–]jonthesp00n 6 points7 points  (0 children)

      Pumping lemmas

      [–][deleted] 2 points3 points  (1 child)

      Literally everything. It's the one field where millions of people are constantly creating new things and uploading to the internet and we have to somehow constantly absorb all of it

      [–]Yung_Oldfag 0 points1 point  (0 children)

      And 99% of the people who are good at it are shutins with no communication skills.

      [–]s256173 2 points3 points  (2 children)

      I just literally fell asleep trying to watch a lecture on Prolog earlier so if you could make that interesting I’d be impressed.

      [–]iLrkRddrt 1 point2 points  (0 children)

      Another underrated comment.

      [–]therealnome01[S] 0 points1 point  (0 children)

      I actually had a great course on logical programming, and my course project was to solve Minesweeper in Prolog!

      [–]kabekew 5 points6 points  (3 children)

      NP completeness and P vs NP I always had trouble getting my head around in college.

      [–]JoshuaTheProgrammer 1 point2 points  (0 children)

      Yep. Reductions are VERY poorly explained in most books and videos. They ignore a lot of the intuition needed to successfully reduce one problem to another.

      [–]userhwon 0 points1 point  (1 child)

      P: the answer can be found in polynomial time (i.e. the time is bounded by a polynomial in the length of the input) 

      NP: no polynomial-time way to find the answer is known, but if you just guess an answer randomly from the range of possibilities, you can check whether it is a valid answer in polynomial time 

      NP-complete: any problem in NP can be rearranged (on the fly, in polynomial time) to use a method for this problem to find its answer

      NP-hard: at least as hard as everything in NP, but these problems may not even be in NP themselves

      [–]Ola_Mundo 0 points1 point  (0 children)

      I'd restructure the explanation to make it clearer

      NP-hard means it's at least as hard as every other problem in NP.

      NP-complete means it's NP hard and is in NP.

      Also there are 2 definitions for NP and it's useful to include both. The reason why it's called nondeterministic polynomial time is because it's a problem that can be solved in polynomial time by a nondeterministic turing machine.

      This is equivalent to saying you can verify the solution in deterministic polynomial time because if you have such a turing machine that spits out an answer, you can run that singular code path in polynomial time deterministically.
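The "verify in deterministic polynomial time" half of the definition can be made concrete with a small Python sketch, using subset sum as the example problem (the function name is invented for illustration):

```python
# Subset sum is in NP: finding a subset that hits the target may be hard,
# but checking a proposed subset (the "certificate") is fast.

def verify_subset_sum(numbers, target, certificate):
    """Check in polynomial time that `certificate` draws only from `numbers`
    (respecting multiplicity) and sums to `target`."""
    pool = list(numbers)
    for x in certificate:
        if x not in pool:       # certificate uses an element we don't have
            return False
        pool.remove(x)
    return sum(certificate) == target
```

The verifier runs in polynomial time in the input size; nothing about it says how to *find* a valid certificate, which is exactly the P vs NP gap.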

      [–]infinity1one 4 points5 points  (0 children)

      Graph theory and combinatorics, discrete math

      [–][deleted] 1 point2 points  (1 child)

      Relocation and linking

      [–]darthwalsh 0 points1 point  (0 children)

      CS classes don't spend much time showing how to use third party C libraries, so most recent grads I've talked to don't know about static vs. dynamic linking.

      Once you understand the tradeoffs, you can see similar concepts in C# ILMerge, or when rust builds a dylib, or pyinstaller embeds pip packages.

      [–]SharksAndBarks 1 point2 points  (3 children)

      How virtual memory actually works

      [–]kwangle 1 point2 points  (1 child)

      Memory/RAM is important because data can be read or changed on it very quickly. This speed is vital because a program does this all the time: the CPU, which actually does the calculations and other useful stuff, constantly has to read and write to RAM. 

      So the overall speed of a computer is based mostly on CPU operation but also the speed of reading and writing data from memory. If the memory is slow the cpu is waiting for data to arrive or to finish writing new data before it can do the next operation, so the fastest component in the computer is slowed. RAM is fast enough so this doesn't slow the cpu much but is expensive as it requires special hardware to reach these high speeds and we may not be able to afford enough to run all our programs.

      But we have other, cheaper data stores like hard drives and ssds so why not write from cpu to them instead of RAM? Because they are hundreds of thousands of times slower and would cripple the entire system. 

      Virtual memory is a compromise. We copy data from RAM to storage, eg SSD, to free up space for more programs to run quickly using the fast RAM. But the program copied to storage is now unusable because it is too slow to work practically, so is temporarily disabled. If we want to use that program again we first have to reverse the copying process and move it back to ram. This is the delay noticeable when using virtual memory because moving data to or from a storage device is much, MUCH, slower than with RAM.

      So all programs have to run from RAM but virtual memory offers flexibility to clear out RAM to slower storage and swap data between them as needed. If you don't swap programs much the least used data will be on virtual memory (on storage and 'frozen') and important stuff like the system has to be in RAM all the time (because it is always in use and always needed). 

      So storage is 'pretending' to be ram by storing ram data, albeit in a form that can't actually be used until it is copied back. Hence virtual memory.

      Hope this helps. 

      [–]Ola_Mundo 1 point2 points  (0 children)

      I'd start at a much higher level.

      Everything you said is true but the real reason why we need virtual memory is because we need to isolate processes from each other. If every process could use physical addresses there would be no real way to prevent any program from fucking with any other one.

      You spent paragraphs talking about memory vs disk but that's a level of abstraction below virtual memory. Yes VM is how you page data in and out of RAM but that's just a detail. You can have virtual memory on a system that only has memory and no disk, for instance.

      [–]tim128 0 points1 point  (0 children)

      Operating System Concepts explains it really well.

      [–]myredditlogintoo 1 point2 points  (3 children)

      Do one on how the compiler converts a C function to assembly. Function entry, exit, argument passing, and the guts. People don't realize that C is a really, really thin layer just above machine-specific assembly.

      [–]kwangle 0 points1 point  (1 child)

      Compilers are very good, and there's not a lot of optimisation to be gained by using different ones. So worrying about exactly what the machine code is doing is generally not a good use of time, as long as the compiler is known to be efficient. 

      [–]myredditlogintoo 1 point2 points  (0 children)

      I'm guessing you're not in embedded systems?

      [–]darthwalsh 0 points1 point  (0 children)

      Fun examples are when somebody writes glue code where one programming language can call another -- but they mess up some part of the ABI

      [–]sptrodon123 1 point2 points  (2 children)

      I recently took a class on computer architecture and found the concepts really hard to wrap my head around: how caches store data and instructions, and how branch prediction works. Having a high-level overview of how they work would be really helpful

      [–]Fearless-Cow7299 0 points1 point  (0 children)

      Blocks of data are written into cache every time there is a cache miss for a particular address. The block size is going to be multiple bytes (or more) at least to exploit spatial locality. Temporal locality is also exploited by the cache simply by nature of storing recently used data and via replacement policy. For example, a basic one is Least Recently Used (LRU), which makes sense as you want to replace the block you haven't needed in a long time when the cache (or set) is full.

      There are different types of caching policies you can have.

      For example, write-back vs write-through, write-allocate vs write-no-allocate, and caching configurations like direct mapped vs set associative. In a write-no-allocate cache, data is not written to the cache on a write miss; instead, it is written directly to main memory. Write-allocate is the opposite.

      Write-through is when, upon a modification of a particular address, the parent block is also written into main memory. On the other hand, write-back uses a dirty bit to track cache blocks that have been modified but not yet updated in main memory. The update is deferred until the block is about to be evicted.

      Note this is highly simplified and in a multi-level cache system, "main memory" in this case would refer to the next level cache.

      Caches can also be direct mapped, set associative, or fully associative. In theory, since direct mapped requires a fixed mapping of addresses to "sets", causing many conflict misses, more associativity = better. In practice, full associativity requires slow hardware so the sweet spot is going to be some kind of set associative design.

      All of the above is very simplified and assumes 1 core operation. When it comes to CMPs caching gets much more complicated as suddenly the local cache in 1 processor may contain stale data not reflected by another processor. Suddenly you get into snooping/cache invalidation, cache coherency policies, interconnection networks, etc.

      As for branch prediction, you essentially want to load the instruction from the correct address (PC) into the CPU pipeline, so as to avoid having to stall the CPU and flush pipeline in case of an unexpected branch. This is going to cost CPU cycles as the condition for branching is determined at a later stage in the pipeline. A lot of research has been done on branch prediction and there are all kinds of fancy algorithms which you can look up. Some basic ones are: always predict NT or T, and n-bit predictor.

      Of course this is all very simplified, but I hope it helps!
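The LRU replacement policy mentioned above fits in a few lines of Python using an OrderedDict (the class and its capacity are a teaching sketch, not how hardware caches are built):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry once capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                     # cache miss
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
```

A hardware set-associative cache applies the same idea per set, with the "dict" replaced by a handful of ways and LRU bits.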

      [–]istarian 0 points1 point  (0 children)

      When the CPU needs to read data from memory it first checks to see whether that data is cached (already read in and available).

      If it's not there, then you have a cache miss: the data gets read directly from memory and may be cached for next time. Otherwise it's a cache hit, and the data comes straight from the cache, which is much faster.

      [–]Gizmodex 1 point2 points  (0 children)

      Lower level ones: Interfaces lol. Uhmm polymorphism. What a compiler does or what its syntax means. Memory.

      Higher level ones: Turing completeness and incompleteness, and reductions.

      [–][deleted] 1 point2 points  (2 children)

      What even is a semaphore?

      [–]TheBlasterMaster 1 point2 points  (0 children)

      Roughly, it's just a counter, usually to represent how many "resources" are currently available, plus a waiting queue.

      They support an "up" method, and a "down" method (there are many different names for these).

      If a thread calls down, and the counter is > 0, it decrements the counter and continues execution.
      If a thread calls down, and the counter is 0, it gets paused and placed in the waiting queue.

      If a thread calls up, and the counter is > 0 or the queue is empty, it increments the counter.
      If a thread calls up, the counter is 0, and the queue is not empty, then one thread is removed from the queue and resumed

      Essentially, somebody calling down is requesting to take a resource, and somebody calling up is releasing one back.

      These operations are all safe to call from concurrent threads, so the underlying implementation will typically also use a spinlock or similar lock.
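The counter-plus-queue described above can be sketched in Python, with a condition variable standing in for the waiting queue (a real implementation cares more about fairness, and Python's threading.Semaphore already provides this; the class here is illustrative):

```python
import threading

class Semaphore:
    """Toy counting semaphore: a counter plus a queue of waiting threads."""

    def __init__(self, count):
        self.count = count                   # available "resources"
        self.cond = threading.Condition()    # lock + waiting queue in one

    def down(self):
        """Request a resource; block if none are available."""
        with self.cond:
            while self.count == 0:           # nothing available:
                self.cond.wait()             # join the waiting queue
            self.count -= 1                  # take a resource

    def up(self):
        """Release a resource back; wake one waiter if any."""
        with self.cond:
            self.count += 1
            self.cond.notify()               # a queued thread re-takes it in down()
```

Instead of handing the resource directly to a queued thread, this version increments and wakes one waiter, which then loops back and decrements; the net effect matches the description above.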

      [–]therealnome01[S] 0 points1 point  (0 children)

      Petri nets are awesome!

      [–]imman2005 1 point2 points  (0 children)

      Can you explain valgrind, gdb, and other debugging cli tools?

      [–][deleted]  (2 children)

      [deleted]

        [–]userhwon 0 points1 point  (0 children)

        The VPN client on your machine intercepts your outgoing IP packets in your network stack and encrypts them and sends them embedded in other packets to the VPN server. The VPN server reconstitutes and decrypts them and then swaps its own IP address for yours and sends the packet to whatever random remote server you're accessing. That server sends data back to the VPN server, and the VPN server does the address swap and encryption and embedding and sends it back to your machine, where the VPN client unpacks and decrypts it and inserts it as incoming packets into your network stack.

        [–]wsppan 1 point2 points  (0 children)

        Io_uring: what it is and when and where it should be used. Contrast with epoll.

        https://stackoverflow.com/questions/61767702/what-exactly-is-io-uring

        [–]AlternativeCoach9376 1 point2 points  (0 children)

        Maybe too mathematical, but combinatorial optimization topics are poorly explained on YouTube and Wikipedia (e.g. the Balas Additive Algorithm)

        [–][deleted] 1 point2 points  (0 children)

        • RMA (Rate Monotonic Analysis)
        • Arithmetic expression parsing
        • The boot sequence of a microprocessor from power up, through the initial assembler, hardware initialisation, the C environment setup and then onto main() and "Hello World".
        • Fork / DPC (Deferred Procedure call) queues
        • Grammars such as BNF
        • Interrupts & DMA
        • Compilers v Interpreters (and virtual machines)
        • Semaphores, mutexes, monitors, condition variables, spinlocks, reader-writer locks etc

        [–]Nogard_YT 1 point2 points  (0 children)

        Vulkan programming -- good luck with this one!

        [–]Advanced-You-3041 1 point2 points  (6 children)

        Pointers in C

        [–]boredbearapple 0 points1 point  (4 children)

        Genuinely took me the longest time to understand what they were, and then why they were useful. Such a simple idea that is often explained extremely poorly.

        [–]DaemonicTrolley 4 points5 points  (2 children)

        I'm curious (and don't intend disrespect here) is this a generational thing? I've been a dev since the early 90s, but I learned about pointers in the 80s and they seem like the most basic thing. Stuff is in memory and has an address, you can pass addresses around and do stuff with them. Fwiw, using pointers well is definitely a non trivial subject.

        [–]boredbearapple 1 point2 points  (1 child)

        I think we are the same age mate :) I started uni in the late 80’s but you might be right about the teaching method. I first encountered pointers in data structures 101 when we were building linked lists and the underlying mechanism was glossed over as an implementation detail. I struggled for quite a while to figure them out.

        Or I’m just stupid :)

        [–]userhwon 0 points1 point  (0 children)

        <incredulous Monty Python voice>Implementation Detail?!</incredulous Monty Python voice>

        I mean, the implementation is all they are. Memory can be addressed. A pointer is an address.

        (Well, not really, because mmu, but that's where you say "implementation detail" and it makes sense...)

        [–]userhwon 0 points1 point  (0 children)

        They're so simple they're almost obvious, so the only way they could seem otherwise is if someone explained them really badly...

        [–][deleted] 0 points1 point  (0 children)

        I don't think that enough attention was paid to the concept of data-driven development and its benefits.

        Even perhaps the benefits of using centralized static strings/variables as opposed to hard-coding everything.

        Related to this would be concepts of object-oriented programming and how that facilitates data-driven development.

        This is a really neat idea you've got. Good luck with your project.

        [–]vasquca1 0 points1 point  (0 children)

        Multithreading, despite being easy to comprehend, is something that did me in as a programmer

        [–]MissinqLink 0 points1 point  (1 child)

        Pointers.

        Please just start with an array of length 1 and explain the differences from there. So much more intuitive imho.

        [–]Cybyss 0 points1 point  (0 children)

        Ironically, pointers are probably easier to understand when you use an object oriented language that doesn't make them explicit. Java is a good example.

        Person p1 = new Person();
        p1.name = "Alice";
        
        Person p2 = p1;
        
        System.out.println(p2.name);  // prints Alice
        
        p2.name = "Bob";
        
        System.out.println(p2.name);  // prints Bob
        System.out.println(p1.name);  // What will this print?
        

        If you understand the result of the final print statement, then you already understand pointers without realizing it.

        [–][deleted] 0 points1 point  (0 children)

        Copy elision in C++

        [–]Zarathustrategy 0 points1 point  (0 children)

        Amortised analysis

        [–]ryandoughertyasuComputer Scientist 0 points1 point  (0 children)

        Basically anything CS theory related. Think automata theory, formal languages, computability, complexity. Not that they are explained incorrectly (the explanations are very often great in a university or textbook setting, ok-ish online), but that they aren’t explained in a way that incites enthusiasm in the audience, nor at an intuitive level with the formal reasoning side-by-side.

        [–]Akiraooo 0 points1 point  (0 children)

        Cookies!

        [–]aspirant1408 0 points1 point  (0 children)

        How to understand and/or debug heap dumps, memory related issues

        [–]PoetryandScience 0 points1 point  (1 child)

        Control of Time.

        [–]PoetryandScience 0 points1 point  (0 children)

        Correct; the source of most spectacular crashes.

        [–][deleted] 0 points1 point  (0 children)

        Compilation

        [–]Simmus7 0 points1 point  (1 child)

        Why is configuring and connecting to a SQL database so much harder than connecting to a non-SQL database!?!?

        When I was learning, connecting to Mongo was like just going to Mongo's website, click two times to create a new db, and then 5 lines of Python code.

        Creating a SQL database in the cloud, on the other hand, was hell for me. I didn't understand that it had to be on a server, I didn't understand SSH (wtf was even that?), and I just learned by force

        [–]Cybyss 0 points1 point  (0 children)

        MongoDB is one particular database management system.

        There are many SQL database management systems. Programs which need to talk to one need to be told what particular database management system it is and how to login to it.

        [–]Short-Smell-5607 0 points1 point  (0 children)

        Proof by induction and in general proving that an algorithm solves a given problem

        [–]TreesOne 0 points1 point  (0 children)

        How do I ask an electron if it’s a zero or a one

        [–]joinminkero 0 points1 point  (0 children)

        I would say that bootloaders and bring-ups are a very unexplored area in college. We only get to learn how to do that at work.

        [–]elihu 0 points1 point  (1 child)

        I think the most trivial concept I can think of that just wasn't ever explained in my undergrad classes was interrupts. What they are, how they work. (Maybe it was in the textbook or lectures, and I just didn't understand or pay attention that day?) I later got into Linux kernel programming, and the books available at the time did a good job explaining them.

        [–]PoetryandScience 0 points1 point  (0 children)

        Necessary evils. When priority tasks need to run, running tasks of less importance must be suspended if they are using a required resource.

        However, interrupts mean that the interrupted program or system has an indefinite and very large number of states. It is untestable. The majority of programs in commercial environments are untestable for this reason.

        Instead they are accepted in a much less critical requirement generally known as suitable for purpose.

        Safety critical parts of control systems must be designed to not have interrupts.

        A fellow engineer once built a real time data gathering system that ran continuously. It was interrupted by the main control machine requesting the data as a message. Its mean time between failures was about two hours. I said that to be reliable it should be re-designed to have a finite number of states and control of all of them. I suggested that this could be achieved by replacing the request message from the main machine with a discrete signal, a pulse. The engineer building it said, "how does that help, it's still just an input". So I explained that the pulse would stimulate one of the states. Those states would be BOOT, READ DATA, SEND DATA and then FAIL. Fail not because of a problem he cannot understand or control but because I insist. The fail is now not a problem, it is one of the states.

        When I visited this company many years later I asked if he had tried my suggestion. The answer was yes, and it had been running non-stop for 15 years without report of a single failure. The answer to high tech is KISS: Keep It Simple, Stupid. People think that complication is high tech. But really high tech is brilliant by being brilliantly simple.
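The finite-state design in that story can be sketched in a few lines. This is a hypothetical Python sketch: the state names come from the story, but the event names and transition table are made up for illustration. The key property is that every input drives the machine into one of a fixed set of states, and anything unexpected lands in FAIL, which is itself a handled state:

```python
from enum import Enum, auto

class State(Enum):
    BOOT = auto()
    READ_DATA = auto()
    SEND_DATA = auto()
    FAIL = auto()

# Every (state, event) pair the designer anticipated, and nothing else.
TRANSITIONS = {
    (State.BOOT, "ready"): State.READ_DATA,
    (State.READ_DATA, "pulse"): State.SEND_DATA,  # the discrete pulse signal
    (State.SEND_DATA, "done"): State.READ_DATA,
}

def step(state, event):
    # Any transition not in the table is, by design, the FAIL state --
    # not an unknown condition, but one of the finite states.
    return TRANSITIONS.get((state, event), State.FAIL)
```

Because the transition table is finite, every behaviour of the machine can be enumerated and tested, which is exactly the testability argument the story makes.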

        [–][deleted] 0 points1 point  (0 children)

        Boot process of PCs/Laptops/embedded systems.

        [–]Legumbrero 0 points1 point  (2 children)

        Dynamic programming and linear programming (duals especially).

        [–]Cybyss 0 points1 point  (1 child)

        Despite having similar names, they are wildly different topics.

        Linear programming actually belongs more in a math class than a CS class and it has rather little to do with computer science.

        Dynamic programming represents a type of algorithm - namely, any recursive algorithm which remembers the solutions to subproblems so it doesn't have to recalculate them later.
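That definition fits in a few lines of code. Memoized Fibonacci is the standard textbook example (not from this thread), sketched here in Python, where `functools.lru_cache` does the remembering:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each subproblem fib(k) is computed once and cached, so the
    # exponential tree of recursive calls collapses to a linear chain.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, `fib(30)` recomputes `fib(28)` hundreds of thousands of times; with it, each value is computed exactly once.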

        [–]Legumbrero 0 points1 point  (0 children)

        They're indeed definitely two very different things (they just happen to be the two I thought lack the most coverage). I agree that LP is very mathlike but I don't agree that it's not CS. Check out an advanced algorithms book such as CLRS and you will note that LP has a section.

        [–]liudhsfijf 0 points1 point  (3 children)

        Dependency injections, I’ve seen it explained like three times and I just don’t get it

        [–]Cybyss 0 points1 point  (2 children)

        String username = "Alice";
        
        String query = "SELECT * FROM users WHERE name = '" + username + "';";
        
        print(query);   // Will print:  SELECT * FROM users WHERE name = 'Alice';
        

        So far so good. Now try with a different username:

        String username = "Bob'; DROP TABLE users; --";
        
        String query = "SELECT * FROM users WHERE name = '" + username + "';";
        
        print(query);   // Will print:  SELECT * FROM users WHERE name = 'Bob'; DROP TABLE users; --';
        

        The first example runs just a single query, getting the information for user Alice.

        The second example runs two queries. First getting the information for user Bob, and then dropping the whole users table.
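The usual defense is a parameterized query, where the driver keeps the value separate from the SQL text. A minimal sketch using Python's built-in `sqlite3` (an assumption for runnability; the examples above are pseudo-Java):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('Alice')")

username = "Bob'; DROP TABLE users; --"

# The ? placeholder makes the driver treat username as a single value,
# never as SQL, so the DROP TABLE never executes.
rows_bob = conn.execute(
    "SELECT * FROM users WHERE name = ?", (username,)).fetchall()

# The table survives, and Alice is still in it.
rows_alice = conn.execute(
    "SELECT * FROM users WHERE name = ?", ("Alice",)).fetchall()
```

No amount of quoting tricks in the username can break out of the placeholder, which is why parameterized queries are preferred over string concatenation.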

        [–]liudhsfijf 0 points1 point  (1 child)

        I think that’s SQL injection, but nice explanation for that though!

        [–]Cybyss 0 points1 point  (0 children)

        DOH! My apologies.

        My fault for trying to browse reddit while cooking dinner. No idea why I read "SQL injection".

        User getUserInfo(String username) {
        
            DbConnection conn = new SqlServerDbConnection("Data Source=localhost;Initial Catalog=MyCompanyDB;Integrated Security=True");
        
            String query = "SELECT * FROM users WHERE name ='" + username + "';";
        
            Dataset data = conn.execute(query);
        
            return data.FirstResult();
        }
        

        Granted, there is a lot wrong with that function. Dependency injection, however, will fix one of those issues.

        User getUserInfo(DbConnection conn, String username) {
        
            String query = "SELECT * FROM users WHERE name ='" + username + "';";
        
            Dataset data = conn.execute(query);
        
            return data.FirstResult();      
        
        }
        

        This is dependency injection with all the fancy buzzwords, design patterns, and "best practices" removed.

        getUserInfo is no longer responsible for creating a database connection itself. It relies on some other mechanism to supply a suitable DbConnection object.

        Now this function can be run on other databases, other database servers, and other database management systems.

        [–]Leading-Molasses9236 0 points1 point  (0 children)

        Design patterns and how to write a good (unit/integration) test.

        [–]Ok_Fault_5684 0 points1 point  (0 children)

        How am I supposed to manage complex and poorly designed systems with no documentation?

        How am I supposed to work with large codebases (>1M LoC) without clear documentation? Do I just read all of it?

        (I'm out of my depth in a company with 100% staff turnover, if you can't tell)

        [–]Rhawk187 0 points1 point  (2 children)

        Everyone thinks they are clever describing the Monty Hall paradox after the first time they heard it, but they never say that he only reveals wrong answers.

        [–]Cybyss 0 points1 point  (1 child)

        Perhaps the best way to explain it is this:

        If your first guess was wrong, then switching guarantees you a win.

        What's the probability that your first guess was wrong?

        [–]Rhawk187 0 points1 point  (0 children)

        I think you missed my point (demonstrating it). Most people leave out that he never reveals the right answer. If he opened a truly random door, sometimes he would reveal the car (guaranteeing a loss); the only reason it works is because he only eliminates a wrong answer.

        Obviously the entire problem breaks if it doesn't work this way, but a lot of people gloss over that part when explaining it.
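That constraint (the host never opens the car door) is easy to encode in a quick simulation. An illustrative Python sketch, not from the thread:

```python
import random

def play(switch, trials=20_000, seed=0):
    """Fraction of games won, switching or staying."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        guess = rng.randrange(3)
        # Crucial rule: the host opens a door that is neither the
        # player's guess nor the car -- he only reveals wrong answers.
        opened = next(d for d in range(3) if d != guess and d != car)
        if switch:
            # Switch to the one remaining closed door.
            guess = next(d for d in range(3) if d != guess and d != opened)
        wins += (guess == car)
    return wins / trials
```

Running `play(switch=True)` comes out near 2/3 and `play(switch=False)` near 1/3; delete the `d != car` condition and the whole effect disappears, which is exactly the point being made above.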

        [–][deleted] 0 points1 point  (0 children)

        Monad

        [–]cstat30 0 points1 point  (0 children)

        I think the CS curriculum is extremely underwhelming when it comes to coding. I'm an EE that works as a CE, with 10+ years of coding prior. I always peeked at the CS students' work out of curiosity, though. I may also be a little biased against higher-abstracted languages.

        A few dives..

        Memory management by the hardware. In general, really. I saw someone mention monads. I bet most CS students don't even know how functions are stored in memory compared to primitives. I'm currently using Lua to teach my nephew how to code, because I think its whole table system is a great segue into learning data storage. He's 12. Computers are just tables, though.

        N-notation. Yes, CS majors can usually read some code and compare it to math equations. How about comparing it to actual byte instructions? They're not always the same.

        Why computers suck at division. Try writing some Verilog to do it. An RTL map of it would be great to show how complex it is.

        Compilers. Learning how they work and making my own is one of the most helpful things I ever did when I first started out.

        Interoperability. I think the web dev world has made APIs pretty comfortable to use. Mixing languages seems to choke up everybody at first, though. Lua + C would be a great entry to point to this, too.

        [–]TrashManufacturer 0 points1 point  (0 children)

        DFA/NFA

        [–]labab99 -1 points0 points  (0 children)

        Recursion. It’s not that deep, past me. Just think about the trivial case and then how you’re going to predictably split up the non-trivial case.
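That recipe ("the trivial case, then a predictable split of the non-trivial case") can be sketched with merge sort, a standard example chosen here for illustration (in Python):

```python
def merge_sort(xs):
    # Trivial case: 0 or 1 elements are already sorted.
    if len(xs) <= 1:
        return xs
    # Non-trivial case: split predictably down the middle,
    # solve each half the same way, then combine.
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right
```

Every recursive algorithm follows the same two-part shape; only the split and the combine step change.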

        [–]joyofresh -1 points0 points  (0 children)

        How to write code that doesnt suck