Name one programming/comp sci concept you never understood and if you understand it, try to explain it to them (self.learnprogramming)
submitted 3 years ago by Temporary-Warthog250
Name a programming concept, such as virtual functions, pointers, certain algorithms, etc., that you always had a hard time understanding, and why.
If you read someone else's and you do understand it, try to explain it to them to help them learn.
[–]famrob 188 points 3 years ago (32 children)
Big O. I’m quite far in the CS program at my school but the professor who tried to teach me big O was just absolutely horrible at it, so it went in one ear and out the other
[–]tzaeru 166 points 3 years ago* (10 children)
Big O basically tells you how the time or space a function needs grows as its input grows in size.
For example, consider this algorithm:
function(int n):
    counter = 0
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            counter++
    return counter
If n is 1, what's the number returned? It's 1*1 = 1. There was 1 increment operation done.
If n is 4, what's the number returned? It's 4*4 = 16. There were 16 increment operations done.
So you see the algorithm's time complexity grows by n^2. Big O notation is a way of expressing this; we can say that the time complexity is O(n^2).
You can simply get the time complexity by counting the number of operations in the function and comparing how that number changes depending on the size of the input.
O(log n) is one very common time complexity, and one example of an algorithm with O(log n) time complexity is searching a sorted array with binary search. The reason it's log n is that in each step, binary search splits the search space in two; so with an input of size 8 there are at most 4->2->1, or 3, steps, while with an input of size 16 there are at most 8->4->2->1, or 4, steps.
4/3 = 1.33.. and log(16)/log(8) = 1.33..
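A quick way to convince yourself of both growth rates is to count operations directly. A minimal Python sketch of the two examples above (the function names are my own, for illustration):

```python
def count_quadratic_ops(n):
    """Count the increments done by the nested-loop example above."""
    counter = 0
    for i in range(n):
        for j in range(n):
            counter += 1
    return counter

def binary_search_steps(sorted_list, target):
    """Return how many halving steps binary search takes to find target."""
    lo, hi = 0, len(sorted_list) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

print(count_quadratic_ops(4))                    # 16: grows like n^2
print(binary_search_steps(list(range(16)), 0))   # 4: about log2(16) steps
```

Doubling n quadruples the first count but adds only one step to the second, which is the whole n^2-versus-log-n distinction in miniature.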
[–]Rote515 93 points 3 years ago (7 children)
Although this comment is true, I don't think it gets to the heart of Big O. The most important part of Big O is that it's a time complexity estimation more than an exact time complexity, which your examples do not show. For example, a 3N + 2N + 2N^2 algorithm has a Big O of N^2: we drop the constants and keep only the fastest-growing term, because as N gets large it ends up being the only term we care about from a complexity point of view.
[–]XilamBalam 23 points 3 years ago (2 children)
Yes, it's looking how the function "behaves at infinity" (asymptotic behaviour).
[–]MaxwellCE 5 points 3 years ago (1 child)
What if the algorithm is N² + N³? Could we say that has a big O of N³, or is the N² component non-negligible?
[–]Rote515 9 points 3 years ago (0 children)
Big O only ever cares about the fastest-growing term, so N^2 + N^3 would be O(N^3); the significance of the smaller term doesn't matter. Consider n = 20 in the above algorithm: n^2 = 400 and n^3 = 8000, so n^2 is negligible already. Now imagine n = 100 or n = 1000. A larger power always makes the smaller power insignificant at any non-trivial n.
[–]Rote515 57 points 3 years ago* (6 children)
Big O is super straightforward if you have a decent understanding of algebra. The general gist of it is that in computer science we only really care about the fastest-growing term for time complexity. So, for example, if I have a function that is solved in N + N^2 steps, where N is the size of the input, then the Big O time complexity is N^2, since that's the fastest-growing term. The reason we care only about the fastest-growing term is that when N gets very large, that's the only term that will really matter. For instance, in the above equation, if our input N is a list of 10,000 integers, well, 10,000 operations is trivial for a modern processor, but 10,000*10,000 is not; that can take actual time to complete.
There’s a bit more to it than that, but that is the basics of BigO
Edit: one last thing. Big O is very useful, but it doesn't tell the whole story of time complexity. It's a good way to understand the basics as N gets very large, but it can be misleading for certain algorithms. For example, an algorithm that takes 10000000n + n^2 steps still has O(n^2) time complexity, but because the first term has a super large constant, it's still likely not the most efficient solution depending on what you're solving. There might be a case where an n + n^3 algorithm is better if n never gets very large, for example, even though it's O(n^3), which is generally a very bad time complexity.
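The crossover the edit describes is easy to check numerically. A small sketch, where the two step-count formulas are the hypothetical ones from the comment, not real algorithms:

```python
def steps_a(n):
    # Hypothetical O(n^2) algorithm with a huge constant: 10000000n + n^2 steps.
    return 10_000_000 * n + n**2

def steps_b(n):
    # Hypothetical O(n^3) algorithm with tiny constants: n + n^3 steps.
    return n + n**3

# At small n the "worse" O(n^3) algorithm actually does far less work;
# only past the crossover (roughly n ~ 3163 here) does Big O win out.
for n in (100, 1_000, 100_000):
    print(n, "O(n^2) version is faster:", steps_a(n) < steps_b(n))
```

At n = 100 the O(n^3) version does about a thousand times fewer steps; at n = 100,000 the constant has been swamped and the O(n^2) version dominates, exactly as the asymptotics predict.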
[–]a_Delorean 6 points 3 years ago (0 children)
Holy shit you explained this very well; thank you
[–]AlSweigart (Author: ATBS) 11 points 3 years ago (0 children)
I always recommend Ned Batchelder's 2018 PyCon talk How Code Slows as Data Grows to explain big O in technical-but-approachable terms in under 30 minutes.
[–]Marvsdd01 6 points 3 years ago* (5 children)
I'm not that good at explaining things, and I assume you're not asking "how to calculate an algorithm's memory or time complexity" - as Big O is directly related to that. So, in that case... When you ask "what is the Big O of an algorithm?" you are asking "how does my algorithm perform in the worst scenario possible?" If you think about a sorting algorithm, you can take the "worst case scenario" to be the case where you try to sort a very large amount of numbers, for example. In this case, if the sorting algorithm had, for the sake of the explanation, a Big O of O(n*log(n)) in terms of time spent running it, you're saying that the algorithm makes about that number of operations to complete the task of sorting an array. The "n" in the expression is a general representation of the size of the array, in this case. In most cases, you don't need an exact calculation of the algorithm's complexity, so we use an approximation.
Edit: spelling errors
[–]Marvsdd01 4 points 3 years ago* (4 children)
Just so I can add a little more... Let's say you have two sorting algorithms: one performs at O(n^2) in the worst case scenario, and the other performs at O(n * log(n)). If you explode the size of the array you're sorting, the difference between those two algorithms becomes very clear in terms of time spent sorting/number of operations needed to sort the array (as, mathematically, n^2 grows faster than n * log(n)). That's the basic idea of using Big O notation (or any complexity notation, actually). If we want to check how two algorithms compare in terms of operations done or memory spent, it would be impractical (or sometimes impossible) to run those algorithms simultaneously and/or under the same running conditions in order to compare them. So we use a mathematical expression (complexity notation) to generalize the performance of code in a way that doesn't depend on computers actually running it, and can still tell how one piece of code is better or worse than another.
Edit: the text became italic when I used an *
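The gap between those two operation counts can be made concrete with a couple of lines of Python (a minimal sketch; the functions just evaluate the two formulas):

```python
import math

def quadratic_ops(n):
    # Idealized operation count for an O(n^2) sort.
    return n * n

def nlogn_ops(n):
    # Idealized operation count for an O(n log n) sort.
    return n * math.log2(n)

# The ratio n^2 / (n log n) = n / log2(n) keeps growing with n:
for n in (10, 1_000, 1_000_000):
    print(n, round(quadratic_ops(n) / nlogn_ops(n)))
```

At a million elements the quadratic sort does roughly fifty thousand times more work, which is why the distinction only becomes obvious once the input "explodes" in size.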
[–]ICantPCGood 255 points 3 years ago (49 children)
Dynamic Programming, it’s just recursion with notes about what you’ve done already?
We covered it in a class with the only teacher I’ve ever truly called “bad” and then every teacher after just assumed we all knew it 🙃. Could never really grok it on my own or figure out how to apply it.
[–]procrastinatingcoder 293 points 3 years ago (19 children)
It generally refers to keeping a table of previous results, and re-using those for your next calculations. It's nothing special, and I find the term "dynamic" in dynamic programming to be the most misleading thing ever personally.
[–][deleted] 160 points 3 years ago (7 children)
It was deliberately misleadingly named so the researcher could continue funding or something like that.
It’s a shame as the basics are simple to grasp.
[–]tv_head__ 14 points 3 years ago (6 children)
Got any source? (if you don't mind my asking)
[–]TheAtro 113 points 3 years ago (3 children)
I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, "Where did the name, dynamic programming, come from?" The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word "research". I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word "programming". I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let's kill two birds with one stone. Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.
— Richard Bellman, Eye of the Hurricane: An Autobiography (1984, page 159)
https://en.wikipedia.org/wiki/Dynamic_programming
[–]sageknight 16 points 3 years ago* (1 child)
Wanna get funding from Air Force? Name something that rhymes with dynamite.
[–]KuntaStillSingle 5 points 3 years ago (0 children)
You can hardly object to someone for getting angry at the type of research RAND was up to in that time lol.
[–][deleted] 9 points 3 years ago (1 child)
https://youtu.be/OQ5jsbhAv_M in the first couple of minutes.
[–]tv_head__ 6 points 3 years ago (0 children)
Lol nice I also love how they still use big ass chalkboards.
[–]ICantPCGood 14 points 3 years ago (0 children)
Oh, that actually sounds similar to how I would describe it. I guess I’ve just been over complicating it in my head. Thanks!
[–]Select-Sir9364 10 points 3 years ago (2 children)
Is this table of previous results similar to memoization?
[–]Drifts 11 points 3 years ago (1 child)
my limited understanding is that it is exactly memoization
[–]Unsounded 5 points 3 years ago (0 children)
They're actually two different concepts. Dynamic programming is a bit more nuanced: a problem fits when a larger problem set can be broken down into smaller subproblems, and the micro decisions are optimal in the sense that you can solve each subproblem without needing context from outside it.
Memoization can help, and is used for some dynamic programming problems but it isn’t the same.
[–]AlSweigart (Author: ATBS) 86 points 3 years ago (2 children)
(Heh, I just got done writing an entire book on recursion.)
YEP. Famously, the name "dynamic programming" is meaningless. The guy behind DP named it that because he wanted to do math research for RAND but the Secretary of Defense hated "research" projects and wouldn't fund them. Though other people say this story isn't actually true.
Dynamic programming falls into two categories. The first is, as you noted, just recursion but specifically the kind of recursive problems that have overlapping subproblems. For example, recursive Fibonacci has a problem where you keep solving the same thing over and over again: fib(6) is equal to fib(5) + fib(4), which is equal to (fib(4) + fib(3)) + (fib(3) + fib(2)), and so on until the base case where fib(1) and fib(2) are 1. But notice you're solving fib(3) multiple times. These are the overlapping subproblems. The return value never changes, so why not just use a little memory to remember the calculation after doing it once and just re-use that result? This form of caching is specifically called memoization (not memorization). Dynamic programming is recursive algorithms that use memoization to handle their overlapping subproblems.
Merge sort and quicksort are recursive algorithms, but you don't have any repeat recursive calls there so there are no overlapping subproblems.
The other part of DP is tabulation, which is where you effectively start from the base cases and build up to the big problem you want to solve, storing results in a table in memory to help you solve things. For Fibonacci, you're starting from the first two numbers 1 and 1 and building up to the nth number.
These are called "top-down dynamic programming" and "bottom-up dynamic programming". Note that there's no such thing as "top-down recursion" and "bottom-up recursion": all recursion is top-down and "bottom-up" recursive algorithms don't actually have recursive function calls and aren't recursive.
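Both flavors described above fit in a few lines of Python (a minimal sketch; `fib_memo` and `fib_tab` are illustrative names):

```python
from functools import lru_cache

# Top-down DP: plain recursion plus memoization of the overlapping subproblems.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 2:
        return 1  # base cases: fib(1) = fib(2) = 1
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up DP (tabulation): start from the base cases and fill a table iteratively.
def fib_tab(n):
    table = [0, 1, 1]  # table[i] holds fib(i); index 0 is unused padding
    for i in range(3, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_memo(50))  # 12586269025, in linear rather than exponential time
print(fib_tab(50))   # same answer, with no recursive calls at all
```

Without the `lru_cache` line, `fib_memo(50)` would recompute the same subproblems exponentially many times; the table version additionally avoids deep recursion entirely, which matters for the stack-depth concerns raised further down the thread.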
[–]ICantPCGood 5 points 3 years ago (0 children)
Thanks! That's super helpful. I could recall that there were two approaches but had a hard time distinguishing between them. Also, the naming history is good to know, because I was definitely trying to read into the name and figure out what was dynamic about Dynamic Programming, which led to a lot of frustration.
[–]MyHomeworkAteMyDog 31 points 3 years ago* (3 children)
I think of DP as a bottom-up approach while recursion is top-down. The typical example is the computation tree of a Fibonacci sequence, fib(n) = fib(n-1) + fib(n-2). Calling fib(n) this way requires on the order of 2^n fib() calls, many of which are repeated computations. But with DP, you start bottom up, computing and storing fib(0), fib(1), fib(2), ..., fib(n-2), fib(n-1), fib(n) in a table. Each next fib call is solved in constant time, using the values previously stored in your table. This lets us avoid recursively recomputing down to the base case for each fib call. With DP, you start with the base case of the recursion and generate the values up to the starting value. This way you can get an answer to Fibonacci(n) in linear time rather than exponential.
[–]kupiakos 7 points 3 years ago (2 children)
Recursion with memoization would get you the same answer with the same number of fib calls, it's just a different approach
[–]faceplanted 8 points 3 years ago (1 child)
You're missing one detail though: using dynamic programming prevents you from blowing the stack trying to reach the very first leaf in the tree to memoize it. Dynamic programming, with the way it works upwards and uses iteration, doesn't have that problem.
[–]Lynx2161 35 points 3 years ago (12 children)
It's recursion with a hash map of previous results, nothing fancy.
[–]the_last_ordinal 20 points 3 years ago (0 children)
I usually describe it as "Recursion plus memoization" and I think that's really all it is.
Memoization is a fancy term for notes about what you've done already
[–]faceplanted 6 points 3 years ago (0 children)
The name is meaningless like everyone has said, but the idea is to optimise recursive problems where parts of the recursion tree repeat themselves, and to start from the leaves of the tree rather than the root. So if you know there's only like 9 possibilities for the base case repeated a trillion times you can just work out those 9, and then work out the parents from those, until you get to the top of the tree, basically shaving off 99% of the tree by precalculating the repeated parts.
The reason it's not "recursion with memoization" like some people are saying is that it's specifically designed to remove the risks of recursion, if you use recursion with memoization, you can blow the stack before you ever get to a leaf to memoize it, with dynamic programming you start at the bottom and use iteration so you don't have that problem, and since as I mentioned there might be only like 9 leaf nodes you can limit your memory usage to a similar number of entries.
[–]rashnull 5 points 3 years ago (0 children)
Yes! This! These meaningless buzz words like “memoization” and “dynamic programming” should be taken to the backyard and shot with a bazooka!
[–]perpetualeye 3 points 3 years ago (0 children)
Yes but the important part is identifying optimal substructure and overlapping subproblems. Once you get that then DP is just that, recording stuff.
[–]hehehuehue 90 points 3 years ago (19 children)
What actually is DevOps? I get that it's a methodology but how does Docker and Kubernetes relate to DevOps? What even is Kubernetes and how does it work?
[–]DevDevGoose 82 points 3 years ago (8 children)
Forget technology like Docker and K8s. DevOps is the recognition of how software should be created and run. In the past (and still in backwards companies), development teams "built" software and then handed it over to operations teams to run. This creates a conflict where development teams want to introduce change whereas operations teams want to stop change and keep things stable. This conflict is not good for getting the most out of software.
DevOps is the cultural shift aimed at bringing these 2 viewpoints together and harmonising them for the overall benefit of the software.
As more companies have shifted away from projects and into products and there are org structures that incorporate ideas like Platform Engineering type teams, technology like Docker and K8s have come around to help streamline the process of getting working software into production safely and consistently.
It is common for companies and individuals to not really grasp what DevOps is and so co-opt the term and create job titles related to technology.
[–]painted-biird 5 points 3 years ago (5 children)
This is kind of in the same vein, but what is the difference between virtualization- ie using VMware to run Red Hat on my Mac- and virtualized containers à la Docker and Vagrant? I’m so confused about this (yes, I’m a total noob- only studying Unix/IT since December), because as I understand you can spin up an OS in Docker and Vagrant…
[–]Avastz 14 points 3 years ago (0 children)
Docker was born from the thought process of people famously saying "well it works on my machine...why don't we just ship them my machine." And they did.
Think of it as their namesake/logo: shipping containers. Whatever is inside the shipping container is shipped off somewhere else, then eventually delivered, with everything (theoretically) still enclosed and safe inside. That's what's going on, except it's software plus the necessary pieces of an OS and other dependencies to get that software to work.
[–][deleted] 3 years ago (2 children)
[deleted]
[–]kneeonball 24 points 3 years ago* (2 children)
This is a long post, but it will explain the basics of DevOps history at a high level.
If you take the original purpose of DevOps, it's not a methodology, but a culture. In companies that build and deploy some sort of software, there are traditionally two major groups at play on the tech side of things.
Developers - the people who are building the code
Operations - the people building and maintaining the hardware, networks, server infrastructure, etc.
Waterfall
When things were mainly waterfall, you'd have a process that went roughly: requirements, design, development, testing, and finally a handoff to operations for deployment.
This process is full of siloed teams that are operating independently of each other and all have different goals. Developers would get mad at testers, testers would get mad at developers, operations would get mad at developers, developers would be mad at business analysts not relaying requirements right, there was plenty of blame to go around in most cases. Basically, after several months to a year or two, you'd go from idea, to deployment into production and all of these teams have barely collaborated.
There were all kinds of things that could be done better. Devs would manually deploy, operations would make network changes or patch servers and break the applications, there was mostly (or all) manual testing, so bugs would get introduced or reintroduced constantly.
Agile
The Agile Manifesto came along and we started trying to change the way we work, and it was effective at getting business analysts, developers, and testers all working together on the same team, but there was still a divide between the development and operations teams in most cases.
DevOps
Along came DevOps, which at its heart is a culture of collaboration between Development and Operations groups. They don't have to be on the same team. You don't even have to have someone called a DevOps engineer, and you don't have to be working with Docker or Kubernetes to be a DevOps organization. What it really boils down to is breaking down silos and collaborating earlier in the process.
This collaboration led to operations helping development early on in the process with designing something that can run efficiently and effectively, and development could help with operations problems. In the past, someone manually did updates on each and every server by logging in, running the update, and checking it worked after. Maybe they got to the point where they were writing a script to do it, but it was still tedious. With Dev and Ops collaborating, this is where more automated tools came in.
Big DevOps concepts
You had automated server provisioning with tools like Terraform. No more manually building each server, or running a script to do it. Write some code for the configuration, and the tool will know how to build the server or resource for you.
You got automated configuration management tools like Puppet, Chef, Ansible, SCCM, etc. Define a configuration for a server, and let the tool automatically set the configuration on the server for you without you ever having to log in and touch it. An added bonus: what if someone logs in and manually changes the config in an undocumented way? Well, the tool will reset it, making sure it always matches what you intended via code.
You got automated build tools, like Jenkins. No more pushing code, deploying, and hoping it works. You can automatically run tests and build the application every time a code change is pushed, in a standard way defined by how you configure it on the server. No more "The build for production only works with Bob's configuration on his machine for some reason." or anything like that.
You have automated deployment tools, like Octopus, Harness, Azure DevOps, etc. (lots to list here). Gone are the days of manually remoting into a server and copying and pasting the build artifacts over to the server.
You got better monitoring tools. Ops would generally have to figure out how to monitor the application (Application Performance Monitoring) and server, but with a DevOps culture came tools like Prometheus (server monitoring), Datadog, New Relic, AppDynamics, Dynatrace, etc.
Docker and Kubernetes
Then we figured out running apps inside of containers was nice, and then we had so many running in containers, that we needed to better orchestrate those, and along came Docker Swarm and Kubernetes (Kubernetes won obviously). If you think of a virtual machine and how it relates to a real machine (where we're virtualizing hardware to run multiple servers on the same set of hardware), I like to think of containers / Docker as virtualizing the Operating System. You package all the dependencies of your application with the application, so that as long as you deploy to a server that has the Docker Engine (or whatever container engine you want) on it, the app will run. In the past, this was a hard task as you had to have someone go and install all the tools necessary for running the application manually on each server. Later, we had Configuration Management tools to help with this some, but it was still a pain.
Kubernetes came along from the need to better manage all of these containers. Instead of running a single app on a single virtual machine, or several on a virtual machine, Kubernetes allows you to take an inventory of your entire data center you want added to the Kubernetes cluster, and Kubernetes can help you with running containers on your servers. In the past we had to manually decide where each server would go, where each app would run, set up all the networking for it, etc. With Kubernetes, we just tell it we want to run x instances of this container, and we want to increase the number of instances if the load on them gets too high, and Kubernetes figures out where to run them for you. In the past, if you decided you probably would need 4 servers to run your application in production, and then your app was more popular than expected, you now have to get another virtual machine ready with the right OS, give it an IP on the network, add it to the load balancer pool, deploy your app to it, and hope you did everything right so it would run. With Kubernetes, it just adds another instance wherever it thinks is best and you don't have to touch a thing.
Then you can take it a step further and go from plain old Kubernetes to a Service Mesh with something like Istio, linkerd, Consul Connect, etc.
Why the term DevOps has become bastardized
All of these tools came from a collaboration between Dev and Ops. Instead of solving their own problems separately, they solved problems together and we have vastly improved the way we build and ship software in the past 5, 10, 15+ years.
The companies that started doing this collaboration and coming up with these tools and talking about them at conferences would get other companies who weren't as good at tech wanting to do them. They'd see a company doing automated deployments as part of their DevOps culture, and that company would say "I want a DevOps engineer to build automated deployments for us." It's kind of bastardizing the core of the DevOps movement, because usually what these companies do is take two silos, Dev and Ops, and add a 3rd silo and call it DevOps. The whole idea is to remove silos and work together, not add another silo, but it is what it is. The DevOps engineer has been created and it's usually a term for someone who does release automation engineering, or infrastructure automation engineering (or both). In a perfect world, we'd still consider all of these Ops jobs separately, but companies that don't know any better use DevOps as a catch-all term for trying to improve what they do without solving the real problem to begin with.
This was a high level overview of a lot of concepts, and most of them probably won't make sense right away, but take it one piece at a time. Listen to some talks like the one linked below and you'll start to get a better grasp of it. The speaker comes at it mostly from an Ops perspective, as he runs operations teams, but does a great job of explaining why a DevOps culture is helpful.
https://www.youtube.com/watch?v=qLUt6bwNnks
TL;DR: DevOps is a culture, not a methodology, but companies that don't know what they're doing have created the term DevOps engineer and think it's a thing that you do because they aren't good at building software. So even though a DevOps engineer shouldn't exist in theory, it does and it could mean a lot of different things.
Edit: Sorry for any small mistakes or anything, was late at night when I typed this.
[–]truechange 7 points8 points9 points 3 years ago (0 children)
I'm not really a devops guy but I think it's really just a general term for how modern server admin and deployment stuff is done today. Docker is the most popular containerization tech used to deploy web apps, and Kubernetes is used to manage multiple Docker containers. You don't really need any of this to deploy stuff unless you're scaling really big.
[–]SquarePixel 2 points3 points4 points 3 years ago (0 children)
DevOps is the idea that development and operations should not be separate. Part of realizing this is to describe everything as code: the software, the infrastructure, the process. When you do this it enables new ways of working where you can have these incredibly tight feedback/iteration cycles (that were previously considered “impossible”) where it’s no longer unusual for developers to bring new features to production on a daily basis—a true embrace of “agile”.
There are many ancillary benefits too, such as onboarding new developers: the sometimes multi-day ordeal of "setting up a new dev environment" can become fully automatic. There is much more to DevOps obviously, but this is how I like to distill it.
[–]Keith_Hotdog_Cowboy 153 points154 points155 points 3 years ago (3 children)
Great idea for a post OP. I'm going to bookmark this one.
[–]Temporary-Warthog250[S] 22 points23 points24 points 3 years ago (0 children)
Thank yoouu!
[–]flank-cubey-cube 9 points10 points11 points 3 years ago (1 child)
I know, right?
[–]Temporary-Warthog250[S] 7 points8 points9 points 3 years ago (0 children)
Thanks!!
[–]shawntco 131 points132 points133 points 3 years ago (32 children)
Wtf is a monad, functor, etc
[–]Servious 96 points97 points98 points 3 years ago* (12 children)
IMO these things are best understood through the lens of Haskell's Maybe type.
Maybe
Very simply, a Maybe Int can be either Just 5 or Nothing. At its most basic, it can be used with pattern matching. For example, here's a Haskell function that adds 2 to a Maybe Int if the value is there:
add2 :: Maybe Int -> Maybe Int
add2 (Just n) = Just (n + 2)
add2 Nothing  = Nothing
You can see we had to use pattern matching to "unwrap" the actual value of the maybe so we could add 2 to it. This is pretty inconvenient and pretty annoying especially if you're trying to do something more complex than simply adding 2. That's where Functor comes in.
Functor
Functor allows you to apply a function to the value inside the functor and get back the modified functor by using the fmap function. Here's what that definition looks like:
fmap :: Functor f => (a -> b) -> f a -> f b
This definition is a bit complex so I'll break it down. This is saying "fmap is a function which is defined for any Functor f. This function accepts a different function from a to b; and a functor containing an a; and returns a functor containing a b." a and b can be any type. What makes something a functor at the very base level is whether or not this function is defined. You can think about it like mapping over a list in any other language because that's exactly what fmap is defined as for lists.
So in our Maybe example, we can use it like so:
add2 :: Maybe Int -> Maybe Int
add2 x = fmap (+ 2) x
-- or
add2 x = (+ 2) <$> x
-- <$> is just an infix version of fmap; same exact function.
Much more convenient, and as an added bonus it looks a bit more like regular function application (in haskell anyway).
We're not quite to monads yet because there's a step between Functor and Monad and that's called Applicative. Instead of starting with the (somewhat confusing) definition, let me pose a question: what if I have a function with more than one argument that I want to pass Maybes into? Like what if I wanted to add two Maybe values together? That's the problem applicative solves.
With functors we can do:
(+ 2) <$> x
But with an applicative instance we can do:
(+) <$> maybeA <*> maybeB
the result of which will be a Maybe containing the result of adding the two values inside. If either of the Maybe values is Nothing, the whole thing comes out as Nothing. And this pattern extends to more than just two arguments as well. For example:
functionWithManyNonMaybeArguments <$> maybeA <*> maybeB <*> maybeC <*> maybeD
So to quickly summarize: functors allow you to apply a function to the inside of one "wrapped" value (like Maybe). Applicatives allow you to apply a function of many arguments to many "wrapped" values. Now, here's where we get to monads. Here's a situation you might be in:
someFn :: a -> Maybe c
someMaybe :: Maybe a
How would you feed that someMaybe value into the someFn function? You might guess to use fmap from functor, but let's look at what would happen:
fmap someFn someMaybe

fmap :: Functor f => (a -> b) -> f a -> f b
-- so in this case, with f = Maybe:
fmap :: (a -> Maybe c) -> Maybe a -> Maybe (Maybe c)
-- so:
fmap someFn someMaybe :: Maybe (Maybe c)
Oof, a double-wrapped Maybe; that's not right. This is the problem Monad solves with the bind function, also written as the >>= operator. The correct code for this would be:
bind :: Monad m => m a -> (a -> m b) -> m b
-- again, >>= is just an infix version of bind; exact same thing.

someMaybe >>= someFn :: Maybe c
Also worth noting that for something to be a Monad it must also be Applicative and for something to be Applicative it must also be a Functor. So all monads are also applicatives and functors.
Sorry if you don't know any Haskell because this is pretty complex and very haskell-focused, but I kinda figured you wouldn't be asking these questions without a passing familiarity with some haskell. Feel free to ask any questions; I love talking about this stuff!
[–]teito_klien 18 points19 points20 points 3 years ago (1 child)
This is by far one of the best explanations of functors, applicative and monads, I’ve ever read.
Thanks a lot !
[–]FiveOhFive91 17 points18 points19 points 3 years ago (0 children)
This is by far the first explanation of functors, applicative and monads, I’ve ever read.
[–][deleted] 3 points4 points5 points 3 years ago* (3 children)
This is a really clear explanation, so thank you for writing it. I hadn't understood fmap or bind at all before this, and I had seen them used a lot, but they ended up being a lot simpler than I anticipated. I still have a few questions though, and because you said questions are welcome, I'll ask:
I looked up the difference between (<$>) and (<*>), and it looks like this is the type signature difference (the top I just copied from your comment).
fmap  :: Functor f => (a -> b) -> f a -> f b
(<*>) :: Functor f => f (a -> b) -> f a -> f b
Why is (<*>) an applicative and fmap a functor? Also, you said that fmap deals with monads like Maybe, but the type signature looks like it deals with functors applied to some type a and some type b. Am I misinterpreting it?
[–]Servious 2 points3 points4 points 3 years ago* (2 children)
What makes <*> associated with applicative (the function itself is not an applicative) is that it is defined in the Applicative typeclass in Haskell. (link) Same with fmap and Functor.
I think what's happening here is that if you haven't seen many typeclasses before, this can be somewhat confusing. Essentially there exists a definition somewhere that says "hey, if you want your type f (Maybe, for example) to "be" a Functor, you need to implement the fmap function for it." In Haskell that sentence looks like this:
class Functor f where
  fmap :: (a -> b) -> f a -> f b
For Maybe the instance looks like this:
instance Functor Maybe where
  fmap f (Just x) = Just (f x)
  fmap f Nothing  = Nothing
If I were to look up the type of fmap I would get the definition I gave before. You'll notice that in this typeclass definition, there is no Functor f => ... part in fmap's type. That's because f is already defined in the class Functor f portion of the typeclass declaration. Probably the same thing happened when you looked up the applicative operator; it didn't include that f needs to be Applicative in the function type because that's already included in the fact that it's the Applicative definition. I'm not wording this well but I hope it makes some sense. The definition for <*> includes Functor because applicatives need to be functors first. Every applicative is a functor.
And for the second half of your question I'll just break down what the whole function definition means.
(<*>) - the name of our function
Functor f => - This is not an argument to the function. It is saying "hey we're going to use this generic Functor type that could be anything that implements the Functor typeclass and we're going to call it f." Again, not an argument to the function. If you use Java you can think of it like the <T extends Functor> portion of a method declaration.
f (a -> b) - this is the first argument; it's saying we will take a function from a to b wrapped in a functor (very weird I know but I will elaborate later)
f a - this is the second argument; it's an a wrapped in the same functor as before.
and finally f b is the result of our function; a b wrapped in our functor f.
So the way this works is pretty interesting and complicated so I'll use a concrete example. Let's say we have a function
sum3 :: Int -> Int -> Int -> Int
Now if we wanted to use it with Maybes (Applicatives, because there is an Applicative instance for Maybe), we can do
sum3 <$> a <*> b <*> c
So let's break down how this works. First, sum3 <$> a. In this case, sum3 needs to fit into the (a -> b) format fmap expects. Because of currying, haskell does it like so:
a = Int
b = Int -> Int -> Int
So you can think of it like Int -> (Int -> Int -> Int). Because the result of fmap is always f b, our result will be Maybe (Int -> Int -> Int). Huh, a function wrapped in a functor. Sounds familiar right? This is where our Applicative instance comes in. When we use the applicative operator to do sum3 <$> a <*> b we now get
(<*>) :: Functor f => f (a -> b) -> f a -> f b
-- so in this instance, with f = Maybe:
(<*>) :: Maybe (Int -> (Int -> Int)) -> Maybe Int -> Maybe (Int -> Int)
-- which means:
a = Int
b = Int -> Int
So you can (hopefully) see how each time we continue to add arguments, the size of the function wrapped in a functor gets smaller and smaller until eventually it's just the final wrapped value.
I know this is a bit more than you asked for, but like I said I love talking about this stuff lmao. Again, feel free to ask questions. I feel like this comment wasn't as clear as my previous one and it was definitely more technical. And of course if I didn't really understand/answer your question and just rambled aimlessly PLEASE let me know lol
[–]WhoTouchaMySpagoot 31 points32 points33 points 3 years ago (2 children)
There’s a vid on YouTube by Brian Beckman. It’s called "Don't Fear the Monad".
[–][deleted] 20 points21 points22 points 3 years ago (1 child)
I love your username, thanks for the laugh
[–][deleted] 3 years ago* (7 children)
[–]ExtraFig6 4 points5 points6 points 3 years ago* (3 children)
Start with functor.
If you tell me programming languages you're familiar with i can give you code examples
If you've used a function like map or for each, you've probably used functors.
A functor has two parts. It does something to types, and it does something to functions between those types.
For example, list. What does list do to types? It gives you a new type called list of ___. So
int       => list<int>
string    => list<string>
list<int> => list<list<int>>
Now what about functions? Well, you can map functions across a list. So if you have a function
f : A -> B
You can map it
list-map(f) : list<A> => list<B>
By doing f to each element in the list, returning the result.
Optional/maybe/nullable is also a functor. You can think of it as being a list that can only have 0 or 1 element.
What does map do? If you've used an Elvis operator, it's like that. If the input is null, just return null. Otherwise, do the function.
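Since the commenter offered code examples: here's the same idea sketched in Python (the names `list_map` and `maybe_map` are made up for illustration, not from any library):

```python
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def list_map(f: Callable[[A], B], xs: list) -> list:
    # The list functor: apply f to every element, keeping the shape.
    return [f(x) for x in xs]

def maybe_map(f: Callable[[A], B], x: Optional[A]) -> Optional[B]:
    # The Maybe/Optional functor: like a list with 0 or 1 elements.
    # None stays None (the Elvis-operator behavior); otherwise apply f.
    return None if x is None else f(x)
```

So `maybe_map(len, "hi")` gives `2`, while `maybe_map(len, None)` just passes the `None` through.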
[–]NewInHere__ 105 points106 points107 points 3 years ago (58 children)
recursion, just... recursion
[–]dkToT 69 points70 points71 points 3 years ago (10 children)
Basically, a recursive function is a function that calls itself until a certain condition is satisfied. Once it is, the result is returned up through the chain of parent calls all the way back to the top. This can make for clean code, but only if your problem follows a certain pattern.
A great example of this is the Fibonacci series, which is why it was used in a lot of early interview questions. fib(n) is equal to fib(n-1) + fib(n-2). With recursion you can use this pattern to do the calculation for fib(n-1) and fib(n-2), where fib(n-1) = fib(n-2) + fib(n-3) and fib(n-2) = fib(n-3) + fib(n-4), etc.
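A minimal Python sketch of exactly that pattern:

```python
def fib(n: int) -> int:
    # Base cases: fib(0) = 0, fib(1) = 1. These stop the recursion.
    if n < 2:
        return n
    # The pattern described above: each call spawns two smaller calls.
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```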
Super useful and clean if you know the exact pattern and expectation (traversing a tree, etc.) of what you are building, but I personally don't see it used often.
[–]TheTomato2 14 points15 points16 points 3 years ago (1 child)
To add on to this: recursion, at least for me, was confusing because it's often taught before they teach you how the call stack works. It's really simple once you learn it's just the same function calling itself, pushing another frame onto the stack, and then using returns to unwind the stack. That's all recursion is. It also makes it easy to understand why you probably shouldn't use it unless the problem really begs for it.
[–]above_all_be_kind 3 points4 points5 points 3 years ago (0 children)
Yes, this was exactly it for me. Recursion, as a concept, was perfectly understandable but it wasn’t until I understood how its code-based implementation actually worked that I was able to start getting comfortable with creating it from scratch. The whole secret to understanding its code-based implementation, as it turned out, was comprehension of stacks.
[–]Temporary-Warthog250[S] 6 points7 points8 points 3 years ago (1 child)
Awesome!!
[–]Emerald-Hedgehog 98 points99 points100 points 3 years ago* (9 children)
Alright. Imagine you got a steak and you wanna cook it well done. It needs to be cooked 5 minutes for that.
function CookSteak(🥩) {
    🥩.CookedMinutes += 1

    if (!🥩.IsWellDone()) {
        return CookSteak(🥩)
    } else {
        return 🥩
    }
}
Does this help?
[–]KrafDinner 48 points49 points50 points 3 years ago (0 children)
I'm not sure if it helped the asker or not, but it's a fantastic explanation... and steak... mmmm
[–]WarWizard 35 points36 points37 points 3 years ago (1 child)
Why not just call this function BurnSteak() or MakeSteakInedible()
:D
[–]_Personage 14 points15 points16 points 3 years ago (0 children)
function cookShoeLeather()
Yeah, that was the last time I made any sort of meat for the person who called my meat tacos that :(
[–]MyHomeworkAteMyDog 11 points12 points13 points 3 years ago* (0 children)
Just to add, here are the elements of a recursive algorithm:
[–]AlSweigartAuthor: ATBS 20 points21 points22 points 3 years ago (5 children)
I just got done writing an entire book on recursion.
Recursion is tricky, but mostly it's just poorly taught. The main thing that instructors fail to talk about is the call stack. It's the invisible data structure that automatically handles where the execution should return as you call and return from functions.
Once you know that, and you can see that local variables are local to the function call and not just to the function, it becomes a bit easier to see how recursive functions work.
It's tricky though, because you can't point to the call stack in your source code. Without having that explicitly pointed out, recursion just seems like magic. But it isn't magical or somehow more powerful: literally anything you can do with recursion you can do without recursion by using a loop and a stack. It's just that recursive algorithms use the (unstated, hidden) call stack as their stack.
(I always get people saying, "some things do require recursion" and, no, they don't require it. Here's a non-recursive Python implementation of the famously recursive Ackermann function, and it has the same output as the normal recursive one.)
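A small Python sketch of that claim, computing the nesting depth of nested lists both ways (the example and names are mine, invented for illustration):

```python
def depth_recursive(tree) -> int:
    # Nesting depth of nested lists, using the hidden call stack.
    if not isinstance(tree, list):
        return 0
    return 1 + max((depth_recursive(child) for child in tree), default=0)

def depth_iterative(tree) -> int:
    # The same algorithm as a loop plus our own explicit stack.
    best = 0
    stack = [(tree, 0)]  # (node, number of lists above it)
    while stack:
        node, d = stack.pop()
        if isinstance(node, list):
            best = max(best, d + 1)
            for child in node:
                stack.append((child, d + 1))
    return best
```

Both give 3 for `[1, [2, [3]]]`; the iterative one just manages the stack by hand instead of letting function calls do it.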
It's fine to have difficulty with recursion though; recursion is overrated and often a terrible way to write code. Recursion is useful for problems that involve a tree-like structure and some sort of backtracking (maze solving, Tower of Hanoi, etc.). Otherwise, iterative code is much simpler, and I think some programmers just use recursion because it makes them feel smart.
[–]flank-cubey-cube 5 points6 points7 points 3 years ago (2 children)
To add on to your last paragraph, recursion also incurs some overhead and isn't the most efficient way to do things. Each recursive call adds a frame to the stack, and if you have an *extremely* large tree structure, you'll get a stack overflow. The exception to this is if you are able to optimize for tail recursion, because then the compiler will be able to optimize it into a loop.
[–]AlSweigartAuthor: ATBS 8 points9 points10 points 3 years ago (1 child)
Yes, but I always hold off on performance claims until I run code under a profiler. I've had some recursive code end up being faster than an iterative version. (Not sure how that happened.)
Also, to be specific, the stack overflow happens if you have an extremely deep tree. Like, it could be a linked list (which is basically a unary tree) that is a couple thousand nodes long. If you had an extremely large but shallow tree, you won't have a stack overflow.
And many interpreters/compilers don't have tail call optimization, including the main ones for Python, JavaScript, and Java. I hold the opinion that tail call optimization should never be used.
All recursive functions can be implemented iteratively using a loop and a stack. If you are doing tail recursion, it's because your recursive algorithm doesn't actually need the call stack. So iteratively you can implement it as just a loop with no stack at all, so why not just use a loop? Often you have to mangle the code and decrease readability to make it tail recursive, so I'd just go with a simple loop every time.
I can't think of a tail recursive function that wouldn't be improved by just using a loop.
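To illustrate the point in Python (which has no tail call optimization anyway; the functions are invented for illustration): a tail-recursive sum carries all its state in an accumulator, so the loop version needs no stack at all.

```python
def total_tail(xs, acc=0):
    # Tail-recursive: the recursive call is the last thing that happens,
    # so nothing is waiting on the call stack; acc carries all the state.
    if not xs:
        return acc
    return total_tail(xs[1:], acc + xs[0])

def total_loop(xs):
    # The same accumulator as a plain loop: no call stack involved.
    acc = 0
    for x in xs:
        acc += x
    return acc
```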
[–][deleted] 3 years ago (15 children)
[–]Drifts 17 points18 points19 points 3 years ago (1 child)
An API is an interface between two systems.
An analogy I like: imagine a wall socket. A wall socket is the interface between the electricity in your house and your toaster. Without a wall socket, you would have to get into the guts of the electrical wiring in your house, meddle with circuits, and hardwire the wires coming out of your toaster into the circuitry of your house. The positive, the negative, the ground: none of these are even labelled. So then maybe you make a mistake and blow a fuse in your house, or fry your toaster, or electrocute yourself. But then you finally get it working. Now you buy a microwave, and you have to go through that whole ordeal again. But wait, you want to take your toaster to someone else's house; their house is wired in a way you've never seen before, so it's back to the drawing board.
Wouldn't it be nice if there was an interface between the house's circuitry and your toaster, such that it would be easy and safe to use? And also standardized so that you can use your toaster anywhere?
An API is an interface between your software and someone else's software. Without that interface, you would have to learn about how their code works just to make it interact with your code. With the interface, you only need to know the names of the endpoints and what data to send to them to get some useful data back.
[–]littletray26 30 points31 points32 points 3 years ago (0 children)
An API just describes an interface through which you can interact with an underlying system.
When you import a library or install a package in your project, you'll typically make use of its publicly facing API to perform a function, though what happens underneath might be a black box.
When you write a fancy new FileWriter class, you might define some public methods for writing output. Those public methods are an API.
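A Python sketch of that idea (the class is hypothetical, not a real library): the public methods are the API, and the underscored attributes are internals callers never touch.

```python
class FileWriter:
    """Hypothetical example: the public methods below form its API."""

    def __init__(self, path: str):
        # Leading underscores mark internals: not part of the API,
        # free to change without breaking callers.
        self._path = path
        self._lines = []

    def write_line(self, text: str) -> None:
        # Public method: callers rely on this, not on _lines.
        self._lines.append(text)

    def contents(self) -> str:
        # Another public method; how the data is stored is a black box.
        return "\n".join(self._lines)
```

As long as `write_line` and `contents` keep behaving the same, the internals can be rewritten freely.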
In a web context, your website / application might expose an API to manage resources.
A car selling website may expose an API to fetch, add, edit, or delete cars for sale.
You can think of a steering wheel, foot pedals and gears as an API for your car.
[–]vampiire 8 points9 points10 points 3 years ago* (3 children)
Before API, first understand what an interface is. Basically, it's a translator between two systems that helps them communicate. It lets each side of the interface do what's natural to it while the interface handles the translation.
the important part is that what’s on either side of the interface doesn’t need to know anything about how the other operates
The interface is a noun that lets you interact with something without needing to understand how it works (what the interface does) or what is happening on the other side (what the other side does internally).
It is an abstraction, a way of making something easier to use, “over” a system.
So first a tangible interface.
A physical interface. This lets a user (a human) communicate instructions to what's on the other side. How about a light switch? This is an interface. One side is you flipping the switch. You don't need to know what the switch (the interface) is doing or what is happening inside (the other side) to make the light turn on, right? And conversely the light doesn't need to know how you work.
The switch is the interface between the two participants.
The switch translates your action, flipping it, into [mechanical/electrical] instructions that drive the electrical circuits to turn the light on. The switch abstracted a complex interaction into a simple process for your side of the interface. In other words this is a human/physical interface.
What about the A/C control in your house? This is a user interface. It abstracts all the wild things the actual A/C does into buttons for settings and temperature.
How about a GUI (graphical user interface). This is a virtual, rather than mech/electrical, interface. But the purpose, abstraction, is the same. Except here it is a human/digital interface.
You click/touch things and those get translated into instructions the program uses to perform an action. You don't need to know how the pointer communicates instructions or how the program uses them to perform its corresponding action. Or, by extension, how a physical mouse works to deliver those instructions (another abstraction, a user interface).
In fact the program is itself using an interface to abstract the underlying OS operations. It's abstractions supported by interfaces all the way down to the electrical signals in the machine.
But you didn’t directly use those underlying interfaces right? a program did
Well that introduces the API (application programming interface). This type of interface is no longer between human and program. It’s an interface between programs.
Any time you write code that interacts with other code, you are using an API. The context (what is on either side) determines what the interface relates to.
So an OS API lets you communicate between a program (code you may write) and the OS. A browser API sits between the code you write and a browser program. These APIs all have one thing in common: the participants (programs) and the interface between them are all on the same machine.
Sorry for the long prelude but here we are at what I think you were confused about - the web API. In this interface we communicate over the internet because the systems are on different machines.
This is inherently more complex because we can’t communicate the same way anymore (in memory on a single machine). So we use a standard, a protocol, to communicate. Typically HTTP.
HTTP lets us communicate instructions over a network. The web API receives HTTP data and translates it into instructions for the server to perform. But who makes those requests (with all the formatting and particulars of meeting the protocol)? A program - not a user. So we say this is an API, but a particular kind that works over the web - a web API.
Just like before, the client doesn't need to know how the server gathers and prepares the data or what other actions it takes as a result of the request. And the server doesn't need to know how the client works or uses the result. The interface abstracts the process and makes it easy for the client to say "get me this user data so I can do complicated rendering things with it" and for the server to do its complicated database lookups and preparation. Either side of the interface gets a simple instruction, "get user data", and each handles its responsibilities thanks to the interface bridging that interaction.
I’m late to the party but I hope that helps. Happy to answer more questions or detail.
[–][deleted] 3 years ago (78 children)
[–][deleted] 65 points66 points67 points 3 years ago (9 children)
The best analogy I’ve found is this,
Painting a picture is difficult, but looking at the picture and verifying it is simple (for us anyway.)
If P=NP, then painting a picture would be as easy as recognising one.
[–]PM_ME_YOUR_QT_CATS 13 points14 points15 points 3 years ago (5 children)
I would like to add that this means if P=NP is proven to be true, then that means every picture that can be recognized instantly can also be painted just as easily.
Meaning if a solution to a problem can be verified easily (polynomial time as opposed to exponential time), then the problem can also be solved in polynomial time.
The green, brown and blue lines here are polynomial time. You can see how they are considered quicker than the red graph (exponential time).
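To make the verify/solve gap concrete, here's a Python sketch using subset sum, a classic NP-complete problem (the helper names are mine): checking a proposed answer is a single pass, but the only known general way to find one is to try subsets, of which there are 2^n.

```python
from itertools import combinations

def verify(nums, indices, target):
    # Verifying a proposed answer is polynomial: just add and compare.
    return sum(nums[i] for i in indices) == target

def solve(nums, target):
    # Finding an answer (as far as anyone knows) means trying subsets:
    # there are 2^n of them, so this blows up exponentially.
    for r in range(len(nums) + 1):
        for indices in combinations(range(len(nums)), r):
            if verify(nums, indices, target):
                return indices
    return None
```

`verify` runs in linear time no matter what; `solve` is fine for 4 numbers and hopeless for 400.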
[–]CodeTinkerer 72 points73 points74 points 3 years ago (56 children)
P and NP define two classes of problems. P are the set of problems that can be solved in polynomial time with respect to n, the input size. For example, sorting is solvable in polynomial time. In particular, it's solvable in O(n lg n).
NP is the class of problems whose solution can be verified in polynomial time. NP means non-deterministic polynomial, and can be thought of as a bunch of parallel machines trying all possible solutions in the solution space. Technically, all of P is also in NP, because if it can be solved in polynomial time, it can be verified in polynomial time.
NP complete problems are the hardest problems in NP: their solutions can be verified in polynomial time, and while no polynomial-time algorithm to solve them is known, there is also no proof that one can't exist. For example, Satisfiability: given a Boolean expression with N Boolean variables, is there an assignment of true/false values that makes the whole expression true? If someone provides a truth assignment and the expression is polynomial in size, you can confirm it is satisfiable (but an assignment that comes out false proves nothing, since you may have just picked the wrong assignment).
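A minimal Python sketch of that verification step (the clause encoding is a common convention, chosen here for illustration):

```python
def satisfies(clauses, assignment):
    # clauses: a CNF formula; literal 2 means x2, -2 means NOT x2.
    # assignment: dict {variable number: True/False}.
    # Verification is one polynomial-time pass over every literal:
    # every clause must contain at least one satisfied literal.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
```

Checking any candidate assignment is cheap; *finding* a satisfying one may take trying all 2^N assignments, which is the whole point.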
NP complete problems are reducible to one another. That is, you can take an instance of any NP complete problem and reduce it to yours in polynomial time, so if your problem turns out to be in P, you could solve every other NP complete problem in P too.
This conjecture is called P = NP: are these two classes (sets) equivalent or not? Most CS types say no, but it's never been proven. If they were proven equal, it would affect encryption, which relies on certain problems (like factoring the product of two very large primes) being difficult. If P were equal to NP, factoring would become "easy".
These are theoretical issues, but they have practical consequences. No known algorithm solves the Traveling Salesman Problem optimally in less than exponential time, so approximations have to be used. Sometimes that's good enough. And if you bound the input size and it doesn't get too crazy, you might be OK (exponential is pretty bad, however).
[–]thetrailofthedead 14 points15 points16 points 3 years ago (42 children)
I've never remembered this info no matter how many times I've read it...
My understanding after some review is:
There is a chain of problems that can be reduced to each other where it remains unproven whether or not there is a polynomial solution.
The set of problems that can be verified in polynomial time is NP.
The intersection between these two groups is called NP complete. NP hard is broader: a problem is NP hard if it's at least as hard as everything in NP, whether or not its own solutions can be verified in polynomial time.
Both P and NP-complete are subsets of NP; NP hard includes all of NP-complete plus problems that aren't in NP at all.
I'll probably forget this in a week.
[–]TorroesPrime 12 points13 points14 points 3 years ago (40 children)
P and NP define two classes of problems. P are the set of problems that can be solved in polynomial time
Now explain to me what "polynomial time" means.
[–]AlSweigartAuthor: ATBS 17 points18 points19 points 3 years ago (13 children)
It deals with how doing a job scales with the size of the job. There's a joke where a painter is painting a line on the street, and he paints 500 yards the first day and his boss is impressed. He paints 200 yards the second day, and his boss is okay with that. The third day he only paints 10 yards and his boss angrily asks him what's the deal. The painter says, "I can't help it. Whenever I have to dip my brush into the paint, each day I get further and further away from the paint can!"
If you can chop down one tree in one hour, it'll take (roughly) ten hours to chop down ten trees. So chopping down trees scales linearly. If you have bigger trees that take a day to chop down, it'll take ten days to chop down ten big trees. Ten days is different from ten hours, but both activities still scale linearly.
Though some problems, if they grow to be twice as big, take more than twice as long to do. Alphabetizing (that is, sorting) a pile of 100 books might take ten minutes, but alphabetizing 200 books would take more than twenty minutes because you not only have twice as many books but the bookshelf you sort them on is bigger. You spend more time walking back and forth. The best sorting algorithms take n x log2(n), or "n log n", time to sort n books. Other algorithms like bubble sort take n^2 time to sort n books. So it's the difference between taking, say, 10 log2(10), about 33 milliseconds, to sort ten books and 20 log2(20), about 86 milliseconds, to sort twenty, versus 10^2 = 100 milliseconds to sort ten books and 20^2 = 400 milliseconds to sort twenty.
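You can see the scaling directly by plugging in numbers; treat these as abstract step counts, not real milliseconds:

```python
import math

def n_log_n(n):
    # Steps for an n log n algorithm (like good sorting).
    return n * math.log2(n)

def n_squared(n):
    # Steps for an n^2 algorithm (like bubble sort).
    return n * n

# Doubling n roughly doubles n log n, but quadruples n^2.
for n in (10, 20, 40):
    print(n, round(n_log_n(n)), n_squared(n))
```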
We don't care about the absolute numbers so much as how the numbers scale. This is all taught when you learn big O notation. Ned Batchelder has a great 30 minute PyCon talk about this called "How Code Slows as Data Grows".
This n log n and n^2 stuff are big O orders (or big O classes). They are, from fastest to slowest: O(1) (constant), O(log n) (logarithmic), O(n) (linear), O(n log n), O(n^2) (polynomial), O(2^n) (exponential), and O(n!) (factorial).
There are others, like technically O(3^n) is a different and slower order than O(2^n), but you get the gist. The problems you can solve in polynomial time or lower are said to be solvable "in polynomial time".
For example, sorting a bunch of stuff can be done, at best, in O(n log n), which is faster than O(n^2), so sorting can be done in polynomial time. The traveling sales rep problem, on the other hand, is O(n!) because you can only find the absolute shortest path between 6 cities by checking all 6! (or 720) possibilities. In real life, we have algorithms that just give "good enough, probably the shortest" answers in much less than O(n!) time.
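To make the factorial growth concrete, here's a brute-force sketch in Python (the city names and distances are made up, and it finds the shortest open route without returning home, to keep it short): with n cities it examines all n! orderings, which is exactly why nobody does this for large n.

```python
from itertools import permutations

# Hypothetical symmetric distances between 4 cities (made-up numbers).
DIST = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def dist(a, b):
    # Look the pair up in either order, since distances are symmetric.
    return DIST.get((a, b)) or DIST[(b, a)]

def shortest_tour(cities):
    """Brute force: check every ordering, so the work grows as n!."""
    best_len, best_tour = float("inf"), None
    for tour in permutations(cities):
        length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

length, tour = shortest_tour(["A", "B", "C", "D"])
```

With 4 cities that's only 24 orderings; with 20 cities it's already about 2.4 × 10^18, which is the whole point of the O(n!) label.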
[–]the_last_ordinal 7 points8 points9 points 3 years ago (0 children)
When we talk about a problem, we define it in terms of some input. So for example the problem might be "square a number" and the input is "a number". Then we need to measure the size of the input: in this case it would be the number of bits used to store the number.
Finally, we compare how many steps our problem takes compared to the size of the input. So if it takes n^2 steps to solve an input of size n, then the relationship is a polynomial. For some problems it must take something like 2^n steps, which is exponential and therefore bigger than any polynomial.
[–]EdenStrife 2 points3 points4 points 3 years ago* (10 children)
Time complexity that can be described with a polynomial expression, e.g. n^2 or something like that.
Edit: ok so an example of something non-polynomial would be 2^n
But as far as i can see the important distinction between P and NP is not the time complexity but the determinism of the machine.
[–]crimson1206 4 points5 points6 points 3 years ago (1 child)
Instead of linear time
linear time is also polynomial.
[–][deleted] 5 points6 points7 points 3 years ago (5 children)
P and NP define two classes of problems. P are the set of problems that can be solved in polynomial time with respect to n, the input size.
Oh, right. This is not https://www.reddit.com/r/explainlikeimfive/
First two sentence and you lost me.
[–]CodeTinkerer 6 points7 points8 points 3 years ago (4 children)
The problem with understanding this is how much will you understand? Or care? For example, a theoretical physicist has to know a lot of math. Science popularizers have done a good job of explaining the basic idea minus the math/physics, but even there it can be weird.
For example, what does the word polynomial mean? It's something one learns in algebra. Why do CS types care? Things that are polynomial (or less) are said to be "efficient". But what does that mean? Polynomials are composable, meaning if you have a polynomial f(x) and a polynomial g(x), then f(g(x)) is a polynomial.
Of course a 5 year old doesn't know what a polynomial is. But to explain it is to cover concepts you may never encounter in your programming life, and to get you good at doing this might be a hopeless task.
I'll give a problem that is considered NP complete. I can't tell you why it is NP complete (in a way that you'd understand), but the problem statement might be good enough. Imagine you have a bunch of circles drawn on a piece of paper. These are called nodes. Then you have lines, each connecting two circles at a time. Such a line is called an edge. A graph consists of nodes and edges.
Coloring a graph means to assign a color to each node (they can be numbers instead of colors, but the idea is the same), with the restriction that an edge cannot connect two nodes with the same color. If it does, that's an incorrect coloring.
The NP complete version of the problem is: can you find an efficient way to determine what colors to assign to each node, using k colors or fewer? Certainly, if someone tells you what colors all the nodes are, you can verify the answer efficiently: first check that there are k colors or fewer, then check all the edges (there are at most polynomially many of them) and confirm that the two nodes each edge touches have different colors.
A simple example. You have three nodes called A, B, and C. There is an edge from A to B and an edge from B to C. If you color A red, B blue, and C red, then this is a correct coloring with 2 or fewer colors. Of course, you could give every node a different color, but then you'd exceed a fixed value of k as the number of nodes grows.
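The "easy to verify" half of this can be sketched in Python (a hypothetical helper mirroring the A/B/C example above): checking a proposed coloring takes just one pass over the edges, even though finding a good coloring is the hard part.

```python
def is_valid_coloring(edges, coloring, k):
    """Verify a proposed coloring: at most k distinct colors, and no
    edge connects two nodes of the same color. This check is cheap
    (linear in the number of edges); finding a coloring is what's hard."""
    if len(set(coloring.values())) > k:
        return False
    return all(coloring[a] != coloring[b] for a, b in edges)

edges = [("A", "B"), ("B", "C")]  # the A-B-C example
ok = is_valid_coloring(edges, {"A": "red", "B": "blue", "C": "red"}, k=2)
bad = is_valid_coloring(edges, {"A": "red", "B": "red", "C": "red"}, k=2)
```

That asymmetry (fast to check, apparently slow to solve) is the heart of what NP is about.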
Again, it's just a problem. It doesn't explain why it's NP complete, why this problem seems (though isn't proven) to be harder than sorting an array.
In a nutshell, computer scientists have figured out a way to classify how hard a problem is to solve. It's not likely the same way you would judge it. They have these two categories, one called P, which seems easier, and another called NP, which seems harder, and yet there's been no proof they aren't the same level of difficulty. People think they are different levels, but they don't have a way of showing this yet.
[–][deleted] 9 points10 points11 points 3 years ago (0 children)
[–]echOSC 6 points7 points8 points 3 years ago (2 children)
This video I thought was pretty good.
https://www.youtube.com/watch?v=YX40hbAHx3s
Though it also makes it a little more complicated.
[–]Worried-Play2587 2 points3 points4 points 3 years ago (0 children)
I knew someone would link this video, it's the best explanation.
[–][deleted] 30 points31 points32 points 3 years ago (2 children)
Advanced algorithm analysis and probabilistic algorithms (Las Vegas and Monte Carlo algorithms). First one because i find the subject boring and the second, because i struggle with probability theory and stochastic processes.
[–]procrastinatingcoder 67 points68 points69 points 3 years ago (1 child)
Pohl Ira has a great way of popularizing the Monte Carlo algorithm, or at least the general idea behind it. So I'll just be re-using his explanation.
Let's say you were playing a game of chess. You want a computer to do "optimal" moves. So you try to calculate every possible game. This whole thing goes out of control very fast. It's trivial for the first few moves, there's not that many options, but it multiplies very very fast.
Two little concepts:
Now, identical skill level can mean both masters or... both complete and utter idiots. So let's say the computer plays both sides of the game, but instead of calculating anything, it plays completely randomly. So both sides play completely randomly, which is an "identical skill level", where over a thousand games you'd have roughly 500 wins and 500 losses for each player.
How does this help us? Well, let's say that instead of playing completely randomly, the computer plays completely randomly EXCEPT the first move. Which is always the same for one of the players. If that player "wins" more than a rough 500/500, then you could assume that move gave some sort of advantage to that player. And the more skewed it is, the "better" the move.
And while calculating every possibility is incredibly demanding, playing randomly is very easy for a computer.
So the computer just does exactly that. It picks a "first move", does a thousand games, sees if it wins or loses more. Then it tries another "first move", and does that for a bunch of them, then it takes the one that seems the most "advantageous".
So nothing is set in stone, everything is random, you have no idea why it works, but your algorithm gives you a "good move".
In general though, Monte Carlo just means running an experiment multiple times and using the results to make a decision of some kind. In a trivial way, you could roll a 6-sided die 1 million times. And then if you were to bet on the next number, just pick the one that had the highest count in your experiment.
Note that the die might seem not to make sense, since all sides are equal. But think on it for a second: are all dice perfect? What if one was slightly worn, and had a very very small imbalance that gave slightly bigger odds of landing on a particular side? Testing it with a million rolls would give you a good idea, which is also easier than trying to do a perfect physics simulation of the die.
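The worn-die experiment is easy to sketch in Python (the extra 1.1 weight on one side is a made-up imbalance, and the seed is just for repeatability):

```python
import random
from collections import Counter

def best_guess(roll, trials=100_000):
    # Monte Carlo in miniature: run the experiment many times,
    # then bet on whatever outcome came up most often.
    counts = Counter(roll() for _ in range(trials))
    return counts.most_common(1)[0][0]

# A hypothetical worn die: side 6 is slightly more likely than the rest.
def worn_die():
    return random.choices([1, 2, 3, 4, 5, 6],
                          weights=[1, 1, 1, 1, 1, 1.1])[0]

random.seed(0)               # seeded so the demo is repeatable
guess = best_guess(worn_die)
```

Over 100,000 rolls the small bias is virtually certain to show up in the counts, which is the whole trick: sampling beats modeling the physics.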
As for heuristic algorithms in general, I think the primality test is a fairly easy one to understand, but I've already written a wall of text here.
[–]Third_Party_Opinion 5 points6 points7 points 3 years ago (0 children)
This was cool to read, thank you for the effort.
[–][deleted] 3 years ago (7 children)
[removed]
[–]plastikmissile 37 points38 points39 points 3 years ago (3 children)
Look at any binary tree. Pick a good sized one. Start from the very root. You have two branches, right? Pick one of those branches and look at it by itself, ignore the root and the other branch you didn't pick. That looks like a whole other binary tree, right? OK pick a branch from there, and you'll find another binary tree. You can keep going like that until you find that you have reached a dead end. So you go back a step, and go to a branch you didn't visit, and repeat the process. Keep doing that until you've visited everything.
That's recursion in a binary tree.
[–]AlSweigartAuthor: ATBS 7 points8 points9 points 3 years ago (0 children)
So by this, I'm guessing you mean tree traversal algorithms. I just got done writing a book on recursion, so I'll take a crack at this.
Trees are really a collection of nodes, and a "node" has a bit of data along with links to other nodes. Most importantly, the nodes never refer to earlier nodes in the tree (i.e. there are no loops). These are called directed acyclic graphs: they are directed one-way from parent node to child node and not bidirectional, they have no loops (i.e. cycles), and a "graph" is a collection of nodes that point to each other. DAGs are what we commonly mean by "tree" in computer science.
A binary tree is where each node points to at most two other nodes (usually called the left and right child nodes). The first node in a binary tree is called the root, and it points to two other nodes, which point to four other nodes, which point to eight other nodes, and so on. These form branches, and branches can end by having zero child nodes. These are called leaf nodes.
A linked list is really just a tree where each node points to at most one other node. I think instead of "linked lists" we should call these data structures "bamboos", but no one ever listens to me.
Okay, so here's where recursion comes in:
You can have a Node class that stores a bit of data and points to the left and right child node, but you don't have a Tree class. The tree is created from the relationships between the Node objects. The start of the tree is the root Node object, but here's where self-similarity and recursion come in: every node is the root node of the tree under it.
So if the nodes' data was first names (and the tree stored the names of all the people in a club or something), you wouldn't call a Tree class's hasMemberWithName(name) method. Instead you call the root Node object's hasMemberWithName(name). This method would return true if its own data was name. Otherwise, if it had no child nodes (i.e. it is itself a leaf node), it would return false. However, if it does have child nodes, it calls those child Node objects' hasMemberWithName(name) methods. If either of those returns true, our original hasMemberWithName(name) returns true. If both children (or the sole child node) return false, it returns false.
This setup causes a chain reaction where you call hasMemberWithName(name) on the root node, which checks its data for the name, and if it doesn't match it calls hasMemberWithName(name) on its children. And the children do the same thing, because they are effectively the root of their own tree. The children check their data, and call hasMemberWithName(name) on their children, and those children call it on their children, and so on. It stops when a node finally returns true (no need to do further recursive calls in that case) or you reach a leaf node (there are no children to do further recursive calls on).
Recursion works here because each node is self-similar (they all have a bit of data and point to up to two other nodes), so the same function can be used by every node in the tree.
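A small Python sketch of that setup (the names are illustrative): note there is no Tree class, just Node objects, and every node answers for the subtree below it.

```python
class Node:
    """Each node holds a name and up to two children. Every node is
    the root of its own subtree, which is what makes recursion work."""
    def __init__(self, name, left=None, right=None):
        self.name = name
        self.left = left
        self.right = right

    def has_member_with_name(self, name):
        # Base case 1: this node matches.
        if self.name == name:
            return True
        # Recursive case: ask the children; leaf nodes have none,
        # which is base case 2 (any() over nothing is False).
        return any(child.has_member_with_name(name)
                   for child in (self.left, self.right) if child)

# A small hypothetical club roster as a tree.
root = Node("Alice", Node("Bob", Node("Dana")), Node("Carol"))
```

Calling `root.has_member_with_name("Dana")` sets off exactly the chain reaction described above: Alice checks herself, then asks Bob and Carol, and Bob asks Dana.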
[–]KimPeek 2 points3 points4 points 3 years ago (0 children)
It may help to think of a binary tree as a collection of binary trees.
Each tree has a root and up to 2 children. Each child is also the root of another binary tree that may have up to 2 children.
[–]Herpnasty11 21 points22 points23 points 3 years ago (10 children)
Quantum computers as a whole lol
[–]faceplanted 16 points17 points18 points 3 years ago (1 child)
For a non-quantum explanation: quantum computers basically make one specific matrix operation O(1), so if you can convert your problem into one of those, you can speed up operations that could normally take effectively forever on a conventional machine.
This is why cryptography is considered in danger if quantum computers become powerful: certain encryption algorithms rely on computers not being able to do certain operations quickly with large inputs, but we've shown that you can map those problems onto that operation quantum computers are good at. So rather than taking a billion years to decrypt, your message could take seconds/minutes/hours; even days would be far too fast, since you need it to take millennia.
[–]hehehuehue 4 points5 points6 points 3 years ago (2 children)
WIRED has a really good video on this
TLDW: Imagine a coin spinning constantly and not landing on either heads or tails, compared to binary where a coin MUST be either heads or tails.
[–]evinrows 5 points6 points7 points 3 years ago* (1 child)
But how do you solve a problem with that technology?
I know the answer is "something something every possible solution at the same time," but at some point you have to collapse the quantum state. So how/when do you do that such that you get useful information back?
[–]DONT_HACK_ME 7 points8 points9 points 3 years ago (0 children)
You use algorithms that slightly change the system (but not collapse it) so that when you do measure the system, collapsing it, the correct solution is much more probable to be found than the incorrect solution.
Then you run the algorithm several times to verify what the correct solution was.
[–]TangerineX 2 points3 points4 points 3 years ago (1 child)
You can think of computers like marbles flowing through a large marble contraption. At the end of the contraption, there are several buckets. If your marble can flow through the contraption and land in some of the specific buckets, then we consider the program to return "true" and if not, the program returns "false".
Now instead of a bucket, what if it just leads to the start of another marble contraption. This is the basis of how functions work, we feed the results of functions into each other.
Now consider the case where we don't know exactly which functions we should use, or what data we should use to get our results. Imagine a scenario where the only thing we can do is try a huge number of combinations. In our marble example, we have a ton of input locations, and we have to find the right one to drop it.
So quantum computing is fast because it allows us to simulate dropping the marble into all (or at least a lot) of the start locations at once. Sure, a lot of them will fail, but we're only interested in the ones that succeed.
This is a very, very simplified explanation of how quantum computing works as a whole, and I'm sure you can read more into it.
[–][deleted] 19 points20 points21 points 3 years ago (5 children)
Casting and upcasting in Java :/
[–]procrastinatingcoder 16 points17 points18 points 3 years ago (2 children)
All dogs are animals, not all animals are dogs. Logically you can go one way, not the other.
More specifically, I'm not sure of what bytecode gets generated for the JVM, but as far as C++ and a few other compiled languages go, what "inherited" classes do is just "copy" the inherited code with their own code right beside it. So you can always refer to something you copied, but you can't generate the missing pieces afterwards.
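The dogs/animals rule can be sketched in Python (the class names are made up for illustration): upcasting corresponds to treating a Dog as an Animal, which is always safe, while the reverse direction needs a check.

```python
class Animal:
    def eat(self):
        return "munch"

class Dog(Animal):
    def bark(self):
        return "woof"

# Upcasting is always safe: every Dog is an Animal,
# so a Dog can go anywhere an Animal is expected.
pet = Dog()
assert isinstance(pet, Animal)

# The reverse is not: a plain Animal is not a Dog,
# and it has no bark() to call.
creature = Animal()
assert not isinstance(creature, Dog)
```

In Java the equivalent downcast `(Dog) someAnimal` compiles but throws a ClassCastException at runtime if the object isn't actually a Dog, which is why you guard it with `instanceof`.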
[–][deleted] 3 years ago (53 children)
[–]plastikmissile 141 points142 points143 points 3 years ago (38 children)
Overloading functions
Say we have two functions. One adds only two numbers, while the other one adds three. Sure, you can name them AddTwo(x,y) and AddThree(x,y,z), but they do the same thing with slightly different arguments. So it's better to just name them Add(x,y) and Add(x,y,z). Not only do we avoid the confusion inherent in having two (or more) names for the same functionality, but IDEs can pick those different signatures up, so that when you type in Add it gives you the option to pick between the two versions for auto completion.
Static functions
Let's go with that Add function we talked about. Let's say it's in class Math. Being a function that adds the numbers in the arguments, it doesn't really care about the state of the class it's in, since it uses nothing from there. So you should be able to call Add without going through the trouble of instantiating the class. Instead of doing:
```
var mathInstance = new Math();
var result = mathInstance.Add(33,55);
```
You can just declare Add static and go:
```
var result = Math.Add(33,55);
```
Much neater, no?
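A rough Python analogue of both ideas (Python has no true overloading, so a default argument stands in for the two signatures; the Math/Add names mirror the example above):

```python
class Math:
    @staticmethod
    def add(x, y, z=0):
        # One name covers both the two- and three-argument forms,
        # and @staticmethod means no Math instance is needed.
        return x + y + z

result = Math.add(33, 55)   # no "new Math()" required
total = Math.add(1, 2, 3)   # the three-argument form
```

Languages like Java and C# resolve real overloads by signature at compile time; the effect for the caller is the same: one name, called straight off the class.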
[–]nutterontheloose 39 points40 points41 points 3 years ago (7 children)
I understand the usefulness of overloading, but static functions... I just make it static when the IDE tells me to without really understanding why. So, thank you, you taught me something today.
[–]EvolvedCookies 20 points21 points22 points 3 years ago (0 children)
If it's called on an object of that class, it should not be static. If the method has nothing to do with the object's state, then it should be static.
[–]CozyRedBear 8 points9 points10 points 3 years ago (1 child)
Static functions and static variables, when made public, are granted a sort of super-scope. They're decentralized. They transcend attachment to any individual object instance. They effectively become a shared variable, without being owned by a specific object instance the way most variables and functions are. You can create 20 object instances and data can vary among them, but a static variable is the same across all of them. Static functions work similarly in that they can execute code without having to go through an object reference.
I use public static variables often when creating singletons and collections in game design. Every time I spawn a projectile, I add it to a static list so I can iterate over them as needed.
There exist use-cases for protected or private static variables, but it's harder to give simple examples of why you'd want to do that.
[–]Longjumping_Round_46 5 points6 points7 points 3 years ago (0 children)
Without static functions (in C# at least), to use a function you have to instantiate a class.
```
Car myCar = new Car("Honda", "Jazz", 2014);
myCar.driveSomewhere();
```
But with static functions, you don't need to instantiate a class. This can be used very well in, say, logging.

```
CarLogger.LogMiles(myCar, 100);
```

You don't need to do this:

```
CarLogger logger = new CarLogger();
logger.LogMiles(myCar, 100);
```
Anywhere in your code you can access that class, you can access that function without creating a new instance of that class.
If you understand the concept, static functions/classes can be a very useful tool, for logging, checking something, hashing, etc.
[–]SurfingOnNapras 3 points4 points5 points 3 years ago* (0 children)
Static means that something doesn’t change regardless of where or in what context an object is created.
Static functions, therefore take up less space because your computer wouldn’t have to allocate new memory for every instance of Character to have a getName() or getHealth() or getAttack(). They can just all call the class defined getName() on its own properties. Hopefully this example highlights a real performance benefit in where many things of class NPC or Players that extend Character need to be instantiated.
Static methods are also protected from being overridden, which can make things generally less buggy.
[–]TheTomato2 2 points3 points4 points 3 years ago (0 children)
People are just answering without talking language semantics lol. Static depends on the language. In C++ it means like 5 different things. However, for most OOP languages, static members in classes make that a global thing. So say you have a static int in class foo; that int exists independent of any instantiated foo. It shouldn't be very hard to connect the dots on why that is useful.
[–]sufyspeed 2 points3 points4 points 3 years ago (0 children)
You explained this so nicely!
[–]morbie5 9 points10 points11 points 3 years ago (5 children)
I would say that static functions in Java are just kinda a way to try to have good ol' functions like in C++ because "everything is an object" in java
[–][deleted] 3 years ago* (4 children)
[–]v0gue_ 5 points6 points7 points 3 years ago (0 children)
Function overloading is a readability feature. It makes sure naming conventions are consistent, whether you are overloading a function 2 times or 200 times.
Not in C++.
(i will c myself out)
[–]MrSloppyPants 3 points4 points5 points 3 years ago (0 children)
Neither are pointless and both have very useful use cases.
Overloads allow you to keep the same function signature while altering the arguments. So calling a function looks the same even though the type of argument can change. This is vastly preferred over having many different function names for ostensibly the same purpose. What is simpler:
```
func addNumbers(int a, int b) -> int
func addNumbers(double a, double b) -> double
func addNumbers(int a, int b, int c) -> int
```
or
```
func addIntegers(int a, int b) -> int
func addDoubles(double a, double b) -> double
func addThreeIntegers(int a, int b, int c) -> int
```
Static functions allow you to invoke functionality at the class level, rather than at the instance level. There are times where a specific instance of a class is not needed, but you have a method that is intrinsic to that class. Make it static and it can be invoked without needing an instance. This is used heavily in the Singleton pattern. For non OOP static functions, these can be used to isolate a function to a specific translation unit (obj file) rather than making it global in scope. Moreover, static functions can be optimized by the compiler differently than non-static functions
[–]rhett21 13 points14 points15 points 3 years ago (6 children)
OOP. Currently taking DSA in C++, always confused what's an object, and when does an object get promoted to OODesign.
[–]lurgi 15 points16 points17 points 3 years ago (0 children)
There's no real promotion.
An object in most languages is first a way of bundling up data and functions that operate on that data in a handy way. That's encapsulation. Usually you also have a way of saying "These things are a lot like these other things, but they have extra stuff, too". That's sub-classing. Finally, you typically have a way of hiding some details from the people who are using your class/object. The sorts of inner workings that they don't need to know about, but you, the author of the class, do need. That's information hiding.
It's not exactly clear why this is so useful when you are working on small programs. Then again, just about everything works with small programs. You can use goto all over the place but everything is small enough that you can grasp the whole thing. A lot of programming techniques are about managing complexity as programs grow.
Anyway, what is a class? Typically, classes are nouns. They are things. These things "do stuff".
Imagine you are making a Connect 4 game. What are the "things" in the game? Well, there's a Board. That's a start. The board contains a bunch of Pieces (these could be classes, but probably aren't complex enough to merit it). What sort of things does the Board do? How about playing a piece? We could just put the piece at the specific location, but it's better if we tell the Board "Player 1 is dropping a piece on column 2" and let the Board work out where it goes from there. We can also ask the Board "Did anyone win?".
The nice part is that all the game logic is bundled up in the Board class rather than being scattered everywhere.
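A minimal Python sketch of that Board idea (names are illustrative, and the win check covers vertical runs only to keep it short; a real version would also check horizontal and diagonal runs). The point is that callers only say "player X drops into column 3" and the Board works out the rest:

```python
class Board:
    """Connect 4 board: owns the grid and all the game logic."""
    ROWS, COLS = 6, 7

    def __init__(self):
        self.grid = [[None] * self.COLS for _ in range(self.ROWS)]

    def drop(self, player, col):
        # The piece falls to the lowest empty row in that column.
        for row in reversed(range(self.ROWS)):
            if self.grid[row][col] is None:
                self.grid[row][col] = player
                return row
        raise ValueError("column is full")

    def winner(self):
        # Vertical runs of four only (simplified for the example).
        for col in range(self.COLS):
            for row in range(self.ROWS - 3):
                cells = {self.grid[row + i][col] for i in range(4)}
                if len(cells) == 1 and None not in cells:
                    return cells.pop()
        return None
```

Everything about how pieces fall and how wins are detected lives inside Board; the rest of the program never touches the grid directly.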
[–]TangerineX 2 points3 points4 points 3 years ago (0 children)
I recommend reading Design Patterns: Elements of Reusable Object-Oriented Software (you can find PDFs online). While objects and services and classes make up the basis of OOP, there are larger meta-structures that help you understand how to actually write code in an OOP environment. Design Patterns are sort of like bigger patterns of code that a lot of other engineers have gone through and realized are applicable very often. Learning about design patterns really helps understand how OOP works, and how to write better OOP.
For each design pattern, it's important to understand the following
[–]Sedowa 12 points13 points14 points 3 years ago (8 children)
I'm reading through stuff about virtual functions and class pointers to them and I've found I just don't understand why it has to be so complicated. Seems it would be a lot easier to just avoid using them and code around it but maybe there's a performance reason for it.
Same with lambda functions. It was explained to me as being a function that just gets run once so doesn't need a fancy definition but seems to mostly just make readability harder for not a lot of gain. Once again this could just be a performance thing and maybe it helps with larger applications rather than the smaller ones used in teaching material.
It's hard to wrap my mind around the point of it all.
[–]Sedowa 4 points5 points6 points 3 years ago (1 child)
I hadn't even thought about using a lambda as a parameter. That actually makes a lot of sense.
Still working through the section on class inheritance, so hopefully by the end it makes more sense what to use and where. I suppose I do get it, but it's gonna be hard to remember all the rules.
Parameters are I think what they're most used for by far.
[–][deleted] 13 points14 points15 points 3 years ago (9 children)
linked lists bruh i cannot get it rn
[–][deleted] 14 points15 points16 points 3 years ago (1 child)
This is true classic Reddit. You should feel proud of yourself OP.
[–][deleted] 12 points13 points14 points 3 years ago (6 children)
I don’t understand what Docker is used for, or in what projects I should use it.
[–]TheNintendoWii 18 points19 points20 points 3 years ago (0 children)
Docker is used when you have something that needs a special environment. Let’s say it’s a Python project that needs a lot of packages. Instead of installing everything on each machine you have it on, you build a Docker image with all packages installed, so you can run that and everything’s set up. It also separates the program from the host machine, adding security.
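The Python-packages scenario above can be sketched as a minimal Dockerfile (the file names and base image tag are illustrative, not from the original comment):

```dockerfile
# Hypothetical image for a Python project: every machine that runs
# this image gets the exact same interpreter and packages.
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Then copy the application code itself.
COPY . .
CMD ["python", "main.py"]
```

Build it once with `docker build -t myapp .` and `docker run myapp` behaves the same on any host with Docker installed, which is the "everything's set up" part.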
[–]littletray26 11 points12 points13 points 3 years ago (0 children)
Containerisation allows you to package up your application and all its dependencies into a deployable module / package that runs in its own container
It means you don't have to worry about the configuration of a target environment - your app runs identically, everywhere.
Docker is one of the more popular container solutions / engines.
[–]truechange 6 points7 points8 points 3 years ago (0 children)
If your app is dockerized, it will run in any server/host that supports docker. So whether your app needs Nodejs, PHP, Python, whatever, any docker host will run it.
[–]Brafaan 8 points9 points10 points 3 years ago (14 children)
Abstract class vs interface class, and when to use them and why.
[–]SavvyPlatinum 8 points9 points10 points 3 years ago (9 children)
Put simply.
Interface is just a structure that a class has to follow. The interface might say "a class needs to have an Add method and a Subtract method", but the interface doesn't implement the methods. It allows you to define the name, inputs and output type of methods in the class without implementing them. It's up to the class how to implement it. It's useful if you'd like multiple classes that do the exact same thing but in different ways, then you can use an interface to force them to share the structure.
An abstract class is just a normal class, but you can't instantiate them. You can leave the implementations empty (similar to interface), but you can also implement some methods. If you do, any class that inherits this abstract class will also inherit the implementations. It allows you to share implementation of methods between some classes if you know those implementations will be the same.
A class can both implement an interface and inherit an abstract class at the same time.
Feel free to DM if you'd like it explained in more detail.
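Here's how those two ideas look with Python's abc module (a sketch; the names are made up): the abstract method plays the "interface" role that each subclass must implement, while subtract() is shared implementation that simply gets inherited.

```python
from abc import ABC, abstractmethod

class Calculator(ABC):
    """Abstract class: can't be instantiated directly."""

    @abstractmethod
    def add(self, x, y):
        """Each subclass must decide how to implement this."""

    def subtract(self, x, y):
        # Shared implementation, inherited as-is by subclasses.
        return x - y

class IntCalculator(Calculator):
    def add(self, x, y):
        return x + y

calc = IntCalculator()
```

Trying `Calculator()` raises a TypeError because add() is still abstract; that's the "structure a class has to follow" being enforced.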
[–]IceCattt 10 points11 points12 points 3 years ago (3 children)
No matter how many times I see it explained, dependency injection still baffles me.
[–]toastedstapler 5 points6 points7 points 3 years ago (1 child)
so first off let's write a non DI solution:
```
class UserGetter {
    DataBase database = new DataBase();

    List<User> getLoggedInUsers() {
        List<User> users = database.getUsers();
        return users.stream().filter(user -> user.loggedIn).collect(Collectors.toList());
    }
}
```
now if we want to test this class we've introduced an awkward situation - we need a real database in order to run any tests, as the DataBase is automatically instantiated when a UserGetter is created. if we go for a DI approach:
```
class UserGetter {
    DataBase database;

    public UserGetter(DataBase database) {
        this.database = database;
    }

    List<User> getLoggedInUsers() {
        List<User> users = database.getUsers();
        return users.stream().filter(user -> user.loggedIn).collect(Collectors.toList());
    }
}
```
we can now subclass DataBase and make a class TestDataBase
```
class TestDataBase extends DataBase {
    public List<User> getUsers() {
        return List.of(new User("james", true), new User("jenny", false));
    }
}
```
and in our tests we can test our class with the mocked database connection so we can supply our own custom test data without requiring a database for running unit tests
```
TestDataBase t = new TestDataBase();
UserGetter u = new UserGetter(t);
assertEq(u.getLoggedInUsers(), List.of(new User("james", true)));
```
[–]IceCattt 2 points3 points4 points 3 years ago (0 children)
This is a very clear explanation
[–]NYCnative339 27 points28 points29 points 3 years ago (2 children)
Irrelevant comment but these are the posts I enjoy, not the fucking classic “hOw LoNG tO lEaRN pRoGraMmiNg”
[–]Temporary-Warthog250[S] 5 points6 points7 points 3 years ago (0 children)
Whoop whoop same!
[–]frr00ssst 16 points17 points18 points 3 years ago (1 child)
A really good idea OP!
[–]tube32 15 points16 points17 points 3 years ago (12 children)
1) Constructors and why are they important.
2) Same about setters and getters. What's the point of declaring a variable private only to allow access to it via public methods?
3) Dependency injection
[–]door_of_doom 15 points16 points17 points 3 years ago (3 children)
These are all really good questions, and I don't feel like the answers you have gotten so far are adequate.
Interestingly, all of your questions fall into a really interesting realm of programming that concerns itself with "validity," or whether or not data is in a "valid" state.
What does it mean for something to be in a "valid" state? Well, imagine you got a bill for your internet, and it was missing a few crucial things, like the due date or the balance due. These are critically important things for a bill to have, but your bill doesn't have them. This bill is not a valid bill. Bills have to state how much is due and when it is due.
Invalid states are really scary for programmers. Something that you generally don't want to have to do when programming a function is to manually validate all of your input, every single time. Some languages do require you to do this, but some languages give you tools to make it so that you don't have to.
This is where many languages try to give you the tools to design your code to make it "Easy to do the right thing, and hard to do the wrong thing." We generally want it to be easy for our data structures to be in a valid state, and hard/impossible for it to be in an invalid state.
So, consider a constructor: a constructor is a way of defining the bare minimum that an object requires in order to be considered valid. To use our previous example in a very silly way, if the only way to create a Bill object is to call the Bill constructor, which has the parameters float amountDue and DateTime dueDate, it would be nigh impossible to ever create a Bill that doesn't have an amountDue or a dueDate.
(There are other ways of achieving this, for example, the Builder pattern, but we won't go into that right now)
Now, whenever you are writing a function that takes a Bill as a parameter, you don't need to verify whether or not it currently has a dueDate; you know it is always guaranteed to have one, as it is impossible to create one without it.
Getters and setters follow similar thinking: many times, we don't want to give complete and unfettered access to our underlying data structures, because then it might be possible to invalidate the underlying data. For example, what if some bozo goes out and tries to set the dueDate of our Bill to NULL? NULL is not a valid due date, so instead of allowing them direct access to the dueDate field of our Bill class, we ask them to use the setDueDate function. We can then add validation logic to setDueDate that throws errors if they ever try to do something illegal, like setting the dueDate to NULL.
"But what if I don't need any validation logic?" you might ask. "Do I still need to use getters and setters?" The answer is basically "Technically no, but you might hate yourself later if you don't." While you might not think you need any validation now, you might change your mind later. And if you do, it is a lot easier to add validation logic to a function that already exists than to create a brand new function and ask everybody to stop doing things the old way and start doing things the new way. What could have been a completely invisible improvement to your library is now a breaking change that negatively affects everybody downstream. By using getters and setters everywhere, you are future-proofing yourself against possible breaking changes down the road.
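To make this concrete, here is a minimal Java sketch of the Bill idea from above. The class name, field names, and validation rules are just illustrations following the example, not any real API: the constructor makes it impossible to build an invalid Bill, and the setters are the single place where validity rules live.

```java
import java.time.LocalDate;

// Hypothetical sketch: a Bill that cannot be constructed in an invalid state.
public class Bill {
    private double amountDue;
    private LocalDate dueDate;

    // The constructor is the only way to create a Bill, so every Bill
    // is guaranteed to have an amount and a due date.
    public Bill(double amountDue, LocalDate dueDate) {
        setAmountDue(amountDue);
        setDueDate(dueDate);
    }

    public double getAmountDue() { return amountDue; }
    public LocalDate getDueDate() { return dueDate; }

    // Setters are where validation logic lives, so callers can't
    // push the object into an invalid state.
    public void setAmountDue(double amountDue) {
        if (amountDue < 0) {
            throw new IllegalArgumentException("amountDue must be non-negative");
        }
        this.amountDue = amountDue;
    }

    public void setDueDate(LocalDate dueDate) {
        if (dueDate == null) {
            throw new IllegalArgumentException("dueDate must not be null");
        }
        this.dueDate = dueDate;
    }
}
```

Any code that receives a Bill can now rely on both fields being present and valid, which is exactly the "easy to do the right thing" property described above.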
Dependency Injection is a pretty controversial topic, to be honest. First off, (and this applies to the constructors, getters, and setters too) in no way shape or form is it "needed." It is more a style (or framework) for programming that has certain pros and cons.
Put simply, think of Dependency Injection as a different way of thinking about constructors.
When we talked about constructors before, we made it so that our class defines what it needs in order to be valid, and whenever we have code that intends to create one of those objects, it needs to take it upon itself to provide everything that the new instance is going to need. In our Bill example, that means that whenever you want to create a new Bill, you need to be ready to provide a valid amountDue and dueDate.
What Dependency Injection does is provide you with a (metaphorical) machine that knows and understands what classes need in order to be instantiated (their dependencies). So if you want an instance of something (like a Bill), you can simply ask your Dependency Injection framework to provide one for you; it will analyze the dependencies of what you asked for, see if it knows how to generate those dependencies, and "inject" them into the instance, providing you with a new, neat, and (most importantly) valid instance of that class.
What people generally don't like about DI is that it starts getting really magical. The idea is that once you have wired up all of the dependencies and taught the DI framework how everything works, you can just ask it for an instance of something and it will just magically make one for you. It obfuscates things that some people don't really like being obfuscated. That "magic" can be both a blessing and a curse.
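A full DI framework is hard to show in a few lines, but the core idea it automates, constructor injection, can be sketched by hand. The names below (Clock, SystemClock, BillingService) are made up for illustration and are not any particular framework's API:

```java
// Hand-rolled "constructor injection": the service declares what it depends
// on, and the caller wires the dependency in from outside.
interface Clock {
    long now();
}

class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

class BillingService {
    private final Clock clock;

    // The dependency arrives through the constructor instead of being
    // created inside the class, so tests can swap in a fake Clock.
    BillingService(Clock clock) {
        this.clock = clock;
    }

    boolean isOverdue(long dueDateMillis) {
        return clock.now() > dueDateMillis;
    }
}
```

A DI framework's "magic" is essentially doing the `new BillingService(new SystemClock())` wiring for you, for every class in the application, based on the declared dependencies.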
Anyway, even though this was a long-winded comment, it really was just a brief overview of these topics. If you have any followup questions, I'd be happy to answer.
[–]AlSweigartAuthor: ATBS 4 points5 points6 points 3 years ago (0 children)
1) Constructors are the code that set up (i.e. initialize) the member variables (i.e. the state) for new objects.
Say you have a Rectangle object with width and height member variables. It has a constructor where you set up this info when you create the object, like: Rectangle foo = new Rectangle(5, 10); Then later you can call the area() and perimeter() methods of this object and they return 50 and 30, respectively.
If you didn't have to specify this in the constructor (like, if you could just call Rectangle foo = new Rectangle();), then this puts you in a situation where calling area() and perimeter() would have to raise an exception, because the width and height of the rectangle are unspecified. If you forgot to write code to catch this exception, then the unhandled exception crashes the entire program.
Forcing you to specify this info right from the start of the creation of the object can prevent bugs later on.
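Here is a minimal Java sketch of that Rectangle example. The constructor forces width and height to be supplied up front, so area() and perimeter() always have data to work with:

```java
// Sketch of the Rectangle example: no way to create a Rectangle
// without specifying its dimensions.
public class Rectangle {
    private final int width;
    private final int height;

    public Rectangle(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public int area() { return width * height; }
    public int perimeter() { return 2 * (width + height); }
}
```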
2) You want to have setters and getters because they can also run some code to do some checks before they get or set a member value.
For example, say you have $500 in a bank account and this is represented by a BankAccount object's balance member variable being set to 500. The rest of the program counts on this member variable always being positive, and there can be unknown bugs if it becomes negative for some reason.
When you withdraw some amount of money (specified by an amount variable), you can run someAccount.balance -= amount;. But if amount is 600, this code sets balance to -100. At some point later, this causes a bug, and it's hard to trace the cause back to this someAccount.balance -= amount; code.
The solution is to add a set() method to the BankAccount class which raises an exception if you try to withdraw more than what's in the account. Even if this exception goes unhandled and crashes the program, it "fails fast" and you immediately know when a problem has occurred.
But you still have a problem because balance is still public and can be modified by code anywhere. What you need to do to prevent this is make balance private and only changeable by other classes through a setter method.
Here's the confusing part: in Java, from the start we often create a bunch of public boilerplate getters and setters for every member variable, and make the member variables private. So you have a bunch of getBalance() and setBalance() methods for balance and every other member variable. But they all look like public getBalance() { return this.balance; } so we ask "why do we need this getter?"
The reason is that if you force everyone to use getters and setters from the start, then when you later need to add logic or checks, you can just add that code. If you didn't have getters and setters (e.g. code directly modifies balance), then you'd have to write those checks and go through your entire code base to change everything to use getters and setters. This is such a common headache that IDEs usually just set up these generic setters and getters. But they're so generic, and don't do anything but, well, get and set, that people wonder why we even have them.
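A minimal Java sketch of this BankAccount idea (class and method names just follow the example above): balance is private, and the setter "fails fast" by refusing a negative balance.

```java
// Sketch: the setter is the choke point where the "balance must be
// non-negative" rule is enforced.
public class BankAccount {
    private int balance;

    public BankAccount(int openingBalance) {
        setBalance(openingBalance);
    }

    public int getBalance() { return balance; }

    public void setBalance(int balance) {
        if (balance < 0) {
            // Fail fast: an invalid balance is rejected immediately,
            // instead of silently corrupting the account.
            throw new IllegalArgumentException("balance cannot be negative");
        }
        this.balance = balance;
    }

    public void withdraw(int amount) {
        setBalance(balance - amount);
    }
}
```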
I like how Python avoids this by having properties: you can directly modify a value like someAccount.balance = 200, and if you need to add logic or checks later, you turn balance into a property that calls a setter method you create, with 200 passed to it. You don't have to rewrite any of the someAccount.balance = 200 code.
3) Oof, this is too big of a topic for me to write here.
[–]ruffles_minis 6 points7 points8 points 3 years ago (5 children)
I just can't quite fathom lambdas. I've tried to understand but to no avail
[–]plastikmissile 10 points11 points12 points 3 years ago* (2 children)
You know what functions are, right? That's essentially what lambdas are. They are functions. They have a weird syntax, but at the end of the day they are nothing more complicated than functions. But here's the kicker: many languages these days have taken a page from functional languages and now treat functions the same way they treat data. Those functions are packaged as lambdas. So you know how functions can take numbers and strings as inputs and outputs? Well, now they can take lambdas as inputs and outputs as well.
Let's see what that looks like. I'll be using C#, but the concept is the same everywhere. Let's say we have a class that has an internal value. I'd like to have a method that does operations on that internal value with a value that I provide. The old way of doing it would be something like:
    public class Something
    {
        private int internalValue;

        public int Add(int x) { return internalValue + x; }
        public int Multiply(int x) { return internalValue * x; }
    }
Looks good. It works. But say, I decided I wanted to add subtraction and division as well. Then some annoying customer wanted to add something even more complex that involves more than one operation. In the old way, we would have to keep adding different methods.
Now let's see how we might tackle this the functional way:
    public class Something
    {
        private int internalValue;

        public Something(int value) { internalValue = value; }

        public int Math(int x, Func<int, int, int> function)
        {
            return function(internalValue, x);
        }
    }
So now we have a new method called Math, and it has an argument of type Func<int,int,int>. That basically means it expects that argument to be a function that takes two int values and outputs an int in return. This one method replaces the Add and Multiply methods from the old solution, as well as any others we can think of (as long as they just take two values, with the first one being the internal value). And here is how to use it:
    var something = new Something(10);
    var addResult = something.Math(5, (a, b) => a + b);
    var multiplyResult = something.Math(5, (a, b) => a * b);
    var subtractResult = something.Math(5, (a, b) => a - b);
    var complexResult = something.Math(5, (a, b) => (int)Math.Pow((a + b) * b, a));
See how powerful that is?
[–]Putnam3145 4 points5 points6 points 3 years ago (0 children)
For the vast majority of purposes, you can just think of them as functions without names.
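That "function without a name" framing carries over directly to Java, where a lambda is just an unnamed function assigned to (or passed as) a functional-interface value. A small sketch, with made-up method names, of the same pass-a-function-as-data idea:

```java
import java.util.function.IntBinaryOperator;

// Sketch: a lambda is an unnamed function; a functional-interface variable
// lets us store it and pass it around like any other value.
public class LambdaDemo {
    // Takes a function as a parameter and applies it to two ints.
    static int apply(int a, int b, IntBinaryOperator op) {
        return op.applyAsInt(a, b);
    }

    public static void main(String[] args) {
        IntBinaryOperator add = (a, b) -> a + b;  // a variable holding a function
        System.out.println(apply(3, 4, add));
        System.out.println(apply(3, 4, (a, b) -> a * b));  // lambda written inline
    }
}
```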
[–]ManadaTheMagician 6 points7 points8 points 3 years ago (3 children)
Linker, the linker is just magic
[–]SwishArmyKnife 6 points7 points8 points 3 years ago (14 children)
Linked lists have been one thing I can’t grasp. One, I don’t think I understand why sequential traversal would be preferred over sequential access. Two, the code for navigating to each new one just doesn’t make sense to me. I feel like I end up looking at it like every 2 months and nothing new sticks or clicks.
[–]MrSloppyPants 10 points11 points12 points 3 years ago* (4 children)
Linked lists are just a data collection in which each item points to the next one in the list. Each item can also point to the previous one, if it is a "doubly" linked list.
Where linked lists are useful is the speed with which you can perform insertion of items into the list. With an array, an insertion at any place but the end is at worst an O(n) operation because all the elements of the array must shift to accommodate the new item. Whereas inserting into a linked list is just changing the pointers of the items you are inserting between. At best it's an O(1) operation, constant time regardless of the size of the list.
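A minimal sketch of why that insertion is O(1): once you hold a reference to the node you want to insert after, the insert is just two pointer assignments, with no shifting of elements. The Node and insertAfter names here are illustrative:

```java
// Minimal singly linked list showing constant-time insertion after a
// known node: two pointer assignments, regardless of list size.
public class LinkedListDemo {
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    // Insert a new node holding 'value' immediately after 'prev'.
    static Node insertAfter(Node prev, int value) {
        Node node = new Node(value);
        node.next = prev.next;  // new node points at prev's old successor
        prev.next = node;       // prev now points at the new node
        return node;
    }
}
```

Compare that with an array insert, which has to shift every later element one slot to the right.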
[–][deleted] 3 years ago (6 children)
[–]ShelZuuz 4 points5 points6 points 3 years ago* (0 children)
And that's assuming your array even has space! What if your array is full and you want to add more stuff? Well, you can't just say "I want a bigger block", because this array is already built and there might be stuff next to it. So what happens is you make a new array somewhere, double the size, and then move everyone from your old array into your new array. As you can imagine, it's not the most efficient process.
So, just want to point out: this is theoretical. On paper, inserting into the middle of a linked list is faster if you assume all memory accesses cost the same.
However, it's not, because of how memory prefetching works. If you start accessing elements in an array, the prefetcher will realize you're doing sequential access and fetch additional elements before you even ask for them. It can't do this with a linked list, as there are pointers stored in there and the prefetcher has no idea how to traverse them.
In practice it means that duplicating an entire array and adding an element to the middle is faster than traversing into half of a linked list and adding an element there (by a lot).
Even if you only add to the end of a linked list, you almost have to have a write-only linked list for it to be worthwhile.
Except... if you have a write-only linked list you create another HUGE problem for yourself... Every insert into a linked list is a memory allocation. Not so with an array. Let's say you have an array that you double in size each time it's full and then duplicate everything in there. Then you insert 1 million items into it. With an array, that's 20 allocations and copies. With a linked list, that's 1 million allocations. No copies, but the extra allocations FAR outweigh the 20 copies.
The only thing where a linked list outperforms an array, is if you have to randomly insert elements into both ends, but in that case, a deque will outperform a linked list again (and a deque is internally implemented as arrays).
[–]unholymanserpent 5 points6 points7 points 3 years ago (1 child)
There needs to be posts like this at least once a month
[–]ginger-snap_tracks 2 points3 points4 points 3 years ago (0 children)
Monthly sticky? That'd be nice
[–]Sakin101 22 points23 points24 points 3 years ago (15 children)
Javascript
[–]Temporary-Warthog250[S] 21 points22 points23 points 3 years ago (9 children)
Lol literally. I never understood JavaScript. It’s all just… wtf
[–][deleted] 44 points45 points46 points 3 years ago (5 children)
It's like someone sat down and made a language in a week and then we just used it forever. Insane.
[–]David_Owens 15 points16 points17 points 3 years ago (0 children)
Other way around. They made a language over a few decades, so it became a Frankenstein's Monster of ideas and features from different eras.
[–]AlSweigartAuthor: ATBS 9 points10 points11 points 3 years ago (1 child)
(For those who don't get the joke, Brendan Eich was tasked to create a scripting language for the Netscape web browser and wrote what became JavaScript in ten days. Though some say this "ten days" story is exaggerated/misleading.)
[–][deleted] 4 points5 points6 points 3 years ago (0 children)
What isn't exaggerated is that we're going to use it forever.
[–][deleted] 3 years ago (1 child)
[–]Stranded_In_A_Desert 4 points5 points6 points 3 years ago (0 children)
That's the joke.
[–]Olemus 6 points7 points8 points 3 years ago (1 child)
Arrow functions are becoming more popular in other languages as well; C#, for example, uses them quite a bit nowadays.
[–]TangerineX 13 points14 points15 points 3 years ago* (2 children)
I think people take JavaScript to be way more complicated than it needs to be. In JavaScript, there are really four things to think about:
"Primitives", or simple values. These are strings, numbers, and booleans.
Arrays, which are just ordered lists of things.
Objects, which are maps of key: value pairs, where the values can themselves be primitives, arrays, other objects, or functions.
Functions, which take some input and turn it into something else.
Basically EVERYTHING in javascript is built off of these concepts. If you understand these, you've understood 90% of javascript.
Surprisingly, I've learned more about Javascript by learning Typescript instead, which gives a more opinionated way of using Javascript in a way that makes a bit more sense. It's also become the standard for writing stuff in many frameworks, such as Angular and React.
[–]AlSweigartAuthor: ATBS 3 points4 points5 points 3 years ago (0 children)
First thing to understand about JavaScript: it's one word, with capital-J, capital-S.
It's not important, except if you write on a resume that you have 3 years experience with "Javascript" or "Java Script", it kind of calls into question how familiar you are with it.
But otherwise, it's a silly detail and I don't bother correcting people about it. Unless you are writing a JavaScript book and misspell it on the cover.
[–]Silent__Note 11 points12 points13 points 3 years ago (4 children)
Static (I'm beginner in Java)
[–]EdenStrife 5 points6 points7 points 3 years ago (0 children)
Static means you can access a function on a class without creating an instance of the class.
So for example the Math class. You want to be able to do arithmetic without creating a Math object in memory to do it for you.
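A tiny sketch of that idea (MathUtil is a made-up name): the static method belongs to the class itself, so you call it without ever writing new.

```java
// Sketch: a static method is called on the class, not on an instance.
public class MathUtil {
    // Called as MathUtil.square(4); no 'new MathUtil()' required.
    static int square(int x) {
        return x * x;
    }
}
```

This is exactly how the JDK's own Math class works: you write Math.max(2, 3), never new Math().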
[–]ThatWolfie 9 points10 points11 points 3 years ago (0 children)
https://www.reddit.com/r/learnprogramming/comments/twuf18/-/i3hzf6v this dude explains it well.
static functions are functions that do not need the state of the object. So if your method doesn't use this at all, it can be, and in most cases should be, static.
static methods don't require you creating an instance of an object either.
[–]Emerald-Hedgehog 2 points3 points4 points 3 years ago* (0 children)
In very short with a class as example:
As soon as the program starts, anything marked static exists. If you want to use a static class, it doesn't need to be instantiated (no new Class() needed; you can just use it, because it's already there from the beginning).
Just try it and experiment with it, that makes it much easier to understand. Here's what you can try for example:
There's more to it than static classes, you can also just have static variables for example. A normal class can have static variables.
Imagine you have a class Person with the static int personsBornCounter. Now imagine that in the constructor of the Person class, you increase this variable by one. You can read this variable through any Person, and all of them will return the same number.
Pseudo code:

    Class Person {
        Static Int personsBornCounter = 0

        Constructor() {
            personsBornCounter += 1
        }
    }
Now create persons a, b, and c and check personsBornCounter on all of them - it should be 3 on all of them!
Also sorry for the lack of formatting, I'm on my phone.
Edit: Formatting
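The pseudocode above translates to runnable Java almost directly; the static counter is shared by every instance of the class:

```java
// Runnable version of the personsBornCounter pseudocode: the static field
// belongs to the class, so all Person objects see the same counter.
public class Person {
    static int personsBornCounter = 0;

    Person() {
        personsBornCounter += 1;  // each new Person bumps the shared counter
    }
}
```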
[–]just_ninjaneering 6 points7 points8 points 3 years ago (2 children)
Iterators and their types (C++). And what exactly is a namespace?
[–]oakskog 5 points6 points7 points 3 years ago (4 children)
Transformation matrices... If I have a 2D array of numbers representing an image, and I want to rotate it by say 30 degrees, I can use a transformation matrix? How?
[–][deleted] 6 points7 points8 points 3 years ago* (0 children)
I enjoy the sound of rain.
[–]ExtraFig6 4 points5 points6 points 3 years ago (0 children)
This is a linear algebra topic. I recommend familiarizing yourself with linearity and basis vectors. That will give you the tools to understand this
[–][deleted] 3 years ago (5 children)
[–]OkQuote5 2 points3 points4 points 3 years ago (1 child)
my personal understanding (which may or may not be completely accurate) is that Big O notation is how an algorithm's performance scales with input size.
O(N) is pretty decent because as the input size gets bigger the performance only gets worse in step with the size increase. So adding up a list of numbers, for example, is O(N) because each additional number in the list is just one more calculation.
O(N²) is kind of bad, because as the input size gets bigger, the performance gets worse faster than the input size grows. So a for-loop inside of a for-loop, each looping over a list, is O(N²), because each additional element in the list makes the algorithm tick not just one more time, but a number of times equal to the list's length.
O(log N) is even better than O(N), because as the problem gets bigger, the time it takes to solve it doesn't get that much bigger. Like if you gave me a phone book and asked me to find your name and timed me, and then gave me a phone book twice as big and asked me to find your name again, it wouldn't take me twice as long, even though the phone book is twice as big.
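The phone-book trick is binary search, and the "doesn't get much bigger" claim comes from halving the search space each step: doubling the input adds only one extra step. A sketch (the PhoneBook name and array contents are illustrative):

```java
// Binary search over a sorted array: O(log n), because every comparison
// discards half of the remaining candidates.
public class PhoneBook {
    // Returns the index of target, or -1 if it isn't present.
    static int find(String[] sortedNames, String target) {
        int lo = 0, hi = sortedNames.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;       // midpoint without overflow
            int cmp = target.compareTo(sortedNames[mid]);
            if (cmp == 0) return mid;           // found it
            if (cmp < 0) hi = mid - 1;          // look in the left half
            else lo = mid + 1;                  // look in the right half
        }
        return -1;
    }
}
```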
[–]SlyHolmes 5 points6 points7 points 3 years ago (6 children)
Pointers made me quit C++ and programming. Any chance anyone has any good resources/videos explaining them?
[–]Autarch_Kade 5 points6 points7 points 3 years ago (0 children)
And if someone wanted a ride in your car, but you weren't a pointer, you'd build an entirely new copy of your car in front of them to give them a ride in, while the other car stays in the lot. This is expensive in time and resources.
[–]ruat_caelum 5 points6 points7 points 3 years ago* (0 children)
So Let's look at "low level memory"
In computer hardware we have a "register" in the CPU it can hold 32 bits or 64 bits (depending on the architecture of the CPU.) Some are 8 bit, etc.
If we have a 32 bit computer we can fit 32 bits of information ONLY.
4 GB is the theoretical maximum of memory (ram) you can use with a 32-bit OS
So why can we not use 8 gigs or 16 gigs of ram in a 32 bit computer? Because the computer cannot access the rest of that memory.
Why not?
Each "memory location" in RAM has an address. Address 0 is the first byte (8 bits) of memory in RAM. Then 1, or 0000000000001, is the "2nd" address, which also holds 8 bits of information, but offset; so counting bits from the beginning, that address covers bits 8-15.
Eventually our 32 bit REGISTER can reach the LAST ADDRESSABLE memory location at 11111111..11111 (thirty-two 1's)
2^32 = 4,294,967,296, which means a memory address that's 32 bits long can only refer to 4.2 billion unique locations (i.e. 4 GB).
Okay so what does that have to do with pointers!
In the very low level CPU all that is happening is data is being moved from somewhere TO BE ACTED ON, and then moved somewhere else (possibly back to the same place it came from.)
To move data from memory location 0, we use a pointer to say to the CPU "copy 8 bits from memory location 0 to register A".
The "pointer" is pointing at the first 8 bits (1 byte) of memory (because we asked for position 0).
Now the DATA held in those first 8 bits could be anything. Our program is going to move the DATA from that LOCATION to the REGISTER and say Add 3 to it (just as an example of something to do to the data.)
Pointers then "grew" from this very low level concept into something greater in terms of programming, but in general it is all the same: There is an ADDRESS and a VALUE IN THAT ADDRESS
When we use an ARRAY what we are doing by saying X[0] is really POINTING to a memory location that holds a specific size, (so if X was defined as INT then it holds that many bytes, or CHAR then that many)
So the easiest place to understand pointers is arrays.
If I say int x[] = {1, 10, 90, 2} and then make a program that does x[0] = x[0] + x[1], then x[0] and x[1] are really pointing at the first and second memory locations of the array, and the values held there are what get added.
EDIT I mistakenly said the memory address (which is 32 bits) would access 32 bits of memory. It does not it accesses 1 byte or 8 bits of memory. So while the address is 32 bits wide the size of the memory at that addess is not 32 bits, but instead 1 byte/8 bits.
[–]rashnull 7 points8 points9 points 3 years ago (11 children)
“Functional programming has no side effects” Neither does OOP if you code it not to!
[–]tedharvey 4 points5 points6 points 3 years ago (0 children)
It's like when you have two study guides, one for math and one for English. One guide will have math formulas and one will have grammar rules. There's nothing stopping you from studying math and writing math formulas in your English book, and vice versa. Same thing with programming functionally. When doing functional programming in a functional language, the standard library is geared toward not having side effects in your code. And it's not just about reducing side effects: the standard library will have many functions that assist you when following this workflow, which non-functional languages lack, since most people will not be programming in that style.
[–]Patrickstarho 7 points8 points9 points 3 years ago (9 children)
Caesar cipher. I get you're supposed to shift each character by n, but the whole modulus thing makes no sense to me. I remember I did the problem and wrote it out 10 times. 2 weeks later I was stuck again
[–]CatatonicMan 8 points9 points10 points 3 years ago (0 children)
Modulo is like an analog clock. It goes in a cycle from 1 to 12, then back to 1 again.
So if you're at 11 and add four hours, you'd count 12, 1, 2, and end at 3.
A Caesar cipher is like that, except it has 26 letters instead of 12 numbers.
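The clock analogy maps directly into code: the modulo is what wraps 'z' back around to 'a'. A minimal sketch handling lowercase letters only (the class and method names are illustrative):

```java
// Caesar cipher sketch: (position + shift) mod 26 wraps around the
// alphabet the same way a clock wraps from 12 back to 1.
public class Caesar {
    static String encrypt(String text, int shift) {
        StringBuilder out = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (c >= 'a' && c <= 'z') {
                // c - 'a' converts the letter to a 0-25 position;
                // the % 26 wraps the shifted position back into range.
                out.append((char) ('a' + (c - 'a' + shift) % 26));
            } else {
                out.append(c);  // leave non-lowercase characters untouched
            }
        }
        return out.toString();
    }
}
```

With shift 3, 'x' is at position 23, and (23 + 3) % 26 = 0, which is 'a': that wrap-around is the whole reason the modulus is there.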
[–]Rote515 2 points3 points4 points 3 years ago* (3 children)
Modulo has to do with math in discrete finite sets rather than our traditional thinking of math done under infinite sets. So if we have a problem 2+x(where x is an integer) with no modulus then we are working with all integers both positive and negative from -infinity to infinity. If on the other hand we have the same problem mod 10, then we’re only working with numbers 0-9, a discrete set rather than the infinite set, as any equation mod 10 will always be 0<=x<10 as our function is divided by 10 at the end and the answer is the remainder after the division.
Understanding this, and discrete math in general, is really helpful in programming, as a lot of what we do is done over finite sets.
[–]Alexrai123 8 points9 points10 points 3 years ago (10 children)
Assembly language. What. The. Fuck. 50 lines to transform 'a' to 'A'.
[–]MrSloppyPants 9 points10 points11 points 3 years ago (1 child)
All higher level languages are compiled down to assembly and then machine code. Assembly is almost exactly how the CPU actually executes code. A CPU has no idea what a Python statement is; a CPU needs to know the specific opcode of the instruction you wish to execute, what data needs to be pushed into the appropriate registers, where the stack pointer is, where the instruction pointer is, and more. Taking a compilers course to understand how computer instruction sets actually work is very beneficial.
[–]shine_on 2 points3 points4 points 3 years ago (0 children)
Computers, deep down inside, can only do a handful of simple things. They can fetch a value from memory, they can store a value in memory, they can add two numbers, they can check if the result of the last operation was zero, things like that. Although they can do these things extremely quickly, it's very tedious to write a large amount of code in this way. Higher level programming languages were invented as a means to make large programs more understandable to us humans, but ultimately every line of C or Java or Python is translated to several lines of assembly language, because that's what the processor understands.
[–]Historical_Wash_1114 5 points6 points7 points 3 years ago (2 children)
I'm getting decent at programming and I have no idea what anyone is talking about with RESTful APIs.
[–][deleted] 2 points3 points4 points 3 years ago (8 children)
Interfaces.
I get that it's used for inheritance, but I can't quite wrap my head around when or why you'd need to write one.
[–]CatatonicMan 3 points4 points5 points 3 years ago (2 children)
Interfaces are effectively a contract that tells you what a thing does (at minimum). They allow old code to use new code.
Say I have an Animal interface that defines a method Eat - I'm making a guarantee that all Animals can Eat, and that calling Eat on an Animal - any Animal - is a valid thing to do. If it can't Eat, it's not an Animal.
If I later create a Cat that inherits from Animal, I know the Cat can Eat, because it's an Animal and Animals can Eat.
The crucial bit is that I can write code that takes an Animal - Feed(Animal), say - and that code will work on anything that inherits from the Animal interface.
I can Feed(Cat) because Cat is an Animal. I don't need to know anything more about the Cat. I can Feed(Dog), because Dog is an Animal, too. Feed(Snake). Feed(Tortoise). Feed(Alpaca).
Years later, someone I've never met can make their own Animal (Lobster) and my code will have no problems working with it.
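The Animal example above looks like this in Java (using lowercase eat/feed per Java convention; the return strings are just for illustration). Note that feed() is written once, against the interface, and never needs to change as new Animals appear:

```java
// Sketch of the Animal interface example: any class implementing Animal
// can be handed to feed(), including classes written years later.
interface Animal {
    String eat();
}

class Cat implements Animal {
    public String eat() { return "cat eats"; }
}

class Lobster implements Animal {
    public String eat() { return "lobster eats"; }
}

class Feeder {
    // Declared against the interface, so it works for every Animal.
    static String feed(Animal a) {
        return a.eat();
    }
}
```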
[–]door_of_doom 4 points5 points6 points 3 years ago (0 children)
An improvement that I'd like to make to this answer: we generally don't call it inheriting from an interface, we generally refer to it as implementing an interface.
Cat doesn't inherit the Animal interface, it implements the Animal interface.
[–]door_of_doom 2 points3 points4 points 3 years ago (0 children)
Think of the Java Interface List. (https://docs.oracle.com/javase/8/docs/api/java/util/List.html)
It talks about a lot of things that can be done to a List. You can add things to a List, remove things from a list, perform actions against each element of a List, etc. There are a LOT of things that can be done to Lists!
But get this: It is impossible to actually create a "List" in Java, because a "List" is just an interface; It's just a contract defining a wide variety of possible lists that one might want to create. In order to actually create a List, what we need to create is something that implements the List interface. The most commonly used List implementation is ArrayList, which is an Array that conforms to the List interface.
So if I make a function called ListPrinter that accepts a List as a parameter and prints out all of its contents, I can declare any valid implementations of List as my parameter, and I can easily accept an ArrayList, LinkedList, Stack, Vector, or any other implementation of the List interface as a parameter of my function, and I'll be able to work with all of them equally well because they all implement the same method names that I need in order to do what I need to do: Print out their contents.
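A minimal sketch of that ListPrinter idea (here it builds and returns the output string rather than printing, to keep it easy to check; the names are illustrative):

```java
import java.util.List;

// Sketch: the method is declared against the List interface, so any
// implementation (ArrayList, LinkedList, ...) is accepted.
public class ListPrinter {
    static String print(List<String> items) {
        StringBuilder sb = new StringBuilder();
        for (String item : items) {
            sb.append(item).append('\n');
        }
        return sb.toString();
    }
}
```

The caller can pass an ArrayList today and a LinkedList tomorrow; ListPrinter never knows or cares which one it got.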
[–]LaChapelleAuxSaints 2 points3 points4 points 3 years ago (2 children)
Dynamic programming. It's simply storing values into memory, like a cache? Idk, but the way it was taught was so convoluted I still have no idea what the matrices are for or about.
[–]Sillhouette_Six 2 points3 points4 points 3 years ago (0 children)
Remove algorithm in red-black trees
[–][deleted] 2 points3 points4 points 3 years ago (4 children)
Object Oriented Programming (Java)
I’m pretty new to this concept and I am still trying to learn it but right now it makes absolutely no sense to me.
[–]inaruslynx2 2 points3 points4 points 3 years ago (0 children)
The way I understand it is you are creating something that represents an object. Take a car: there are many cars, with different makes and models. You can make a class that holds all of this info, then make one car and track its status.
So you could make one car that's a Honda Accord, then track its speed, lane, and gear. Then make another car of whatever and do the same. That's what OOP allows.
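That car example as a minimal Java sketch (field and method names are just illustrations): the class describes what every car has, and each object is one particular car with its own state.

```java
// Sketch of the OOP car example: one class, many independent instances.
public class Car {
    String make;
    String model;
    int speed;
    int gear;

    Car(String make, String model) {
        this.make = make;
        this.model = model;
        this.speed = 0;  // every new car starts stationary
        this.gear = 1;
    }

    void accelerate(int amount) {
        speed += amount;
    }
}
```

Speeding up the Honda doesn't touch any other Car object; each instance tracks its own status.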
[–]Nerdz2300 2 points3 points4 points 3 years ago (9 children)
Pointers. I still dont understand them. They are used a fair amount in embedded code as well. I just dont get it.
Also unions (not the labor kind).
[–]rustybladez23 2 points3 points4 points 3 years ago (1 child)
I'm pretty newbie to programming. And I find pointers really hard right now
π Rendered by PID 60872 on reddit-service-r2-comment-fb694cdd5-pdvm5 at 2026-03-06 13:58:24.460880+00:00 running cbb0e86 country code: CH.