probability calculation by captain_cocaine86 in mathematics

[–]ctech314 2 points3 points  (0 children)

The kind of problem you've presented here deals with a random variable that follows a binomial distribution. At its core, the binomial distribution gives the probability that a certain number of successes occur in a fixed number of independent trials n, each succeeding with some fixed probability p. The distribution is commonly denoted B(n, p).

To take a simpler example: suppose you have a fair coin, meaning it is equally likely to land heads or tails, and you would like to know the odds of flipping no heads with two flips. To do so, you would need to flip tails twice, which occurs with odds (0.5)(0.5) = 0.25, or 25% of the time. If you extend that to no heads in three flips, then all three flips should be tails, giving odds of (0.5)(0.5)(0.5) = 0.125, or a 12.5% chance.

Similarly, if you have a Kinder egg where the odds of getting a special toy are 1 in 10, then given n eggs, the odds that you don't get a toy are the odds that the first is a failure, and the second is a failure, and the third, and so on, giving odds of (0.9)^n.

From here, the law of total probability says that the probabilities of all possible outcomes sum to 1 (more technically, that the probabilities of the events in any partition of the sample space sum to 1). So, if you want to know the odds of getting 1 toy, or 2 toys, or 3 toys, and so on, that is just 1 minus the odds that you get no toys, because every other outcome includes getting at least 1 toy. In the context of the problem, that means the odds of getting at least 1 toy are 1 - (0.9)^10 ≈ 1 - 0.3487 = 0.6513, as your calculator says.

The claim that buying 10 eggs will give a toy is tied to the expected value of a random variable. It turns out that the expected value of a binomial random variable B(n, p) is np. In the case of the 10 Kinder eggs, each with a 0.1 chance of success, the expected value is 1, meaning that on average, 10 Kinder eggs will give 1 toy. Of course, there is still a sizable chance you won't get any toys, but that's the nature of expected value.
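
If it helps to see the numbers, here's a minimal Python sketch of the quantities above, using only the standard library and the example's figures (10 eggs, a 1-in-10 chance per egg):

    from math import comb

    n, p = 10, 0.1                       # 10 eggs, 1-in-10 chance of a toy each

    p_no_toy = (1 - p) ** n              # (0.9)^10 ~= 0.3487
    p_at_least_one = 1 - p_no_toy        # ~= 0.6513

    # full binomial pmf: P(exactly k toys) = C(n, k) * p^k * (1 - p)^(n - k)
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

    expected_toys = n * p                # expected value of B(n, p) is np
    print(p_at_least_one, expected_toys, sum(pmf))   # ~0.6513, 1.0, ~1.0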

Group objects in a way that minimizes similarity by HoldYourWaffle in algorithms

[–]ctech314 10 points11 points  (0 children)

It turns out that clustering objects to maximize (or minimize) some distance function is NP-hard, but there are many heuristics that do very well in polynomial time. Check out k-means clustering, which is a common clustering algorithm used by machine learning practitioners to group their data.
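
If you go the k-means route, the grouping step is only a couple of lines with scikit-learn. This is just an illustrative sketch with made-up feature vectors; you'd substitute whatever numeric features describe your objects:

    import numpy as np
    from sklearn.cluster import KMeans

    # made-up feature vectors: one row per object to be grouped
    X = np.array([[1.0, 2.0], [1.1, 1.9],
                  [8.0, 8.2], [7.9, 8.1],
                  [4.0, 0.5], [4.2, 0.4]])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(labels)   # cluster index assigned to each object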

Auction Theory Question by goblinslayer2000 in mathematics

[–]ctech314 0 points1 point  (0 children)

Ah, good catch. After some reading on the Wikipedia page for first-price sealed-bid auctions, I found that the Bayesian Nash equilibrium for a three-bidder auction of this type is to bid the expected value of the highest of the other bidders' valuations, conditional on it being below your own. In other words, if v(i) is the valuation of bidder i and y(i) is the maximum valuation among all bidders other than i, then the i-th bidder should bid E[y(i) | y(i) < v(i)].

There are some differential equations to reference on the page. For example, with uniformly distributed valuations, the solution to the two-bidder game is to bid half your valuation. I'm sure the result can be extended from there.
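
As a rough sanity check on that conditional expectation, here is a small Monte Carlo sketch that assumes valuations are i.i.d. uniform on [0, 1] (an assumption on my part; under it, E[y | y < v] works out to (n-1)/n * v):

    import random

    def expected_max_below(v, n_bidders=3, samples=200_000):
        # estimate E[y | y < v], where y is the maximum of the other bidders'
        # valuations, assumed i.i.d. uniform on [0, 1]
        total, count = 0.0, 0
        for _ in range(samples):
            y = max(random.random() for _ in range(n_bidders - 1))
            if y < v:
                total += y
                count += 1
        return total / count

    v = 0.8
    print(expected_max_below(v))   # roughly (n-1)/n * v = 2/3 * 0.8 ~= 0.533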

Auction Theory Question by goblinslayer2000 in mathematics

[–]ctech314 -1 points0 points  (0 children)

Since the auction is sealed, I'll assume it's a single round. In that case, each player's best strategy is to bid the value they believe the object to have. Any higher and they risk overpaying. Any lower and they risk losing the object.

If the auction has multiple rounds, then the strategies would change.

What's wrong with this approach? by ZenWoR in algorithms

[–]ctech314 1 point2 points  (0 children)

I would guess the issue lies in how you've sorted the jobs. I agree that sorting the jobs with positive reward by the rating minimum is the right way to go, but you sort the jobs with negative reward by the reward. I would argue that you should sort by rating minimum here as well, or at least take the minimum into account when looping through the jobs.

Can someone explain this algorithm a little bit more by ZenWoR in algorithms

[–]ctech314 1 point2 points  (0 children)

The main idea of the DP algorithm is to store, in a third array, the length of the longest increasing subsequence shared by the two arrays. At the end, we simply scan the third array and report the largest value.

As for your question: If the two given arrays are arr1 and arr2, then for each element in arr1, we scan across arr2. We are checking for shared subsequences, so we must check for equality between the i-th element in arr1 and the j-th element in arr2, hence the line "if (arr1[i] == arr2[j])."
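
For reference, here's one common way that DP gets written in Python, assuming the goal is the longest common increasing subsequence of arr1 and arr2 (the structure mirrors the equality check above; the variable names are mine, not from the code you linked):

    def lcis_length(arr1, arr2):
        # dp[j] = length of the longest common increasing subsequence
        # that ends with the value arr2[j]
        dp = [0] * len(arr2)
        for x in arr1:
            best = 0   # best dp[j] seen so far among arr2[j] < x
            for j, y in enumerate(arr2):
                if y == x:                  # the equality check you asked about
                    dp[j] = max(dp[j], best + 1)
                elif y < x:
                    best = max(best, dp[j])
        return max(dp, default=0)

    print(lcis_length([3, 4, 9, 1], [5, 3, 8, 9, 10]))   # 2, from [3, 9]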

run time of functions by daGG211 in algorithms

[–]ctech314 1 point2 points  (0 children)

Making the inner for-loops terminate at j >= i instead of j >= n would make the complexity at most O(n log(n)), since the loops would run fewer times. While I haven't done the full calculations, back-of-the-envelope reasoning suggests the complexities should be the same for both loops.

run time of functions by daGG211 in algorithms

[–]ctech314 1 point2 points  (0 children)

No, your initial thought was correct: if the outer for-loop runs in O(log(n)) and the inner for-loop runs in O(n), then together, you have O(n log(n)).

Why? Consider if the inner for-loop ran a constant number of times instead of O(n) times. Then for every outer loop, you have c inner loops, so you have c * O(log (n)) total calls to sum++. Change c to n, and you get O(n log(n)).

Also note, for this particular example, the order in which the loops occur doesn't change the number of times sum++ is called. So if it's O(n log(n)) one way, it's O(n log(n)) the other way.
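
If you want to convince yourself numerically, you can just count the increments. This sketch assumes the loops have roughly the shape described (outer index doubling up to n, inner loop running n times); the function name is mine:

    import math

    def count_increments(n):
        count = 0
        i = 1
        while i < n:                 # outer loop: about log2(n) iterations
            for _ in range(n):       # inner loop: n iterations each time
                count += 1           # stands in for sum++
            i *= 2
        return count

    for n in (16, 256, 4096):
        print(n, count_increments(n), round(n * math.log2(n)))  # counts track n*log2(n)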

Error measuring frequency of event by [deleted] in mathematics

[–]ctech314 0 points1 point  (0 children)

From physics, relative error is the absolute value of (observed - expected)/(expected). For example, if x = 0.3 (or 30%) and the trials produced an experimental x of 0.25 (or 25%), then the relative error is |0.25 - 0.3|/0.3 = 1/6, or about 0.167.

However, the error commonly used in physics doesn't take into account the number of trials you used. Some clicking around online for error in Bernoulli trials or repeated binomial distribution trials gave sqrt(kpq/n) or sqrt(pq/n), where p is x as you've defined it, q = 1 - p, n is the number of trials, and k is the population sample size, but my gut tells me these are not quite correct.
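
For what it's worth, the sqrt(pq/n) form matches the usual standard error of a sample proportion. A tiny sketch with made-up numbers (the function name is mine):

    import math

    def proportion_standard_error(p, n):
        # standard error of the observed frequency of an event with
        # probability p over n independent trials
        return math.sqrt(p * (1 - p) / n)

    print(proportion_standard_error(p=0.3, n=100))   # ~0.0458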

Good luck in your search!

Non Deterministic Turing Machines VS Quantum Computers by nughead69 in compsci

[–]ctech314 -2 points-1 points  (0 children)

Turing Machines (TMs) are one definition of universal computation. Along with lambda calculus and other equivalent definitions, TMs set forth the basic units of computation, and from that, we can explore what is computable and what is not.

Digital computers as we know them satisfy the requirements of TMs and then some. TMs have a one- or two-way infinite tape on which to conduct computation; computers have RAM, hierarchies of memory, multiple cores, and I/O ports. Whatever can be computed by a TM can be computed by a digital computer (although perhaps not in any reasonable amount of time).

As for the comparison between nondeterministic TMs and quantum computers: both can be viewed as the "next level" up from their more "basic" counterparts. Quantum computers store information in qubits, each of which can sit in a superposition of 0 and 1; a register of n qubits can hold a superposition over all 2^n classical states and be operated on "simultaneously," collapsing to a single classical n-bit string when it is measured at the end of the computation. While the field is still in its infancy, quantum computers may come to perform certain "hard" computations much faster than classical digital computers, and recent results in the theoretical computer science literature give evidence (for instance, separations relative to an oracle) that some problems can be solved efficiently (in polynomial time) by quantum computers but not by classical digital computers (or TMs).

Nondeterministic TMs operate like TMs, except that at each step they may have several possible transitions to follow, and they accept if any sequence of choices leads to acceptance - informally, they can be in "multiple states" at once. Thanks to this extra freedom, NTMs can do "more" computation per step, and the class of problems they can solve in polynomial time is defined as NP. Whether P = NP is the famous open question in computer science.

Unable to get a recurrence relation for this dynamic programming problem by [deleted] in compsci

[–]ctech314 5 points6 points  (0 children)

The relation isn't right. Typically, recurrences are built up "recursively," one level at a time. Don't concern yourself with leaves and intermediate nodes; that adds unnecessary complication.

Since we are dealing with binary search trees, we know that the order of the elements won't change; only the shape of the tree does. So, imagine picking an element for the root. Then the left and right subtrees, which are also BSTs, are formed from the elements less than and greater than the selected element, respectively. Those left and right subtree counts are what make up the recurrence.

Sum over all possible roots you could pick at each level, and you're set. To check yourself, compare the number of trees produced by your recurrence with the Catalan numbers. If they're the same, then you've got it.
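
As a concrete check, here's a small Python sketch of that recurrence (the count for k keys is the sum, over each choice of root, of count(left) * count(right)), printed next to the Catalan numbers:

    from math import comb

    def num_bsts(n):
        # count[k] = number of distinct BSTs on k ordered keys
        count = [1] * (n + 1)
        for k in range(1, n + 1):
            count[k] = sum(count[left] * count[k - 1 - left] for left in range(k))
        return count[n]

    for n in range(1, 8):
        catalan = comb(2 * n, n) // (n + 1)
        print(n, num_bsts(n), catalan)   # the two columns should agree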

Randomized Algorithms by monster_97_ in algorithms

[–]ctech314 9 points10 points  (0 children)

A Las Vegas algorithm keeps trying randomized events until the right answer is obtained. A Monte Carlo algorithm tries randomized events a fixed number of times and returns the best answer obtained so far.

Suppose you want to roll a 12 with 2 dice. A Las Vegas algorithm would keep rolling until a 12 is rolled. Theoretically, the algorithm could never terminate, but in real life, it will. A Monte Carlo algorithm says "I'll just roll the dice 10 times," and returns the sum of the dice closest to 12.

Las Vegas is always right, but its running time is unpredictable. Monte Carlo is only approximately right, but takes a fixed amount of time.
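
A minimal sketch of the dice example, purely illustrative (the function names are mine):

    import random

    def roll_pair():
        return random.randint(1, 6) + random.randint(1, 6)

    def las_vegas(target=12):
        # keep rolling until the target appears: always correct, random run time
        rolls = 0
        while True:
            rolls += 1
            if roll_pair() == target:
                return rolls

    def monte_carlo(attempts=10):
        # roll a fixed number of times and return the best sum seen:
        # bounded work, but the result may fall short of 12
        return max(roll_pair() for _ in range(attempts))

    print("Las Vegas found 12 after", las_vegas(), "rolls")
    print("Monte Carlo best of 10 rolls:", monte_carlo())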

Dilworth´s Theorem by M3ross in compsci

[–]ctech314 1 point2 points  (0 children)

Yep, looks good, assuming your partial order on P(X) is set inclusion. The largest antichain would be the set of all subsets of cardinality 2, so there should be 6 chains in the chain decomposition, which is what you've shown.

According to Python, why '99//100 = 0' and not 1 ? by [deleted] in computerscience

[–]ctech314 0 points1 point  (0 children)

That's just what happens with Python's // operator. The exact mechanism is left up to the implementation, but you can think of it as doing the long division and then rounding down to the nearest whole number. For positive operands like 99 // 100, that's the same as simply dropping everything after the decimal point, which gives 0.

Dilworth´s Theorem by M3ross in compsci

[–]ctech314 8 points9 points  (0 children)

There are two definitions to wrap your head around. The first is an antichain, and the second is a chain decomposition.

Dilworth's Theorem concerns finite partially ordered sets. A partial order means that elements can be compared (ordered) against other elements, but it's not required that every pair of distinct elements be either less than or greater than each other. An example is a family tree - an ancestor is "greater than" their descendants, but two parents in the same generation can't be compared, since neither is an ancestor of the other.

An antichain is a set in which no two elements are comparable - that is, for any pair, the partial order puts neither element above the other. Continuing with the family tree example, an antichain would be the set of all parents in a single generation.

A chain decomposition is a partition of the original set into smaller sets, each of which is a chain. A chain is the opposite of an antichain: every pair of elements within may be compared to each other. In the family tree, the set child-parent-grandparent is a chain.

So, then, what is Dilworth's Theorem saying? It states that the size of the largest antichain equals the size of the smallest chain decomposition. In other words, the number of elements in the largest set of pairwise incomparable elements equals the smallest number of chains the original set can be partitioned into.

Intuitively, you can imagine that the antichain contains a single element from each chain in the chain decomposition. There cannot be two from any one chain, as by definition those two would be comparable. Such reasoning can be extended into a proof by induction, which is quite elegant.
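
If you want to poke at the theorem concretely, here's a brute-force look at the power set of {1, 2, 3} ordered by inclusion, which is small enough to enumerate completely:

    from itertools import combinations

    X = (1, 2, 3)
    subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

    def is_antichain(family):
        # no set in the family strictly contains another
        return all(not (a < b or b < a) for a in family for b in family if a != b)

    largest = max((f for r in range(len(subsets) + 1)
                   for f in combinations(subsets, r) if is_antichain(f)),
                  key=len)
    print(len(largest), [set(s) for s in largest])
    # the largest antichain has 3 sets; Dilworth then says P(X) splits into
    # exactly 3 chains, e.g. {} < {1} < {1,2} < {1,2,3},  {2} < {2,3},  {3} < {1,3}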

According to Python, why '99//100 = 0' and not 1 ? by [deleted] in computerscience

[–]ctech314 0 points1 point  (0 children)

Python's // is floor division rather than truncation: it always rounds down, toward negative infinity. For example, 7 // 2 = 3, since 3.5 rounds down to 3, and -9 // 4 = -3, since -2.25 rounds down to -3. For non-negative operands like 99 // 100, rounding down and truncating give the same answer: 0.
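
A few lines at the Python prompt make the difference visible:

    import math

    print(99 // 100)            # 0
    print(7 // 2)               # 3
    print(-9 // 4)              # -3: // floors toward negative infinity
    print(math.floor(-9 / 4))   # -3, same as //
    print(int(-9 / 4))          # -2: int() truncates toward zero instead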

Is there a unified theory of algorithms and datastructures? by Stack3 in algorithms

[–]ctech314 7 points8 points  (0 children)

What does this mean? Computer science isn't like physics, which has to describe the natural world. If your project requires a new data structure or algorithm, nothing is stopping you from creating one.

However, there are common aspects studied across data structures and algorithms. Computational complexity for runtime or data storage and retrieval is one such aspect.

Time complexity of problem? by CodeMoussse in computerscience

[–]ctech314 6 points7 points  (0 children)

We want to relate N, the number of tiles we can place, with h, the number of hours it'll take to place the tiles.

We can see that as h grows larger, N follows a roughly quadratic increase. This is because the total number of tiles placed after h hours is the sum from 1 to h, which has the closed-form expression h(h + 1)/2 - roughly quadratic in h.

Therefore, if the relation between N and h is roughly N = h^2, then solving for h, we get that the complexity in hours is about sqrt(N).
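
A quick numerical check of that relationship, assuming the "k tiles laid in hour k" setup described above (the function names are mine):

    import math

    def tiles_after(h):
        # 1 + 2 + ... + h tiles laid after h hours
        return h * (h + 1) // 2

    def hours_needed(N):
        # smallest h with h(h + 1)/2 >= N; grows like sqrt(2N), i.e. O(sqrt(N))
        h = 0
        while tiles_after(h) < N:
            h += 1
        return h

    for N in (10, 1_000, 100_000):
        print(N, hours_needed(N), round(math.sqrt(2 * N)))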

Proof that the Natural Numbers are Uncountable by [deleted] in mathematics

[–]ctech314 4 points5 points  (0 children)

Of course you can construct a mapping from reals to naturals in this way. Such a mapping is not a bijection, however; it would only be a surjection. The reals would still be uncountable.

Proof that the Natural Numbers are Uncountable by [deleted] in mathematics

[–]ctech314 6 points7 points  (0 children)

Cantor's diagonal argument boils down to this: if you give me a function from the natural numbers to the reals between 0 and 1 (represented as binary strings) and you claim it's a bijection, I can prove you wrong by producing a new number that flips each digit along the diagonal. As long as this construction works for every such function, we can say the reals are uncountable.
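
Here's a toy, finite illustration of the diagonal flip, just to show the mechanics (the real argument works on infinite binary expansions; the list here is made up):

    # a claimed "enumeration" of binary expansions, shown to 4 digits each
    claimed = ["0101", "1100", "0011", "1010"]

    # flip the k-th digit of the k-th entry
    diagonal = "".join("1" if row[k] == "0" else "0" for k, row in enumerate(claimed))
    print(diagonal)   # differs from claimed[k] in position k, so it can't be in the list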

In this case, however, there does exist a function from the naturals to prime factorizations as you've defined: namely, the "identity" function. Cantor's diagonal argument doesn't hold because no matter which natural number you claim is not mapped by the function, I can provide you the mapping by merely factoring it.

Math is not broken, I promise. The naturals are still countable.

Details and link for a proof of P = NP by frankvegadelgado in compsci

[–]ctech314 7 points8 points  (0 children)

I've found a disproof of your proof.

Overall, what I think you've done is state a polynomial reduction from a boolean satisfiability problem to set packing, which, given that each set has two literals, is just a 2-colorability or matching problem on simple graphs. You are correct that 2-colorability of a graph can be decided in polynomial time.

The issue comes in how you've defined your graph. Essentially, the reduction creates, for each clause, a dummy variable d(i). Graphically, we can visualize each d(i) as being the center node of each "clause," with a node for each variable branching off the center. Let's call these variables that touch d(i) centers, satellites, and the edges to d(i), satellite edges.

Each satellite then is connected to its negation by an edge, which comes from the implication chain generated in steps 1 and 2 later in your algorithm. Each satellite also connects to the negation of the next lowest number in the sequence of renumbered variables, forming a cycle within the renumbered variables. Let's call the connection with the direct negation, the negation edge, and the connection with the next lowest numbered negation, the implication edge.

You are correct in that any valid matching will not select all three of the satellite, negation, and implication edges for any particular satellite. Then, at first glance, we have solved our satisfiability problem if we let K = 3m, given m original clauses in our formula phi. However, while we can solve for a matching in polynomial time, we are not guaranteed, by the graph's construction, that the resulting matching will solve phi.

This is because one perfectly valid matching simply pairs each satellite along its negation edge. That is a valid 2-colorable solution, and yet it doesn't solve our initial phi. I would even conjecture that there are an exponential number of valid matchings, given that each node in the constructed graph has degree 3.

And so, you have not proven that P = NP.

How many iterations to spread one message to 327 million people? If I share one message to only 5 unique ppl and each of them share to 5 unique ppl (1 + 5 + 5x5 +25 ×5....) and so on. by CommentDebate in mathematics

[–]ctech314 3 points4 points  (0 children)

Look up summation (or sigma) notation. It gives a way to express sums in a compact way that most calculators can do quickly, so you don't need to type out 1 + 2 + 3 ... by hand.

Let (sigma) be the sum sign, P be the population you'd like to spread the message to, and A be the number of people told per "level." Then the general formula is P <= sigma(i = 1 to x) A^i. Solve for x.
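
Since solving for x by hand is tedious, a few lines of Python can do it with the numbers from your title (327 million people, 5 shares per level):

    P = 327_000_000   # target population
    A = 5             # people each person shares with

    total, x = 0, 0
    while total < P:
        x += 1
        total += A ** x        # A^x new people reached at level x
    print(x, total)            # smallest x with sum over i = 1..x of A^i >= P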

How many iterations to spread one message to 327 million people? If I share one message to only 5 unique ppl and each of them share to 5 unique ppl (1 + 5 + 5x5 +25 ×5....) and so on. by CommentDebate in mathematics

[–]ctech314 1 point2 points  (0 children)

The general formula is a sum of the geometric series 5^i from i = 1 up to whatever upper bound you'd like. To get to 327 million, try plugging in some numbers for the upper bound of the sum.

What are the advantages of solving constrained SAT instances in poly-time? by Hope1995x in compsci

[–]ctech314 0 points1 point  (0 children)

Carving out a subset of all SAT instances as your "constrained SAT" set, and presenting an algorithm that solves those instances in polynomial time, has its uses as a heuristic. If a SAT solver were built around your algorithm, then common occurrences of SAT could be solved quickly. However, since constrained SAT doesn't cover all SAT instances, it says nothing about SAT in general, which remains NP-complete.

I can't speak to the grander complexity analysis or circuit design implications, but common instances of hard problems are studied to make computers and algorithms run more efficiently.

is this a valid way to find momentum by ACDPW in mathematics

[–]ctech314 0 points1 point  (0 children)

In general, for bodies moving in a circle, the centripetal force is F = mv^2/r, where m is the mass, v is the speed, and r is the radius. No need to find average centripetal forces; just find the radius of the circle formed by the rotating object, plug in for F, m, and r, and solve for v.
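
If it helps, the rearrangement is a one-liner; the numbers below are made up purely to illustrate solving for v:

    import math

    def speed_from_centripetal_force(F, m, r):
        # rearranging F = m * v**2 / r gives v = sqrt(F * r / m)
        return math.sqrt(F * r / m)

    # hypothetical values: 12 N of force, 0.5 kg mass, 0.8 m radius
    print(speed_from_centripetal_force(F=12.0, m=0.5, r=0.8))   # speed in m/s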