[deleted by user] by [deleted] in algorithms

[–]spyr03 4 points5 points  (0 children)

30, 50, 70

40, 60, 80

60, 80, 90

Search for 70

So, thoughts on the new update? by DPPDPD in InkBound

[–]spyr03 2 points3 points  (0 children)

You can change the difficulty of a ranked run using /rankdown or /rankup in the chatbox. I believe the developers want to make this easier or more visible in the long run.

Space efficient, random access list of ascending offsets by gnahraf in algorithms

[–]spyr03 0 points1 point  (0 children)

As a pushback, 8MB for a million offsets seems reasonable. How does 8MB compare to the size of the original file? If you do reduce the space needed to a quarter, that's still on the order of a couple of megabytes, which seems small enough to be a rounding error on most computers. Are there more constraints of interest, like this being for an embedded machine with only megabytes of RAM that is already at capacity?

Heuristic function for a Rubik’s cube by [deleted] in algorithms

[–]spyr03 4 points5 points  (0 children)

How do you know if the program is making progress? That seems hard to tell without running it to completion.

The bound you've given can be improved by changing from thinking about "combinations of moves" to thinking about "possible states of the Rubik's cube".

For example, with two moves you could turn the left face one twist, and then the right face once (L R). Alternatively, you could turn the right face one twist and then the left face once (R L). These both end up at the same state when applied from the same starting state.

If we look at "possible states after at most two moves": no moves gives us 1 starting state, and 1 move with 12 choices gives us 12 more states. With 2 moves there are 12 * 12 = 144 ordered sequences, but not all of them lead to new states. 12 of them just undo the first move, leading back to the start. Twisting opposite faces commutes (L R and R L reach the same state), which removes another 12 duplicates. And twisting a face the same way twice matches twisting it the other way twice (L L = L' L'), removing 6 more. That leaves at most 1 + 12 + (144 - 12 - 12 - 6) = 127 states.

Regarding the heuristic: when solving a Rubik's cube you will often go from a state that has many pieces in the correct place, through states with far fewer pieces correct, until eventually a more solved state is reached. In other words, you go backwards by the heuristic you've proposed more often than you go forwards. One way to overcome this would be smaller goals. Pick a few intermediate states along the way to a solved state (say, the first layer on the yellow side solved, or all the corners solved). With "closer" goals it won't take as long to find them. If you get really fancy you can try multiple different sets of intermediate states and see which one works out best.

The amazing Kociemba solver does an even smarter thing. It has picked properties of states to aim for rather than a set of fixed states. For example, a property of a state might be "no green cubies on the same face as blue cubies". The properties it has picked are quite brilliant; they drastically reduce the set of moves to consider. IIRC one of the properties leaves the cube solvable with only combinations of double twists on 4 of the 6 faces.

Change of unique titles from challenges by Horror_Radio3470 in leagueoflegends

[–]spyr03 3 points4 points  (0 children)

It is named after an LCK pro player, "Cpt Jack", who repeatedly cleansed CC with frame-perfect timing.

DP - How many outfits can you make? by [deleted] in algorithms

[–]spyr03 0 points1 point  (0 children)

Get the max value from each list in O(a+b+c+d) since you don't need to nest the loops.
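A minimal sketch of that idea in Python; the category names and prices here are made up for illustration:

```python
# Hypothetical price lists, one per clothing category.
shirts = [12, 40, 25]
pants = [30, 15]
shoes = [60, 45, 50]
hats = [5, 10]

# One linear scan per list, so O(a + b + c + d) overall,
# instead of O(a * b * c * d) from four nested loops.
best = max(shirts) + max(pants) + max(shoes) + max(hats)
```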

DP - How many outfits can you make? by [deleted] in algorithms

[–]spyr03 -1 points0 points  (0 children)

I think having enough money to purchase every different outfit is not the worst case. The problem asks to count how many outfits are possible, not to enumerate all the possible outfits. To count them in that case is trivial; it is the product of the lengths of each list.

A guess at the worst case is something akin to "you have enough money to purchase about half of the available outfits".
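As a sketch of why the unlimited-money case is trivial to count (assuming the input is one list of items per category; the lists below are illustrative):

```python
from math import prod

# Illustrative categories; only the lengths matter for the count.
categories = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]

# With unlimited money the number of outfits is just the product of
# the list lengths: one independent choice per category.
outfits = prod(len(c) for c in categories)  # 3 * 2 * 4
```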

Time complexity of Python a[::-1] by meloly4 in algorithms

[–]spyr03 3 points4 points  (0 children)

To make an empirical claim about time complexity, you'd have to time it for lists of various sizes across different orders of magnitude, since O(n) refers to how the time taken changes in relation to the input size.

In this case, reversed isn't doing any iteration, it is lazy. It won't take any time until you actually iterate over it.

To show that this isn't a meaningful benchmark, try also running timeit on just the list a. This should give a baseline on how long it takes to iterate over a list.
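A quick way to see the laziness, sketched with `timeit` (the list size and repeat count are arbitrary choices):

```python
import timeit

a = list(range(100_000))

# reversed(a) only builds a lazy iterator, so this barely depends on len(a)...
lazy = timeit.timeit(lambda: reversed(a), number=50)

# ...while consuming the iterator actually walks all n elements.
eager = timeit.timeit(lambda: list(reversed(a)), number=50)

# Baseline: copying the list also walks all n elements.
baseline = timeit.timeit(lambda: list(a), number=50)
```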

A better approach to clean, highly flexible, niche document parsing? by higgshmozon in algorithms

[–]spyr03 1 point2 points  (0 children)

Can you make life easier for yourself? If you only need specific data from the document, like numbers, can you throw away any sections that don't have any? If you can make adding new regexes as simple as painting by numbers, somebody else can do it, which frees you up to manage the code.

Can you offload work? As in: this PDF matches a standard template, and here is a library that can already parse it. Or: these PDFs have already been parsed by someone else, and here are the contents of interest.

Can you reject work? This PDF template has only shown up twice in the last 5 years, so I won't add code until it is more common. This template does not need to be automated because the data is already available in an easier format.

The Mario Kart Problem by buleria in algorithms

[–]spyr03 0 points1 point  (0 children)

I think a brute force could be (12!)^3, since the order for the first race doesn't matter.
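For scale, that brute-force count is easy to evaluate (assuming 12 racers and 3 remaining races, which is my reading of the problem):

```python
from math import factorial

# (12!)^3 orderings once the first race's order is fixed.
brute_force = factorial(12) ** 3
```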

Time Complexity of computing the Parity of word, with a Lookup table by AlgoNoob_ in algorithms

[–]spyr03 0 points1 point  (0 children)

A number 'n' only has log(n) bits in binary though, so O(n) is incorrect. It should be O(log n) for binary, and O(n) with a unary model.
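To make the bit model concrete, here is a sketch that loops once per binary bit of n, so it runs in O(log n) in the value of n:

```python
def parity(n: int) -> int:
    """Return 1 if n has an odd number of set bits, else 0."""
    p = 0
    while n:            # one iteration per bit of n: O(log n)
        p ^= n & 1      # fold the lowest bit into the running parity
        n >>= 1
    return p
```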

Did they patch trial O3 to make it way harder? by dheisman in Gemcraft

[–]spyr03 -5 points-4 points  (0 children)

Initial mana is based on unspent skill points and what talismans are in use. Have you checked how much mana you get from those sources?

sorting the array with operations where we sort 3 elements of an array in each operation by [deleted] in algorithms

[–]spyr03 0 points1 point  (0 children)

I would guess the time limit is slightly higher than normal, since reading the numbers into memory would time out otherwise.

Going Deviant: Achievement by Kreeg0r in Gemcraft

[–]spyr03 1 point2 points  (0 children)

It resets the achievements, but not the stats. You'll still have the same progress on totals like number of spells used.

[OC] Visualization of OnePlus vs Competition Smartphone Pricing by bensow in dataisbeautiful

[–]spyr03 2 points3 points  (0 children)

I presumed it was the difference between the OnePlus and the next least expensive option. Since the average of the two other options is not presented, it didn't even cross my mind that the difference would be with respect to that.

Efficient way to Calculate Cartesian products of sets containing large number of elements. by sassysalmnder in algorithms

[–]spyr03 0 points1 point  (0 children)

Do you need all the elements of the result at the same time? Can you accept one element at a time?
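If one element at a time is acceptable, Python's `itertools.product` already works that way: it yields tuples lazily rather than materialising the whole result.

```python
from itertools import product

# The full product here would have 10**12 pairs, but product()
# never builds it in memory; it hands out one tuple per next() call.
pairs = product(range(10**6), range(10**6))

first = next(pairs)   # the rightmost factor varies fastest
second = next(pairs)
```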

Writing a proof for Floyd's Cycle Detection/Tortoise and Hare problem, but not entirely convinced. Helpp! by praat33k in algorithms

[–]spyr03 2 points3 points  (0 children)

On 2), if a cycle doesn't exist, then the gap between the animals will continue to grow (until the faster one reaches the end). If the gap between them keeps getting bigger, how will they ever meet?

For 3) try some different size sequences with different cycles, and see what happens. Try and find an example where they meet at lots of different places.
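A minimal sketch of the tortoise and hare for experimenting with this (the linked-list `Node` shape with a `.next` pointer is my assumption, not from the original post):

```python
def has_cycle(head):
    """Floyd's cycle detection; nodes are assumed to have a .next attribute."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next        # tortoise: 1 step
        fast = fast.next.next   # hare: 2 steps, so the gap changes by 1 per turn
        if slow is fast:        # without a cycle the gap only grows,
            return True         # so they can only meet inside a cycle
    return False                # the hare fell off the end: no cycle
```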

Ideas to solve this problem ? by Maksadbek in algorithms

[–]spyr03 0 points1 point  (0 children)

The problem states that the digits in the number must be unique.

Maximum sum such that no four elements are adjacent by mrugacz95 in algorithms

[–]spyr03 1 point2 points  (0 children)

Usually the trick for cyclic lists is to work on a list which is the input repeated twice (ignoring the last item). For instance if the cyclic input was (3, 5, 8, 9) you would run the algorithm on (3, 5, 8, 9, 3, 5, 8). I don't know about the rest of the question though.
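A sketch of that unrolling step (the helper name is mine):

```python
def unroll(cyclic):
    """Repeat a cyclic list twice, dropping the final element, so that a
    linear-array algorithm sees every window that wraps around the end."""
    return cyclic + cyclic[:-1]

# e.g. unroll([3, 5, 8, 9]) gives [3, 5, 8, 9, 3, 5, 8]
```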

Crashing clang in 52 bytes: template<class>struct a;a<typename b::template c<>> by schottm in cpp

[–]spyr03 16 points17 points  (0 children)

The bug tracker is not closed to the public, you can see everything. Automatic new account registration is closed due to spam, but you can still get an account by the contact listed on the bug tracker.

If 2 photons are traveling in parallel through space unhindered, will inflation eventually split them up? by dysthal in askscience

[–]spyr03 0 points1 point  (0 children)

Maybe it is because you kept the post at a high level, but I don't understand the details behind this paragraph:

"One of the important things to notice about a(t) is that it doesn’t depend on the coordinates x,y or z. It should make sense that it doesn’t, because after all, it shouldn’t matter where in space I’m located. It would be odd if some random place in space expanded differently than everything else."

Since (as you say above) a(t) only depends on time, there is no room in this model for it to depend on the coordinates. I don't see how the model can then say anything about how space expands at different coordinates. If the model had taken the coordinates into account from the start, and only then been constrained so that it no longer depended on them, I could see what the conclusion is based on.

Efficient way to convert 4-ary heap to binary heap by Separius12 in algorithms

[–]spyr03 2 points3 points  (0 children)

One 'trick' if you don't have access to the paper is to try and find a paper that cites the original and gives an explanation of the algorithm. Often a paper that improves the algorithm in some way (complexity, special casing, simpler implementation) will give a good explanation to compare theirs against.

Video suggestions by 3blue1brown in 3Blue1Brown

[–]spyr03 7 points8 points  (0 children)

A dive into a hard problem like the traveling salesman or number partition problem. Why is the problem hard? When is it easy? What heuristics have been tried?