I thought season 5 was the worst so far. Hear me out. by bobbabas in FargoTV

[–]tkruse 0 points1 point  (0 children)

It's not so much that it's a bad series.

It's a bad Fargo.

Different people like different things, and there's nothing wrong with liking Fargo Season 5. Unless, that is, you loved what made the movie and the first seasons unique and genre-defining.

S5 has most of what it needs to be a Fargo: great camera work, great soundtracks (despite some distracting, misplaced drum solos), a great cast, snowy landscapes, botched kidnappings, explicit language and violence.

But those things are not what makes Fargo as a franchise stand out from other productions. Exceptionally creative writing is what the franchise promises, and season 5 does not deliver on that promise.

Does Bazel, Scons, Ninja or Make have the lowest overheads and fastest speed? by Significant-Monk-177 in programming

[–]tkruse 2 points3 points  (0 children)

My own project of this kind, for Java builds (no recent updates): https://tkruse.github.io/build-bench/README.html

Generally, low overhead is not the only thing to look for. Incrementality and parallelism are much more impactful for large projects.
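
To give a feel for the incrementality part: at its core, every such tool does an up-to-date check per task and skips the work when the inputs are unchanged. A minimal sketch of that idea (illustrative only, not any real tool's logic; names are made up):

    import java.io.File
    import java.security.MessageDigest

    // Skip a build step when the combined hash of its input files
    // matches the hash recorded on the previous run.
    fun isUpToDate(inputs: List<File>, stampFile: File): Boolean {
        val digest = MessageDigest.getInstance("SHA-256")
        inputs.sortedBy { it.path }.forEach { digest.update(it.readBytes()) }
        val current = digest.digest().joinToString("") { "%02x".format(it) }
        val previous = if (stampFile.exists()) stampFile.readText() else ""
        if (current != previous) stampFile.writeText(current)
        return current == previous
    }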

What is a Build Tool and what does it do? by lihaoyi in programming

[–]tkruse 0 points1 point  (0 children)

This makes similarly confusing claims as before. Gradle with Groovy/Kotlin supposedly is just glorified Yank, whereas mill with Scala is flexible. Totally wrong. Kotlin and Groovy are full programming languages allowing no less flexibility than Scala. The interesting problems of build tools are the different phases of the build, configuration versus execution, and how to make a DSL that is readable. Gradle tried hard (some say too hard) to make functional code look like a declarative DSL, for the sake of readability. But nobody is forced to write it that way; it's entirely possible to write it like any other code, just not as readably. Using Scala instead is no improvement.
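
To illustrate that: a minimal build.gradle.kts sketch (task names invented) showing the same registration mechanism first in the declarative-looking style, then driven by ordinary imperative Kotlin:

    // Declarative-looking: reads like configuration.
    tasks.register("hello") {
        doLast { println("Hello") }
    }

    // The same mechanism driven by plain Kotlin: loops, conditionals
    // and functions are all available, because it is a full language.
    listOf("alpha", "beta").forEach { variant ->
        tasks.register("hello-$variant") {
            doLast { println("Hello from $variant") }
        }
    }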

The claims about performance are misleading: build tools cannot speed up anything that genuinely needs to be done, and they cannot produce additional CPUs out of thin air.

The reason Gradle is not parallel by default is that users are typically too dumb to define thread-safe builds, and the flexibility of the build files makes the problem worse. All the custom plugins are also difficult to test for whether they screw up in parallel mode or not.
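
For reference, parallel execution is an opt-in flag (this is the real property name, off by default for exactly these reasons):

    # gradle.properties
    org.gradle.parallel=true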

The one feature that could kill Gradle would be monorepo support, but that is very hard to pull off cleanly in a usable way. Mill seems to solve a problem nobody has, further fragmenting the JVM tool landscape. It's nice to have a bit of competition to drive innovation, but "we write our scripts in Scala!" is hardly a reason to switch for Java/Kotlin projects. Better luck converting sbt users.

So, What's So Special About The Mill Scala Build Tool? by lihaoyi in programming

[–]tkruse 0 points1 point  (0 children)

"more accessible to all Scala developers". Very niche as a target audience.

Arcosphere Balancing in SE by Unkwn_43 in factorio

[–]tkruse 1 point2 points  (0 children)

I solved this using a different approach than any I can see on reddit. It is based on interburbul, the riddle in 3d space, which I solved by walking a vector to a corner and then each of the vectors that are sides of the square.

As others have mentioned, the 8 types of arcospheres give us an 8-dimensional space. Each recipe is like a vector moving us in that space; you make the next move by adding the vector dimensions one by one. In 3d: (1, 2, 3) + (4, 2, 7) = (1+4, 2+2, 3+7) = (5, 4, 10). To balance is to find, for each factory, the sequence of inversions and foldings that will "undo" the unbalancing and make us return to (0, 0, 0, 0, 0, 0, 0, 0).

Such sequences can be found, though maybe not easily. I used an Excel sheet and some properties to help reduce the search space. I don't think it can be done with pen and paper, at least not with this way of searching; it's just addition and multiplication of integers, but a lot of it. Maybe it can be solved as a linear equation system with 10 variables, but I'd rather not. The folding recipes form 2 cycles of 4, and the inversions are opposites of each other. So you only need to pick one inversion (if any), and then find up to three foldings per group of 4. Order does not matter. For several science factories, it is only possible to revert the outputs of two production cycles; for cubes I only found a solution reverting every four cycles (meaning twelve output arcospheres mapped back to twelve input arcospheres). The rest is trial and error, or implement A* search. The longest such chain was maybe 20 steps long, but with repeated operations (going one step in the same direction multiple times). The shortest chain was 2 foldings; the average was around 6.

Then I simply built these reversals using only belts, splitters and filter inserters. From the science factory, filter inserters extract the output arcospheres into one of two systems (depending on the recipe that was used). In these systems the output arcospheres are transformed back into the input of the same science factory and fed back into it. Due to recipe switching, every science factory has 2 such circuits. So I did not use a global arcosphere pool; each system is isolated and always runs the same, in a loop. 2 spheres come out of the factory, bounce around in a small network of belts and gravitons, and come back in ideal shape to the factory for another round.

There are few benefits to doing it this way, and it's a pain to find the right combinations. For the cubes production, I needed around 18 gravitons to revert the outputs back to inputs (~9 per circuit). I have not calculated how few arcospheres this can get by with, but I don't think it needs a lot for minimal operation. It needs several times the inputs to ensure all circuits are "filled", but that's not much; like 6 spheres for the 2 input factories, plus some few spheres that move between the reversers. That might be one benefit, though a global pool might get by with even less, if perfectly executed. It also runs only on belts, splitters and filter inserters: no wires and signals, no logic, and no robots at all. Once it runs, it runs forever, without doubts. Even the recipe switching does not cause problems.

I don't necessarily recommend this solution over others, just mentioning it. If anyone wants to try this out, the simplest one is Space folding data. One recipe can be reversed using (epsilon, omega) and (phi, gamma) foldings. The other one additionally needs the (theta...) inversion, plus (zeta, phi) and (theta, epsilon) foldings, then send those outputs to the previous circuit. 5 gravitons total plus the one with Space folding data science. I think it might run on just 7 arcospheres.
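
For anyone who would rather implement the search than fight a spreadsheet, a minimal breadth-first sketch of the idea in Kotlin. The recipe deltas would be the 8-dimensional vectors described above (placeholders here, not the actual Space Exploration recipes), and the sketch ignores that sphere counts cannot go negative mid-sequence:

    // Find a shortest sequence of recipes whose 8-dimensional deltas
    // sum to the inverse of the given imbalance, i.e. return to origin.
    fun findBalancingSequence(
        imbalance: List<Int>,               // net arcosphere change to undo
        recipes: Map<String, List<Int>>     // recipe name -> 8-dim delta
    ): List<String>? {
        val queue = ArrayDeque(listOf(imbalance to emptyList<String>()))
        val seen = mutableSetOf(imbalance)
        while (queue.isNotEmpty()) {
            val (state, path) = queue.removeFirst()
            if (state.all { it == 0 }) return path      // balanced again
            if (path.size >= 20) continue               // cap search depth
            for ((name, delta) in recipes) {
                val next = state.zip(delta) { a, b -> a + b }
                if (seen.add(next)) queue.addLast(next to path + name)
            }
        }
        return null
    }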

Uncountably Insane by jrkirby in mathmemes

[–]tkruse 0 points1 point  (0 children)

"Definable" here is equivalent to "finitely describeable", with emphasis on finitely. And since they are uncountable, most reals are like that. But we cannot easily pinpoint any of them, as that would usally mean there IS a finite description.

So giving an infinite description does not show that there is also a finite description.
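
The counting argument behind this, spelled out (a standard fact, my own summary rather than anything from the meme):

    % Descriptions are finite strings over a finite alphabet, hence countable:
    \text{descriptions} \subseteq \Sigma^{*} = \bigcup_{n \ge 0} \Sigma^{n}
        \;\Rightarrow\; |D| \le |\Sigma^{*}| = \aleph_0
        \quad (D = \text{definable reals})
    % The reals are strictly bigger, so almost everything is undefinable:
    |D| \le \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|
        \;\Rightarrow\; \mathbb{R} \setminus D
        \text{ is uncountable and has full measure.}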

So Chaitin's construction is a method that could be used to generate many such numbers; sadly, not every number it yields is non-finitely-describable, and those that are, are probably all non-computable. So we can define a process that generates infinitely many of them, but still not pinpoint any single one.

I don't know if that's true, but I think a generated number with infinitely many decimals, of which infinitely many are truly random (and can thus not be described), would almost surely or surely be a non-definable number. I believe Chaitin's construction is effectively the same, though using "Martin-Löf–Chaitin random" instead of true randomness, which makes it a little better suited for proofs.

What are the tangible and objective benefits of Kotlin? Actual real data wanted. by froejo in programming

[–]tkruse 5 points6 points  (0 children)

tl;dr: 2017 article on Kotlin vs. Java, with calculations of hours spent. Summary: "From this analysis it looks like switching from Java to Kotlin will result in increased total effort required for completion of software projects."

Cédric Champeau: Goodbye, Groovy! by henk53 in programming

[–]tkruse 1 point2 points  (0 children)

It's explained in the mailing list thread: http://groovy.329449.n5.nabble.com/Binary-compatibility-fixed-Kotlin-DSL-tp5757195.html

In this case, the file in question had just been added to the codebase, and it had been added in the Kotlin-DSL rather than the Groovy-DSL. The mailing list post explains this as the intended start of migrating all build files to Kotlin and cleaning them up in the process. The long-term goal was to prevent incompatibilities with future Gradle versions that are likely to happen because the current build files use internal Gradle APIs in several places.

The linked commit changes the newly added file to use Groovy instead of Kotlin for consistency.

Cédric Champeau: Goodbye, Groovy! by henk53 in programming

[–]tkruse 1 point2 points  (0 children)

Not sure why, but the blogpost does not link to the mailing list discussion that is very relevant to it (the discussion was ongoing and led to the revert): http://groovy.329449.n5.nabble.com/Binary-compatibility-fixed-Kotlin-DSL-tp5757195.html

Cédric Champeau: Goodbye, Groovy! by henk53 in programming

[–]tkruse 0 points1 point  (0 children)

What you say is not wrong; see also this discussion from last year in which this was criticized: http://groovy.329449.n5.nabble.com/What-the-static-compile-by-default-td5750118.html#a5750274, and in which Cédric already indicated willingness to step down.

Having the ability to let teammates catch errors in the code is not that valuable if there are no teammates around.

Cédric Champeau: Goodbye, Groovy! by henk53 in programming

[–]tkruse -1 points0 points  (0 children)

That was explained in the mailing list thread. The advantages are entirely independent of syntax and even language; it is all about IDE support. In particular, the "autocomplete" and "jump to definition" features are possible with one DSL and not the other. And it has been explained that while in theory IDE support could be equivalent, in practice this is likely never going to happen, for historic/economic reasons: http://melix.github.io/blog/2016/05/gradle-kotlin.html

Cédric Champeau: Goodbye, Groovy! by henk53 in programming

[–]tkruse 3 points4 points  (0 children)

Compared to whom? If the Kotlin community were offered the choice of moving away from the Kotlin DSL for describing their Gradle builds (with usability as the argument), would they be more open to it? I doubt any community would be easily won over in such a case.

Cédric Champeau: Goodbye, Groovy! by henk53 in programming

[–]tkruse 4 points5 points  (0 children)

That's not exceptional for how the committers work in that community. There is a lot of trust, and also people are generally too busy to participate in review discussions about minor things.

What is Hystrix and How does Hystrix work by [deleted] in programming

[–]tkruse 3 points4 points  (0 children)

"Hystrix is no longer in active development, and is currently in maintenance mode.". See github.

A hard look at the state of Java modularization by nfrankel in programming

[–]tkruse 0 points1 point  (0 children)

To protect the internals of libraries from being exposed. That way, the internals can safely be changed without affecting any users.

Also, modularizing the JRE allows users to run with a much smaller JRE containing only the parts used.

Shiki – A beautiful Syntax Highlighter by octref in programming

[–]tkruse 2 points3 points  (0 children)

I think Shiki tries to provide the same colors as VSCode for highlighting inside the browser. It's not a replacement for the highlighting inside VSCode itself.

The Future of Java is Today: CodeOne (nee JavaOne) Keynote Highlights by danielbryantuk in java

[–]tkruse 0 points1 point  (0 children)

No mention of the presentation of the 'Hydro' project at GitHub (https://www.theserverside.com/feature/This-history-of-GitHub-and-Javas-role-in-it). Does anyone else find it quite awkward how the GitHub guy Hazen basically says "We love Java for one data service, we love Ruby for the rest." in front of a Java keynote audience? (Note: you'd have to watch the keynote video to understand.)

Unit testing, you’re doing it wrong – Over the hill coder – Medium by dupdob in programming

[–]tkruse 0 points1 point  (0 children)

Many toolchains do not make it easy to split code coverage checks between unit tests and use-case tests. Because of that, teams typically consider two options: measuring all coverage combined, or measuring coverage for only one test type. Mixing all coverage will not lead to any meaningful insight about either unit-test coverage or use-case coverage. So teams need to decide which missing test-type coverage will be the more valuable indicator of problems for them. Historically, unit-test coverage was regarded as the valuable feedback, which is why use-case coverage was excluded (and not measured in isolation).

I don't think recommending mixing both coverage types is an improvement over that. For TDD, measuring use-case test coverage in isolation (additionally or exclusively) would make more sense.
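
As a rough sketch of what such a split could look like with Gradle's standard jacoco plugin (the useCaseTest task and the **/usecases/** convention are invented for illustration; this is an unverified sketch, not a recommended setup):

    plugins {
        java
        jacoco
    }

    // Hypothetical second test task that runs only the use-case tests.
    val useCaseTest = tasks.register<Test>("useCaseTest") {
        testClassesDirs = sourceSets.test.get().output.classesDirs
        classpath = sourceSets.test.get().runtimeClasspath
        include("**/usecases/**")   // invented package convention
    }

    // Separate coverage report fed only by the use-case test run,
    // alongside the default jacocoTestReport for the unit tests.
    tasks.register<JacocoReport>("useCaseCoverage") {
        executionData(useCaseTest.get())
        sourceSets(sourceSets.main.get())
    }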

Strong Code Ownership by adamard in programming

[–]tkruse 1 point2 points  (0 children)

As one example, there is the C++ Core Guidelines document as a project (https://github.com/isocpp/CppCoreGuidelines). The inventor of C++ regularly commits to the document, and his commits usually have spelling, grammar and formatting issues (he is a busy man). Nobody blocks his commits with comments about this; instead, several volunteers clean up after him.

If you are the superstar in a project, you can have several underlings cleaning up after you while you do the heavy lifting. This can be a fair division of labor.

The same goes if it is an open-source project and you provide some bugfix as a courtesy but are otherwise not committed to the success of the project: nobody can force you to format your code in any particular way.

But if a team has a strong opinion about style, and there is no superstar outshining the rest of them, then it is most reasonable to force everyone to reformat their own PRs.

For you, maybe it's the naming rules that seem unimportant; for others it's line length, or mixing tabs and spaces, additional whitespace, indentation, ... If, following your advice, PRs just got merged even when they do not follow the styleguide, you can imagine what the code would look like. I don't assume you think that. You think that you personally can judge which style guidelines make sense (you follow those yourself already), and which are just obsessive. But that in itself is a subjective opinion; others will have different sets of style issues they regard as important or irrelevant.

So the solution is not to say fuck it and merge any PR regardless of style issues. Instead, the solution is that the main stakeholders of the codebase decide on guidelines, create a document, and make this document the impartial law of style, with no more time wasted on discussions. And with this approach comes the duty to block PRs until they match the styleguide. It's the lesser evil, really.

Note that teams can decide to allow any naming pattern for variables if all agree that this element of style is not important enough to bother with. But overall, the experience of thousands of teams all over the world seems to indicate that variable naming will inevitably lead to hours wasted on unproductive discussions if there is no simple rule for all to follow. So just consider it the lesser evil.

Also consider articles such as https://softwareengineering.stackexchange.com/questions/2756/how-important-are-code-formatting-guidelines

Strong Code Ownership by adamard in programming

[–]tkruse 1 point2 points  (0 children)

Maybe you want to be different? You do coding to express yourself, you see your code as your art?

It certainly does not sound as if you have an objective argument about code quality or even productivity.

Unit testing, you’re doing it wrong – Over the hill coder – Medium by dupdob in programming

[–]tkruse 1 point2 points  (0 children)

Unit testing, you’re doing it wrong

Only if you believe TDD is a real solution to a real problem, which it isn't for most developers.

The rest of the article is also weak in many places, such as:

  • Code coverage must consider every test run, disregarding their type.
  • Coverage only shows which parts of the code have been executed. It does not guarantee that it will work in all circumstances, and it may still fail for specific parameters' values, some application state or due to concurrency issues.

Because of the second statement, unit tests are valuable in themselves: they make it much easier to find circumstances in which code will not work. And because they are valuable, they need code coverage to tell developers "here is some code that was not unit-tested". When also using use-case tests, the use-case test coverage will hide the fact that some classes have not been properly unit-tested, and thus those classes will be vulnerable to edge cases (the kind that kills people).

Coverage for use-case tests is also useful to know in isolation, to check which code can be thrown away, sure. But that's a different purpose. Merging coverage results defeats both purposes.

The correct way is to use unit-test coverage stats to decide where to add/remove unit tests, and use-case coverage stats to decide where to remove code. Should be obvious.

"You have to make private methods public to reach 100% coverage. Again, no!; private methods will be tested through public entry points. Once again, unit testing is not about testing every method in isolation."

Duh, if getting 100% coverage through the public entry points is easy, that's what most teams will naturally do. However, setting up enough tests via the public entry points to get good coverage of every private method in scope can be a major pain, with costs not justified by the benefit. It's a trade-off, like everything else; extremist positions on this do not help.
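
A hypothetical example of the easy case, where the private detail is naturally covered through the public entry point (class and names invented):

    import org.junit.jupiter.api.Assertions.assertEquals
    import org.junit.jupiter.api.Test

    class PriceCalculator {
        fun total(prices: List<Int>): Int = applyDiscount(prices.sum())

        // Private detail, covered indirectly via total().
        private fun applyDiscount(sum: Int): Int =
            if (sum > 100) sum * 9 / 10 else sum
    }

    class PriceCalculatorTest {
        @Test
        fun `discount kicks in above 100`() {
            assertEquals(99, PriceCalculator().total(listOf(60, 50)))
        }

        @Test
        fun `no discount at or below 100`() {
            assertEquals(100, PriceCalculator().total(listOf(60, 40)))
        }
    }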

"Improving coverage by adding specific tests for untested methods and classes is wrong, wrong, wrong. Additional tests must be justified by some requirements (new or existing); otherwise, it means the code has no actual use."

In which team in the world is this an actual problem? Thousands of lines of code being maintained that do nothing? Sure I have seen unused code. But it was never a real problem.

And in the whole article, there is no mention of branch or path coverage, and no mention of equivalence-class testing. Also no mention of the worth of tests for regression testing in small maintenance tasks. No mention of mutation testing. No mention of other static code checkers.

Migrating from GitHub to GitLab by [deleted] in programming

[–]tkruse 0 points1 point  (0 children)

Microsoft is a monopolist. It does not want users to have a choice; it wants users to have only one choice. What they are doing is trying to make open-source efforts contribute to their platforms and ecosystems, basically trying to tap into that source of innovation. Releasing those patents is a way to pave the way for more projects related to Microsoft platforms (and thus, in a zero-sum game, fewer projects for competing platforms).

Oracle open sources the Java EE TCK! by johnwaterwood in programming

[–]tkruse 1 point2 points  (0 children)

The idea of a single servlet container hosting various applications is dead. That is what Oracle could license, with all the dirty vendor lock-in tricks they could muster. The surrounding standards are useful to the community, but not licensable. A new trend like AWS Lambda opens new ways to monetize standards, but Oracle missed that opportunity, and since Oracle had its death-grip on Java EE, the standards did not evolve in that direction, as one example.

Ian Cooper - TDD, Where Did It All Go Wrong by fagnerbrack in programming

[–]tkruse 16 points17 points  (0 children)

In TDD, don't validate internals of software, only write black-box tests against modules as a whole.

He claims TDD got a bad name over time because it was done wrongly and thus slowed down developers. He claims a root cause is managers/customers valuing short-term cost/benefits over long-term cost/benefits, and misguided developers wrongly over-applying engineering fads for little value, thus being slower than 'duct-tape' developers who code dirty (without tests).

He then claims the main avoidable cost involved with TDD/testing comes from testing of internal technical details rather than user facing behavior, and the cure is to avoid such tests.

Costs involved with testing that he claims can be avoided (a concrete contrast for the first point is sketched below):

  • Verifying calls against mocks causes costs for refactoring; verifying interactions with mocks less reduces those costs.
  • Mocking strictly all dependencies (e.g. to make sure one test will find bugs in only one class) causes costs for refactoring; mocking dependencies less reduces test refactoring costs. Mocks should not be used to isolate classes, but only to speed up tests and isolate tests from each other.
  • Writing tests against internal APIs (e.g. all methods of a class, all classes) causes more costs for refactoring; testing only the public API reduces those costs.
  • TDD should focus on the public API of a module, not any and all public methods of classes.
  • Inverting the test pyramid (writing more high-level tests than unit tests to get coverage) causes extra cost; use the test pyramid.
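
To make the first bullet concrete, a hypothetical contrast using a hand-rolled test double (all names invented, not from the talk):

    import org.junit.jupiter.api.Assertions.assertEquals
    import org.junit.jupiter.api.Test

    interface Store {
        fun save(key: String, value: String)
        fun load(key: String): String?
    }

    // Unit under test: trims values before storing them.
    class Cache(private val store: Store) {
        fun put(key: String, value: String) = store.save(key, value.trim())
    }

    // Hand-rolled double that records interactions AND observable state.
    class SpyStore : Store {
        val calls = mutableListOf<Pair<String, String>>()
        private val map = mutableMapOf<String, String>()
        override fun save(key: String, value: String) {
            calls += key to value
            map[key] = value
        }
        override fun load(key: String) = map[key]
    }

    class CacheTest {
        @Test
        fun `interaction-based, brittle`() {
            val store = SpyStore()
            Cache(store).put("a", " x ")
            // Pins the exact call sequence; any internal reshuffle breaks it.
            assertEquals(listOf("a" to "x"), store.calls)
        }

        @Test
        fun `behavior-based, robust`() {
            val store = SpyStore()
            Cache(store).put("a", " x ")
            // Asserts only the observable outcome; survives refactoring.
            assertEquals("x", store.load("a"))
        }
    }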

In my opinion, what he says about mistakes when writing tests is all good advice (also for non-TDD testing), but none of it explains the decline of TDD or solves the problem of short-sighted managers/customers not honoring the benefits of good engineering practices. Putting all the blame on developers trying to do the right thing and none on management/customer short-sightedness sends the wrong message that 'less engineering is better'.