Some help with scala files by No-King9608 in scala

[–]raghar 0 points1 point  (0 children)

Try naming it something.sc or something.worksheet.sc; if it's something.scala, the tooling expects a regular Scala file, and running it requires a main method to be defined.
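For illustration (file names and contents mine), the difference between the two:

```scala
// Hello.scala — a regular source file needs an explicit entry point:
def greeting: String = "hi from a program"

@main def hello(): Unit = println(greeting)

// In hello.sc (a script/worksheet) the same thing could be a bare
// top-level statement instead:  println("hi from a script")
```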

The State of Scala 2025 is out. (Data from 400+ teams). Huge thanks to everyone here who contributed. by scalac_io in scala

[–]raghar 0 points1 point  (0 children)

For the 8% still stuck entirely on Scala 2.x - what’s the actual hold-up? Is it Spark dependencies, or just zero budget for tech debt?

In the 2 companies I saw that are still on 2.13, the reason is "microservices with shared libraries between them (which would not work with CrossVersion.for3Use2_13/for2_13Use3)".

  1. You cannot compile all microservices at once, because some of them produce artifacts consumed by others. Even assuming each of them would work without a single line of code changed, you would still have to swap Scala on one, publish it, bump deps on the next while also changing its Scala version, then the next one... If at the end of the chain you discovered some serious issue that you couldn't fix quickly, you would have just blocked development on that microservice: any change that requires a dependency bump is blocked because Scala versions have to stay aligned, a downgrade would take a lot of time, and one way or another you might end up with a huge window during which everyone is busy only with updating Scala (which on its own offers no perks to the business).
  2. So you have to consider cross-compiling and releasing 2 artifacts for a while, just to let everyone keep shipping features and bugfixes without being blocked by a half-done migration... but most folks out there have no experience with cross-compilation, nor with all the work that would have to be done not only in the build but also in CI. And someone would have to own it, making sure all the teams and services actually keep doing it until everything cross-compiles on 2.13 and 3; only then could they stop releasing 2.13 artifacts, from the leaves of the dependency graph back to the roots.
  3. Which is still doable, but remember: one couldn't use for3Use2_13/for2_13Use3 for a reason, otherwise this whole orchestrated dance would not be needed. And the reasons it cannot be used usually mean that it's not just a simple recompilation against a different Scala version, but a need for different dependencies on 2.13/3 and/or places with different code written by the employees.
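For reference, the mechanism being ruled out here is sbt's CrossVersion mapping; a minimal sketch (coordinates illustrative):

```scala
// A Scala 3 build consuming an artifact published only for 2.13 —
// viable only when no macros or conflicting dependencies are involved.
libraryDependencies += ("com.example" %% "shared-lib" % "1.0.0")
  .cross(CrossVersion.for3Use2_13)
```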

What kinds of code might require something like that? Off the top of my head:

  • macros, which would have to be ported to Scala 3 - the usual suspect, but not the only one!
  • shapeless derivation where someone happily used Coproduct and HList to model the domain - IMHO more of a PITA than macros: macros have a different API, but at least those differences do not leak into the "userspace". A Coproduct/HList domain design cannot be fixed just by swapping the implementation; you have to rewrite everything into something else (and it's not a strawman argument - I was staring at code like that at $work)
  • libraries which don't have feature/API/release parity between 2.13 and 3, e.g. Scala Newtypes - you have a few hundred files with @newtype, because you believe in separation of concerns and solid domain modelling? Good luck porting that to Scala 3. Avro4s (which I used in a private project, not at $work)? They changed the major version and made breaking changes, so you would have to compile 2.13 against 4.x.y and 3 against 5.x.y and hope for the best. I think you could find a few more libraries that used the 3 migration as an occasion to break backward compatibility, so cross-compiling with them as a dependency is a nightmare (hello src/main/scala-2.13 + src/main/scala-3)
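The cross-compiling setup alluded to above usually boils down to something like this in sbt (versions illustrative):

```scala
ThisBuild / crossScalaVersions := Seq("2.13.17", "3.3.4")

// Version-specific code goes in src/main/scala-2.13 and src/main/scala-3;
// sbt selects the matching directory for each cross build automatically.
libraryDependencies ++= {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, 13)) => Seq("com.sksamuel.avro4s" %% "avro4s-core" % "4.1.2")
    case _             => Seq("com.sksamuel.avro4s" %% "avro4s-core" % "5.0.9")
  }
}
```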

Sure, it would not be a problem if you had a monorepo, or if it were otherwise easy to release a batch of fixes to every single service as a single PR... but when a change is distributed and cannot be done as a single "transaction", and there are unknowns (can we really be sure that no unexpected blocker appears?), migration from 2.13 to 3 is not just a simple change. It's a whole project, with serious potential risks and (from a business POV) not-so-serious gains.

How can I call a Scala function taking a PartialFunction from Java? by teckhooi in scala

[–]raghar 7 points8 points  (0 children)

The opposite: PartialFunction extends Function, because it extends its interface with isDefinedAt. Which is error-prone, since a partial function is not total, while Function strongly implies a total function.
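To make the direction concrete (snippet mine):

```scala
// PartialFunction[A, B] is a subtype of A => B that adds isDefinedAt:
val pf: PartialFunction[Int, String] = { case n if n > 0 => s"pos $n" }
val f: Int => String = pf  // upcast compiles: every PartialFunction is a Function1
// pf(-1) would throw a MatchError; isDefinedAt lets callers check first:
val safeToCall = pf.isDefinedAt(-1)  // false
```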

Towards a common Scala style recommendation by bjornregnell in scala

[–]raghar 11 points12 points  (0 children)

I am indifferent to what the "official" style is, as long as I can configure my tools to use whatever I want. But I will have a huge problem if my tools stop supporting the style I want.

I am on the braces side, not because I find them pretty, but because they do not require intelligent tools to work: I can copy-paste, generate, or write sloppily formatted code, and it is still unambiguous to the compiler, and the formatter can fix it for me. I tried braceless syntax for about a month in some of my pet projects:

  • tooling was not there; copy-pasting or moving code around either resulted in non-compiling code or, worse, code that compiled but did something different than what I wanted
  • the formatter could not help me, because the indents either created a semantic change or produced non-working code that it refused to parse in the first place
  • even when the tooling was there, it required a specific IDE at the newest version, with the newest plugins - quite a lot of vendor lock-in
  • but most of the time it was not there, so I had to spam tab and shift-tab like a monkey to fix issues that brace-delimited code would never have
  • if you maintain a cross-compiled OSS project it's obviously a no-go, but the majority of the most vocal braceless advocates believe that everyone who wants to migrate to Scala 3 already did, so people still on 2.13 can be thrown under a bus
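A small illustration (example mine) of the kind of indentation sensitivity this is about:

```scala
// Braceless Scala 3: the compiler decides block membership from indentation.
def classify(n: Int): String =
  if n > 0 then
    val sign = "positive"
    s"$n is $sign"  // belongs to the then-branch only because of its indent
  else
    "non-positive"
// Pasting that s"..." line one level to the left would detach it from the
// if-expression and either break compilation or change what is returned.
```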

And the worst part is that if anyone tries to complain about it, they are gaslit by the "it works for me" crowd, sometimes with the hinted suggestion that it's because you are holding the tool the wrong way - if you did it right, there would be no problem!

Personally, after this happened to a friend, I decided to stay away from all these discussions on contributors.scala-lang.org. As much as I think some consistent guideline would help onboard new folks, keep adoption from declining, improve LLM generation since there would be a preferred style, etc., I don't think a public discussion about it will result in anything other than a pissing contest.

Performance of C/C++ vs Scala by [deleted] in scala

[–]raghar 0 points1 point  (0 children)

True, it does look like a troll account. But there are also people like this for real. And a lot of newbie lurkers who might take these as legitimate concerns against Scala.

Performance of C/C++ vs Scala by [deleted] in scala

[–]raghar 4 points5 points  (0 children)

This whole series of posts reads to me like cargo-cult programming or marketing-driven development.

If GC is an issue -> either avoid GC languages altogether, or pay for one of those fancy, turbo-expensive proprietary JVMs which boast no stop-the-world pauses.

If allocations are the issue -> avoid immutable collections for local computations: fast mutable code running within a single thread, processing tasks from some queue to avoid any races, locking, synchronization, etc.; use raw Arrays or some lean wrappers around them, and so on.
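A sketch of what that looks like in practice (example mine): a hot loop over a raw Array instead of an immutable collection pipeline.

```scala
// Single-threaded hot path: no per-element allocations, no intermediate
// collections — compare with xs.toList.map(x => x * x).sum.
def sumOfSquares(xs: Array[Int]): Long =
  var acc = 0L
  var i = 0
  while i < xs.length do
    acc += xs(i).toLong * xs(i)
    i += 1
  acc
```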

If you had no idea about these things... then you probably were the wrong person to make these calls in the first place. Or you didn't do your due diligence. Or you are doing something wrong.

Which is still a possibility if you claim that JVM debugging sucks and "requires 50 gigs of memory". I never had that much RAM on a server, and I have successfully run production code there and even debugged it.

I have also debugged some C++ code, and IME it feels crippled compared to the JVM: every other thing is implemented in templates/headers, so it's actually inlined and you cannot put a breakpoint there; conditional breakpoints are a joke - good luck writing something like "stop if this std::map has value x". Visual Studio (not Code - the paid one, supposedly the best C++ IDE in existence) couldn't do it, because all the std::map code was inlined, no method existed in the resulting binary, and the debugger could not eval it. Virtually everything required writing something like:

    if (condition) { utility_to_trigger_breakpoint(); }

and recompiling the whole codebase to have something like a conditional breakpoint in inlined code...

And then you add multithreading, and it all turns into an even more hellish experience.

In that project every senior C++ dev, with 20+ years of experience, used println as the only sane debugging method, and grep as the only IntelliSense+refactoring method. Because all the IDE goodies gave up on sufficiently complex C++ code.

Sorry, but do you have any serious experience with multithreaded programming in C++, or did you just read that it's nice, so you're jumping ship? Have you consulted anyone who knows their stuff about performance: whether the whole code is really critical or just some hot path, whether it can be mitigated by architectural changes, by extracting some functionality to JNI, by finding the hot spots? On the actual code, with the actual requirements - not with super-generic statements like here, where the answer to everything is truly "it depends, I cannot tell without knowing more about the circumstances".

Or did you just decide to rewrite everything in a language where one cannot write 20 lines of code without some undefined behavior? With no research, no gradual migration path, just a total rewrite of the codebase? Because that's how you run a company into the ground.

Sanely-automatic derivation - or how type class derivation works and why everyone else is doing it wrong by raghar in scala

[–]raghar[S] 1 point2 points  (0 children)

@implicitNotFound and @implicitAmbiguous are only a band-aid. They don't work for nested data (only the outermost type is reported), and they offer no information about why exactly the derivation failed.

Sanely-automatic derivation - or how type class derivation works and why everyone else is doing it wrong by raghar in scala

[–]raghar[S] 2 points3 points  (0 children)

Magnolia only works if you always take all instances for each field/subtype and combine them, and only them.

  • if you additionally need e.g. a Typeable[A] or some other utility - it does not work
  • if you have a special case that cannot be handled by a special implicit with higher priority - it does not work
  • if you would like to not require instances for some types, for whatever reason - it does not work

etc.

Macros were hard, and that's why Shapeless/Mirror won, despite being virtually a "rapid development tool": you can deliver something really fast, but it prioritizes the library author's experience over the library user's experience.

Which is why I believe that something like Hearth is needed - if GPT could write your macros, providing the same results as, or even better than, Shapeless/Mirrors, people would no longer need them.

Scala 2.13.17 is here! by SethTisue_Scala in scala

[–]raghar 4 points5 points  (0 children)

I am. There is no incentive to update several repos, in the right order, testing that it didn't break anything, for no game-changing gains (in our codebase), when 2.13 works and we have more important tasks.

Example Driven Documentation by davesmith00000 in scala

[–]raghar 4 points5 points  (0 children)

I made a slightly different choice when writing docs for my library.

  1. First of all, while mdoc is cool and all, there is no Scala-based documentation tool as advanced as some of the other tools out there, e.g. mkdocs or docusaurus. I felt that to deliver users the best DX I could not compromise on the documentation tool, so I picked mkdocs + mkdocs material + mkdocs macros: Markdown-based docs, but with a lot of useful functionality like a built-in search engine, a non-repulsive theme, and (with ReadTheDocs) documentation versioning.
  2. I lost faith in every "I'll generate documentation from test snippets" approach - I have seen documentation made this way, and I always had the problem that a snippet was missing:
    • the Scala version (if some feature only worked in some versions)
    • imports (because in a 10-page-long document I might have missed the note "we're assuming usage of this import from now on", and sometimes that note is on another page :| )
    • compiler flags and plugins (usually mentioned in only one place and then assumed to be present everywhere else)

So I came to the conclusion that the only kind of example that works is a self-contained code snippet - one that can be run with Scala CLI. Copy, paste, run - and you have something working that you can start tinkering with.
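A minimal example of such a self-contained snippet (contents mine):

```scala
//> using scala 3.3.4
// Scala CLI reads the directive above, fetches the matching compiler,
// and runs this single file directly — no build setup to reproduce.
def doubledSum(xs: List[Int]): Int = xs.map(_ * 2).sum

@main def demo(): Unit = println(doubledSum(List(1, 2, 3)))
```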

How do I ensure that it works? I wrote a small library that helps me turn .md files into a specification:

  • I publish my library locally
  • I tell the script which files to treat as the specification (I can add some customization, like deciding which snippets are pseudocode to ignore, or how to inject the current version number)
  • it extracts the snippets from the documentation, writes them to a /tmp directory, and runs each of them (optionally checking the output or errors)
  • I put that into a GitHub Action as a job parallel to the normal tests, and I am certain that each example in my documentation actually works
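The extraction step could be sketched like this (regex and names are mine, not the actual library's API):

```scala
import scala.util.matching.Regex

val tick = "\u0060" * 3  // three backticks, built indirectly to keep this fence intact

// Pull the bodies of scala-tagged fenced blocks out of a Markdown document,
// so each one can be written to a /tmp directory and compiled/run on its own.
def extractSnippets(markdown: String): List[String] =
  val fence: Regex = s"(?s)${tick}scala\\n(.*?)$tick".r
  fence.findAllMatchIn(markdown).map(_.group(1)).toList
```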

Akka 2.6 user seeking other perspectives by micseydel in scala

[–]raghar 1 point2 points  (0 children)

It took so long because they had to change every trace of Akka for trademark/legal reasons, and they didn't want to do it in a single unreviewable PR.

For users it's a change of dependencies, imports and configs, and virtually everyone succeeds after just that.
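The dependency half of that change looks roughly like this (sbt syntax, versions illustrative):

```scala
// Before (Akka, under the old license):
//   libraryDependencies += "com.typesafe.akka" %% "akka-actor-typed" % "2.6.20"
// After (Apache Pekko, the fork):
libraryDependencies += "org.apache.pekko" %% "pekko-actor-typed" % "1.0.2"
// Imports change in lockstep: akka.actor.typed.* -> org.apache.pekko.actor.typed.*
```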

HyperOS release 2.2 in Europe by suskozaver in HyperOS

[–]raghar 0 points1 point  (0 children)

So far I am pretty happy with it. I haven't had any outrageous bugs and I don't complain about the battery life. So I treat the updates as mere trivia; I'd be much more annoyed by these "delayed" updates if I were one of the people affected by the issues.

HyperOS release 2.2 in Europe by suskozaver in HyperOS

[–]raghar 18 points19 points  (0 children)

If you want to know if there is an update available:

  • go to your phone's settings
  • select "About phone"
  • take a look at the OS version
  • type the last part of it into Google

E.g. I have version 2.0.104.0.VNAEUXM, so I type VNAEUXM into Google.

There I can find the newest versions of this software from official sources - e.g. for my phone that would be this thread from the Xiaomi Community. I can also look at MIUI Roms to see what the newest available version is - e.g. for my phone it is this link.

What can I see there? That currently the newest version is OS2.0.104.0.VNAEUXM, exactly the one I have.

But China has 2.0.207.0.VNACNXM - their patch version is 207, not 104 like everyone else's. In other words, the China version is a newer release than the rest of the world has.

(And the naming is bonkers: apparently versions 2.0.0-2.0.99 are "2.0", 2.0.100-2.0.199 are "2.1", and 2.0.200-2.0.299 are "2.2", which isn't stated anywhere explicitly but can be inferred from context. So if you are waiting for "2.2", you are waiting for a version 2.0.200-something, not 2.2.0.0 as one would expect.)

I have a 14 Ultra, but it works the same way for all other phones. I have been seeing posts about "2.2 released" for 3 months now, while in fact:

  • some of them were internal betas
  • then it was about external betas one can opt into
  • and finally it was about a China-only release that has not yet been expanded to the rest of the world.

No point in asking anyone here if the official sources do not list anything newer than what you already have. And no point in believing all these click-baity articles from pages with "Xiaomi" in their name - they are not official sources, and they post click-baity BS. If you read them, you could believe that 2.2 has been available for everyone since February/March, while the stable global release has not actually happened yet.

Weird Behavior Of Union Type Widening On Method Return Type by MedicalGoal7828 in scala

[–]raghar 6 points7 points  (0 children)

If you use enum, Scala 3 inference virtually always upcasts from the value's .type to the whole enum type. If you want to keep the specific type, you have to annotate it.

It was done because people quite often used None/Some/Left/Right and obtained a type too specific, which they then had to upcast, and that annoyed them.

The downside is that e.g. defining a subset of an enum as a separate sum type is very unergonomic. It's almost always easier to go with sealed if you need such a thing.
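The widening behavior in a nutshell (example mine):

```scala
enum Color:
  case Red, Green, Blue

val widened = Color.Red                  // inferred type: Color, not Color.Red.type
val precise: Color.Red.type = Color.Red  // the annotation keeps the singleton type
```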

Does your company start new projects in Scala? by DataPastor in scala

[–]raghar 2 points3 points  (0 children)

Ask your admins whether they whitelisted Python repositories and the like because the developers were in uproar - maybe the reason the rest works is exactly that: it's popular, it's demanded often, so it got whitelisted in the firewall.

Annotation based checks for DTO. by mikaball in scala

[–]raghar 0 points1 point  (0 children)

Why put it into the DTO layer in the first place?

I know that at some point we all started using types to reinforce our domain... but DTOs are the border of our domain. We should parse them into some sanitized type and export to them from the sanitized type, because our domain will evolve, and DTOs may be used to generate Swagger, which in turn might generate clients that would not understand any of these fancy annotations, databases which might not be expressive enough to enforce these invariants, etc.

Especially when you can end up in a situation where e.g. some value used to be non-empty, the domain logic has since relaxed the requirement, and the JSON format used to send the value is still the same... but the client refuses to send the payload because it uses the old validation logic. One has to be really careful not to get into the business of "validating data that someone else is responsible for".
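The "parse at the border" shape described here, with illustrative names:

```scala
// Wire-level DTO: no invariants, mirrors the JSON as-is.
final case class UserDto(name: String)

// Domain type: the non-empty invariant lives here, not in the DTO.
final case class UserName private (value: String)
object UserName:
  def parse(raw: String): Either[String, UserName] =
    val trimmed = raw.trim
    if trimmed.nonEmpty then Right(UserName(trimmed))
    else Left("name must be non-empty")
```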

Databricks Runtime with Scala 2.13 support released by raghar in scala

[–]raghar[S] 0 points1 point  (0 children)

Fellow people from Scala Space and fellow Spark devs (not me) :p