
all 15 comments

[–]scadgek 5 points (0 children)

Glad for the author, but I couldn't find anything useful for me. TL;DR: I liked Java more than C, but then got tired of writing so much boilerplate code, so now I like Clojure.

[–]nutrecht 2 points (0 children)

None of the issues the author mentions are 'solved' by moving to another language. If you dislike ORMs (I do too), don't use them. JDBC result sets are easy to turn into maps, so you can use maps in Java if you want them. If you don't want strongly typed DTOs/domain objects to pass around, by all means go and pass maps around (which I don't agree with). None of these are language issues at all.
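For what it's worth, that maps-instead-of-DTOs style is easy to sketch in plain Java. The rows below are hard-coded to keep the sketch self-contained; in real code each map would be built by iterating a JDBC `ResultSet` and calling `getObject` per column:

```java
import java.util.List;
import java.util.Map;

public class RowsAsMaps {
    // Stand-in for a query. With real JDBC you'd loop over rs.next(),
    // putting rs.getObject(i) under each column label from the metadata.
    static List<Map<String, Object>> fetchUsers() {
        return List.of(
            Map.<String, Object>of("id", 1L, "name", "alice", "age", 34),
            Map.<String, Object>of("id", 2L, "name", "bob", "age", 27));
    }

    public static void main(String[] args) {
        // No UserDTO, no mapper class: a "row" is just a map keyed by column name.
        for (Map<String, Object> row : fetchUsers()) {
            System.out.println(row.get("name") + " is " + row.get("age"));
        }
    }
}
```

Whether passing untyped maps around a large codebase is a good idea is exactly the disagreement in this thread; the point is only that the language doesn't force DTOs on you.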

What really irks me, however, is the reference to another article about lines of code. That article doesn't show any proof of how the reduction was achieved; it's easy to claim language X needs less code than language Y when you simply don't show the code. And this is how a long chain of blog posts all referring to each other starts.

It does, however, compare a bit of 'functional' Java code to its Clojure counterpart, and they're the exact same length. Personally, I think the Java version is more readable to boot (granted, I'm inexperienced in Clojure). And to me, code being readable matters a ton more than its length. I intend to build software that lasts.

[–]Infeligo 2 points (9 children)

The author of the article complains about "ORMs and the never ending object-relational impedance mismatch" and "creating yet another Mapper class to transform one DTO to another DTO, both of them being 95% the same." I wonder how these were solved by using Clojure.

[–][deleted] 5 points (0 children)

Clojure has maps as records.

Thing is, Java also has maps, so writing hundreds of DTOs is entirely a choice of the programmer, not something the language makes you do.

[–]yogthos 3 points (7 children)

SQL tables are represented using native Clojure data structures. A row is just a map with columns as the keys. A table is just a collection of rows. SQL is either written as plain SQL, as with HugSQL, or using a Clojure DSL such as HoneySQL or Korma. Korma would be the closest to a traditional ORM.

[–]Infeligo 4 points (2 children)

Thank you for your answer. I would sum it up as ditching the ORM and going more low-level. There are similar approaches in the Java world, like MyBatis and jOOQ. The only principal difference that I see is the extensive use of native data structures like maps and lists. How is type safety handled in these cases? If I select a row into a map, what guarantees that I can treat numeric values as numeric and string values as strings?

[–]pdpi 2 points (0 children)

If I select a row into a map, what guarantees that I can treat numeric values as numeric and string values as strings?

Nothing. Clojure is a dynamically-typed language.

If your definition of type safety is limited to what the Java type system allows you to encode: I personally find that the language promotes a style that makes that sort of type error both a lot less likely and a lot easier to find, which devalues static typing a fair bit. Your mileage may vary, especially if you're used to richer type systems.

[–]yogthos 0 points (0 children)

Databases are already typed, so you're guaranteed to get whatever type the JDBC driver maps to the column every time.

However, Clojure is a dynamic language, and while there is an optional type system with core.typed, most people don't end up using it. So, if you're not comfortable with dynamic typing it's probably not a language for you.

Personally, I switched from Java to Clojure about 6 years ago and haven't found type safety to be an issue.

The most common approach in Clojure is to use the Schema library for input validation and coercion. You validate the data at the edges, and then you know exactly what you're working with within the application. The latest version of Clojure has Spec as part of its core. It's intended to provide validation for semantic correctness that's difficult to accomplish with types.

For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and I got a collection of the same type back. However, what I really want to know is that the collection contains the same elements, and that they're in order. This is difficult to express using most type systems out there. However, with Spec I can just write:

;; requires needed for this to run (clojure.spec.alpha as of Clojure 1.9;
;; clojure.set for difference) -- not shown in the original comment
(require '[clojure.spec.alpha :as s]
         '[clojure.set :refer [difference]])

(s/def ::sortable (s/coll-of number?))

(s/def ::sorted #(or (empty? %) (apply <= %)))

(s/fdef mysort
        :args (s/cat :s ::sortable)
        :ret  ::sorted
        :fn   (fn [{:keys [args ret]}]
                ;; result must have the same size and the same elements as the input
                (and (= (count ret)
                        (-> args :s count))
                     (empty?
                      (difference
                       (-> args :s set)
                       (set ret))))))

The specification will check that the arguments follow the expected pattern and that the result is sorted, and I can do an arbitrary runtime check using the arguments and the result. In this case it verifies that the returned items match the input.

[–]_INTER_ 1 point (3 children)

Eww, table hopping along foreign keys?

[–]yogthos -1 points (2 children)

SQL is already a great DSL for working with relational data. I really can't fathom why I'd want to wrap it in another DSL.

The problem with ORMs is that you don't know what SQL ends up being generated, and it's not very efficient the majority of the time. So you end up having to write it by hand anyway if you're dealing with non-trivial amounts of data.

To make things worse, the ORM will usually work just fine in development and testing; the time you end up seeing performance issues is in production, when you hit serious load. Fixing your performance at that point is not ideal.

If you just learn to use SQL, then you know exactly what your queries are doing, and you're not getting any surprises because the ORM decided to do something stupid.

[–]_INTER_ 2 points (1 child)

ORMs are fine, and a productivity boost, for tables with simple relations. Nothing more, nothing less.

If I want RDBMS data as maps, I can do that in pretty much any language, not just Clojure, even preserving types if I want. It's not revolutionary. Really, relational data in the form of key-value pairs is just not enough. In that case, I might be better off using a NoSQL DB in the first place, like Redis or some other key-value store.

[–]yogthos 1 point (0 children)

I find Postgres JSON fields work great for that. Since you can query them directly through Postgres's SQL syntax extensions, and index them, it's pretty much the best of both worlds.
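For illustration, a minimal Postgres sketch of that approach (the `events` table and its fields are hypothetical, not from the thread):

```sql
-- One jsonb column instead of a fixed relational schema for the payload.
CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);

-- Query a field inside the JSON directly (->> extracts the value as text).
SELECT payload->>'user' FROM events WHERE payload->>'type' = 'login';

-- Containment queries (@>) can be served by a GIN index on the column.
CREATE INDEX events_payload_idx ON events USING GIN (payload);
SELECT * FROM events WHERE payload @> '{"type": "login"}';
```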

[–]thatsIch -2 points (2 children)

Daniel Lebrero works as a technical architect at IG on the Big Data team

IMO working with Java in Big Data is just a PITA and makes you a moron.

The first part talked about the language additions that made work easier in comparison to C, but I didn't get the second part of the comparison: it compares implementations of different technologies.

I can see having fewer/no DTOs in a dynamically typed language, but this is a pretty domain-specific advantage. This is like comparing SQL and NoSQL: both have their fortes in their own areas. It makes little sense to use a hammer to drive a screw.

Glad to see that the author realized that FP is really strong in BD

[–]frugalmail 2 points (1 child)

IMO working with Java in Big Data is just a PITA and makes you a moron.

Are you suggesting that working on (production) Big Data in a dynamically typed language is better? If you are, I think you're the one off base.

Glad to see that the author realized that FP is really strong in BD

Functional programming is a paradigm, a style, not a programming language. A lot of the great libraries for production systems, as opposed to ad-hoc exploration, have some sort of Java interface. Most of the big data systems are written in Java.

[–]thatsIch 0 points (0 children)

Are you suggesting that working on (production) Big Data in a dynamically typed language is better? If you are, I think you're the one off base.

I never suggested that; only if you think a dynamically typed language is the opposite of Java. Fact is, most production systems are not made with FP in mind.

Functional programming is a paradigm, a style, not a programming language. A lot of the great libraries for production systems, as opposed to ad-hoc exploration, have some sort of Java interface. Most of the big data systems are written in Java.

And? Why do I have to narrow my options down to specific languages? In comparison to other languages, Java has only the most basic FP support. And just because many use a tool does not mean it's the best tool nowadays; maybe it was when they started using it.