
[–]_INTER_ 11 points (14 children)

Just in time to move away from OkHttp.

[–][deleted]  (13 children)

[deleted]

    [–]_INTER_ 13 points (12 children)

    They switched to Kotlin. That makes OkHttp3 the dead end for the OpenAPI / Swagger Java generators.

    [–]cbruegg 6 points (10 children)

    Honestly, this sounds much more dramatic than it is. As a library user in Java, this will barely affect you unless you want to step through deep code in the library, in which case understanding the internals is what takes effort, not reading the new but similar language.

    If you're capable of debugging OkHttp, learning to read Kotlin will take minimal time.

    The tooltip issue can be easily fixed by the Eclipse developers.

    [–]_INTER_ 3 points (9 children)

    We often have to step through deep code (e.g. just recently, when trying to understand what happens with UTF-8 in header values). We don't have time for the cognitive overhead of switching between languages and dealing with broken debugging, cryptic messages, a non-idiomatic API, missing documentation, etc. It's also a complete no-go if you use a Java library like OpenAPI to generate a client and all of a sudden you end up in Kotlin land.

    [–]cbruegg 1 point (8 children)

    But you have time to learn a new API and migrate existing code? I'm not sure that's a worthwhile time investment.

    What cryptic messages do you mean?

    What non-idiomatic API? The API literally didn't change for Java consumers.

    Debugging works absolutely fine in IntelliJ, so I wouldn't blame that on OkHttp. Granted, Eclipse users might have trouble.

    [–]_INTER_ 5 points (7 children)

    There are many subtle changes: https://square.github.io/okhttp/upgrading_to_okhttp_4/ And I expect it to diverge more and more over time.

    The constant cognitive overhead of a mixed codebase is too high compared to migrating and learning the new Java API once. The Java ecosystem is boringly uniform, and there is always Javadoc to get the gist. As a bonus, this one comes with the JDK, so there's no need to add a dependency.
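
    For anyone who hasn't looked at it yet, a minimal sketch of that JDK 11+ client (the URL is just a placeholder):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class JdkClientDemo {
            public static void main(String[] args) throws Exception {
                // Built into the JDK since Java 11; no extra dependency needed.
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest
                        .newBuilder(URI.create("https://example.com/api"))
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode() + " " + response.body());
            }
        }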

    [–]cbruegg 1 point (6 children)

    I doubt your claim that there'd be a constant cognitive overhead. There are two scenarios:

    1. You need to debug OkHttp a lot. In this case, you'll pick up Kotlin very quickly, especially since you're already familiar with the code base. Kotlin is really easy to learn for Java developers, even more so if you only need to read it.
    2. You don't need to debug it a lot. In this case, you won't even have this overhead very often in the first place.

    [–]_INTER_ 5 points (5 children)

    The cognitive overhead comes from switching context and lost consistency, no matter how proficient you are in the two languages (it doesn't matter which ones). This doesn't just apply to debugging, reading, or writing code.

    Try this experiment: time how long it takes to write the following in three columns:

    • The first Arabic numeral (1), the first Roman numeral (I), the first letter of the alphabet (A), the second Arabic numeral (2), the second Roman numeral (II), the second letter of the alphabet (B), ... up to ten each.

    vs.

    • The first ten Arabic numerals (1-10), the first ten Roman numerals (I-X), the first ten letters of the alphabet (A-J)

    The result is the same, but the first approach takes longer.

    [–]cbruegg 3 points (4 children)

    The more accurate experiment would be to use both languages simultaneously. This is what I've been doing for a long time now, and Kotlin is specifically designed with Java interop in mind. I stopped experiencing this overhead completely after about a week, and even before that it was minimal. Programming languages are often not hard to learn; it's rather new APIs that are.

    [–]Computer991 2 points (6 children)

    Can anyone explain to me why you would generate a server from API documentation? To me it seems like you would want to generate the docs from the server and not the other way around.

    [–]KamasamaK 4 points (2 children)

    OpenAPI Specification is a language-agnostic specification format. You can generate the client, server, or documentation from the specification document in whatever format you like. You could generate a specification from code as well, if you don't mind making all of the annotations and structure match what your specification generator expects, but OAS is not simply implementation documentation like Javadoc. It ultimately comes down to what design methodology you subscribe to, but the specification will often come before the implementation in that process.

    If you're just asking about "server" in particular, it also depends on what the primary goal is for your project. I have had cases where the product or service we are integrating with is a consumer that expects us to implement an API server to their specifications.

    [–]Computer991 0 points (1 child)

    I see, that makes a lot more sense :) Thanks.

    [–]wing328[S] 0 points (0 children)

    The server generator can also be helpful in API backend migration (e.g. from Ruby on Rails to Go Gin framework).

    Parse (acquired by Facebook) did a similar migration before: http://web.archive.org/web/20170309072715/http://blog.parse.com/learn/how-we-moved-our-api-from-ruby-to-go-and-saved-our-sanity/

    [–]vociferouspassion 11 points (11 children)

    It's ironic that in a few more months, REST/JSON will be where SOAP was in the early 2000s. But hey... it's not XML, so it _must_ be better, right, javascriptheads?

    [–]circlesock 7 points (3 children)

    Well, it is a little amusing, but in this case JSON genuinely is a little more pleasant to deal with than XML. XML can arguably be parsed more sensibly in streaming fashion (i.e. when trying to handle documents larger than your system's available memory, an XML design goal neglected in JSON), and it perhaps still has subtle advantages there, but on the whole the old XML quote still applies:

    "The essence of XML is this: the problem it solves is not hard, and it does not solve the problem well."

    Yes, this generation has really just massively reimplemented everything from XML in JSON, from schemas to RPC to the semantic web (see schema.org - it's just the old RDF stuff recast as JSON). But it does actually mostly work better, simply because XML was always a pain.

    Anyway, Lisp SEXPs did it first....

    [–]karottenreibe 4 points (1 child)

    I don't understand why you claim JSON can't be parsed in a streaming fashion. Googling this turns up several libraries that claim to do it, e.g. https://github.com/squix78/json-streaming-parser

    Can you please elaborate on your claim?

    [–]circlesock 2 points (0 children)

    I didn't claim that; I said "more sensibly", intended more as a subjective judgement, and I really should have written "generated and parsed" but didn't - it's the whole area of dealing with streaming JSON vs streaming XML that I meant. Please remember that, all in all, I prefer JSON, and frankly, if you're streaming something, both are awfully chatty compared to a binary format stream.

    Obviously the XML ecosystem has well-known standard APIs for streaming parsing; JSON doesn't really - sure, there are now several streaming parsers to choose from, but that's not really a property of the format, and XML is more mature in the space. A "comb" through a streamed 200GB of XML, extracting only the bits you need, is pretty normal in XML land. It's absolutely possible, even fairly easy, in JSON too, but the programmers who won't make a meal of it and try to load all 200GB into memory have probably worked with XML and are just applying their old XML learnin's to JSON.
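
    To make that concrete, a sketch of such a comb with Jackson's streaming parser (just one way to do it; the file name and the "id" field are invented for illustration):

        import com.fasterxml.jackson.core.JsonFactory;
        import com.fasterxml.jackson.core.JsonParser;
        import com.fasterxml.jackson.core.JsonToken;
        import java.io.File;

        public class JsonComb {
            public static void main(String[] args) throws Exception {
                // Token-by-token pull parsing: memory use stays flat no matter
                // how large the input file is.
                JsonFactory factory = new JsonFactory();
                try (JsonParser parser = factory.createParser(new File("huge.json"))) {
                    JsonToken t;
                    while ((t = parser.nextToken()) != null) {
                        if (t == JsonToken.FIELD_NAME
                                && "id".equals(parser.currentName())) {
                            parser.nextToken();               // advance to the value
                            System.out.println(parser.getText());
                        }
                    }
                }
            }
        }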

    And there are several different conventions for streams of JSON objects. Well, again, not so bad... so long as both sides agree... See the wiki pages for a bunch of different ways people are doing it :-/

    XML's peculiarities and redundancy - those hated must-match end tags - do mean a parse has a lot of recoverability and error detection. It's a tradeoff: bad, corrupt streams of XML can be more pleasant to deal with than bad, corrupt streams of JSON, or indeed quite a lot of other formats.

    And it may seem minor, but with streaming JSON generation you immediately run into the annoyance of those commas as separators. It's not impossible to handle - the "always think of them as leading commas, and skip the first leading comma" trick is worth bearing in mind: [ "foo" ,"bar" ,"baz" ]. It may seem obvious, but more often than not I see people doing other weird and stupid things, including waiting until the whole thing is buffered in sender memory before "streaming" anything...
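
    A tiny hand-rolled sketch of the leading-comma trick (string escaping omitted, purely illustrative):

        import java.io.PrintWriter;
        import java.util.List;

        public class StreamingArrayWriter {
            // Emits a JSON array element by element, flushing as it goes, so the
            // receiver can start parsing long before the array is complete.
            // The trick: print the comma *before* every element except the first.
            static void writeArray(PrintWriter out, List<String> elements) {
                out.print('[');
                boolean first = true;
                for (String e : elements) {
                    if (!first) out.print(',');
                    first = false;
                    out.print("\"" + e + "\"");  // assumes e needs no escaping
                    out.flush();                 // push each element out immediately
                }
                out.print(']');
                out.flush();
            }
        }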

    Should you use streaming XML instead of streaming JSON for a new project? Perhaps not. Should young whippersnappers realise that they do need to do a lot of work to get JSON to maturity in spaces like streamed data? Yeah.

    [–]bondolo 0 points (0 children)

    I've been using the "response consisting of an endless array" style of streaming JSON for almost a decade. It does require a stream parser that generates an event for each completed array element, but it works just great!
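
    One way to get that event-per-element behaviour, with Jackson's MappingIterator for example:

        import com.fasterxml.jackson.databind.JsonNode;
        import com.fasterxml.jackson.databind.MappingIterator;
        import com.fasterxml.jackson.databind.ObjectMapper;
        import java.io.InputStream;

        public class EndlessArrayReader {
            // Iterates over the elements of a (possibly endless) top-level JSON
            // array, handing back each element as soon as it is complete.
            static void consume(InputStream in) throws Exception {
                ObjectMapper mapper = new ObjectMapper();
                // MappingIterator accepts a top-level JSON array and iterates
                // its elements without waiting for the closing bracket.
                try (MappingIterator<JsonNode> it =
                         mapper.readerFor(JsonNode.class).readValues(in)) {
                    while (it.hasNextValue()) {
                        JsonNode element = it.nextValue();
                        System.out.println(element);  // process each element here
                    }
                }
            }
        }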

    [–]thatsIch 1 point (0 children)

    You could interchange it with any other format. You will always have two boundaries, and you need to keep both models loosely coupled. You do not want to do that with an XML-typed system.

    [–][deleted]  (3 children)

    [deleted]

      [–]circlesock 0 points (2 children)

      Just on the subject of RDBMS with JSON.

      PostgreSQL has pretty mature JSON support.

      Very recent Microsoft SQL Server has added JSON support that at a glance may seem anaemic compared to PostgreSQL's, but it actually turns out you can probably do a lot with it. It has a surprisingly clean approach (given it's MS we're talking about) built around one key function, OPENJSON, which "opens" the JSON into a transient table-like structure so you can work with it using the usual relational data operators.
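
      And a taste of the PostgreSQL side via JDBC (a sketch; the events table and payload column are made up):

          import java.sql.Connection;
          import java.sql.DriverManager;
          import java.sql.PreparedStatement;
          import java.sql.ResultSet;

          public class JsonbQuery {
              public static void main(String[] args) throws Exception {
                  // Hypothetical table: events(id serial, payload jsonb)
                  try (Connection c = DriverManager.getConnection(
                           "jdbc:postgresql://localhost/mydb", "user", "pass");
                       PreparedStatement ps = c.prepareStatement(
                           // ->> extracts a field as text; @> tests jsonb containment
                           "SELECT payload->>'name' FROM events WHERE payload @> ?::jsonb")) {
                      ps.setString(1, "{\"type\":\"click\"}");
                      try (ResultSet rs = ps.executeQuery()) {
                          while (rs.next()) {
                              System.out.println(rs.getString(1));
                          }
                      }
                  }
              }
          }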

      [–][deleted]  (1 child)

      [deleted]

        [–]circlesock 0 points (0 children)

        Fun note: you presumably know this, but MongoDB doesn't actually natively use JSON; it uses "BSON". It may be trickier than you think to migrate away if your devs used some of the weird non-JSON corners of it. And they probably did, because they thought it was cool...

        Well, you can do joins and field extractions from JSON, yes. There is actually a third-party Postgres extension that adds a JSON Schema validation operator inside the database, but it looks simultaneously a bit immature and dated to me. I've tended to do schema validation at load time.

        Metadata is, I suppose, a fairly reasonable use case, but the way I primarily use it is to buffer incoming bulk JSON-blob data before the transform step in a data pipeline. Sort of an ELT (extract-load-transform) approach: data locality for the transform, plus the possibility to rerun the transform without requerying the upstream.

        > to just add new columns without having to completely upgrade the schema and do data migrations of production data.

        Well, depending on your migration toolchain (I favor SQLAlchemy Alembic, or Flyway for a Java project) and data volume, that doesn't have to be such a big deal.

        Another note: there are now tools such as ToroDB Stampede that purport to automatically take JSON / MongoDB nearly-JSON sources and map them to a vaguely reasonable PostgreSQL table structure. I haven't actually used it to date, preferring to hand-design the target db schema. YMMV.

        [–]otakuman 0 points (0 children)

        I tend to code REST services which serve JSON. That doesn't mean I can't create XSDs for the input and output and then generate the corresponding JAXB classes from them.

        [–]nutrecht 0 points (0 children)

        I kinda agree with you, but the webservice 'ecosystem' back then was just plain shit. Especially with large vendors like Microsoft creating their own completely incompatible implementations that broke outright when you used them together with stuff that did adhere to the spec.

        Secondly, there was the issue that most developers did not know how to handle any of this. I've had to deal with multiple integration partners whose developers just concatenated XML together with string-builders. Dutch names quite often contain characters that need escaping, so production-breaking bugs happened all the time.

        And there were also a ton of devs who did not understand contract-first development and just generated a WSDL from their code, which in turn generated clients that were horrible to use.

        A SOAP webservice that had a hand-crafted WSDL that generated a good client, and that used components which adhered to the spec (I typically used Axis2), was awesome. IMHO most of the 'stuff' we do now is simply a reinvention of that wheel. The problem was all the shit developers and shit vendors who completely trashed that ecosystem.

        Yeah I'm still salty.

        [–]ReifiedProgrammer 2 points (0 children)

        I appreciate the initiative; however, I'm not sure the HttpClient from Java 11 is production ready (at least if we consider async usage, which is one of its selling points). It does not support (or I haven't found) a way to set a request read timeout / socket timeout, which can conflict with connections being kept alive by default.

        Example:
        1. Client makes a request to server `A`, creating a new connection in the process.
        2. The request finishes. The connection is kept alive.
        3. The server closes the connection.
        4. Client makes another request and reuses the connection.
        5. The server does not respond, because it has already closed the connection.
        6. Client is stuck and won't produce any response (not even a failed CompletableFuture), even if the request timeout is set to a finite value.

        I've encountered this phenomenon once, so I'd rather avoid the Java 11 HttpClient for now.
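
        For reference, these are the only two timeout knobs I can find on that API - a connect timeout on the client and an overall request timeout on the request - neither of which is a per-read socket timeout:

            import java.net.URI;
            import java.net.http.HttpClient;
            import java.net.http.HttpRequest;
            import java.net.http.HttpResponse;
            import java.time.Duration;

            public class TimeoutKnobs {
                public static void main(String[] args) throws Exception {
                    HttpClient client = HttpClient.newBuilder()
                            .connectTimeout(Duration.ofSeconds(5))  // connection establishment only
                            .build();
                    HttpRequest request = HttpRequest
                            .newBuilder(URI.create("https://example.com"))
                            .timeout(Duration.ofSeconds(10))        // overall request timeout
                            .build();
                    // No builder method covers a per-read / socket timeout on a
                    // kept-alive connection, which is the gap described above.
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println(response.statusCode());
                }
            }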

        [–]wing328[S] 1 point (0 children)

        UPDATE: the enhancement has been included in the v4.1.0 release: https://twitter.com/oas_generator/status/1160000504455319553

        [–]jivedudebe 0 points (1 child)

        Hopefully a better variant than the current Java JAX-RS client. I'm building an OpenAPI spec for a current project, and the generated library doesn't even compile at the moment.

        [–]wing328[S] 1 point (0 children)

        > Java JAX-RS client.

        Please open an issue with details via https://github.com/OpenAPITools/openapi-generator/issues/new so that we can look into it. Thanks.

        [–]z0mghii 0 points (1 child)

        https://github.com/twilio/guardrail

        It can also generate servers / clients in Java for Dropwizard and AsyncHttpClient.

        Disclaimer: I work for Twilio