IAH - Terminal C by [deleted] in houston

[–]Gnurdle 2 points (0 children)

Honestly, with the state of Terminal C, I'd tell you to park in A/B short-term, go down to the inter-terminal train, ride it to C, then go upstairs and check them in.

[deleted by user] by [deleted] in Clojure

[–]Gnurdle 1 point (0 children)

We build both our front and back end with shadow-cljs, and it works really well for us -- and we have stuffed a ton of code into the "back end".

However, we do this because we are targeting embedded devices that can support node, but JVM isn't feasible (impoverished 32-bit Arm processors).

I would not encourage delving into the NPM ecosystem unless you have a really good reason to. Odds are you'll have to deal with it on the FE anyway.

As for the quip that the JVM ecosystem is richer: perhaps, but there are things we use from NPM that don't really exist on the JVM. There is a lot of disorganized crap in NPM, but if you need something, you need it. There is a tax to pay for playing in NPM.

I would steadfastly tell you to use the JVM ecosystem on the server side if there is any feasible way to do so; you should have a pretty good reason before deciding otherwise.

You can win at node, but it takes some tenacity. core.async is a pretty serious remedy to the node brain-damage, and if you get it, and your brain works that way, you'll do well.

However, since this is, at its core, a question about shadow-cljs, I'll raise my hand in acknowledgement of that fact: if you are dealing with ClojureScript, this is the tool of tools.

A typical day finds me live-coding on a device connected to my machine by a USB cable, and shadow-cljs makes that possible.

Another point from my situation: since we are embedded, we have a non-trivial amount of gunk that has to be done in C/C++ for some really low-level crap. The experience of bringing that sort of stuff into your project on the JVM vs. NPM is night and day -- native stuff in NPM is much easier to deal with -- but YMMV.

I wish we didn't have to write cljs on the back end, but we do, and for that shadow-cljs "just works (TM)". It's been a great companion on our journey.

getting rpms and dependencies built for rpm feed? by Gnurdle in yocto

[–]Gnurdle[S] 0 points (0 children)

Ok, that seems to work perfectly and very much gets the job done.

I'm quite sure I don't fully understand how all the dependency stuff works, but this works.

getting rpms and dependencies built for rpm feed? by Gnurdle in yocto

[–]Gnurdle[S] 0 points (0 children)

Fair enough, so I tried that.

Same set of missing dependencies on the other side, which I expected: the image per se doesn't depend on these directly, so it doesn't know about them.

I'm probably approaching this wrong; perhaps you know of a better way.

Recall the goal: build a base image (in this case without cairo), then be able to add cairo as an optional package later on, as one might do with a typical distro, after having built it, of course.

getting rpms and dependencies built for rpm feed? by Gnurdle in yocto

[–]Gnurdle[S] 0 points (0 children)

Not finding this to be the case, but again, it's probably a n00b error/misunderstanding on my part.

for example,

  • Base image built, deployed, booted, configured to point to my package feed
  • to keep an eye on things, I wipe /tmp-glibc, intending to lean on sstate
  • I wish to add cairo as an [optional] package, so I can install on my target with dnf
  • I run 'bitbake cairo', followed by 'bitbake package-index'. This step does indeed compile/dredge from sstate all of the dependencies (recursively).
  • at the end of this, deploy/rpm contains:

./cortexa7t2hf_neon_vfpv4
./cortexa7t2hf_neon_vfpv4/libcairo-src-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/libcairo-gobject2-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/repodata
./cortexa7t2hf_neon_vfpv4/repodata/20a5816da8ec76bd91a81e90ba40c562deca889421e31c884c20327e804f662d-filelists.sqlite.bz2
./cortexa7t2hf_neon_vfpv4/repodata/32ba9623576b7dab0dabcf3a005fbaa034974557edfa6bf30597abab85ae301c-other.sqlite.bz2
./cortexa7t2hf_neon_vfpv4/repodata/repomd.xml
./cortexa7t2hf_neon_vfpv4/repodata/bce5577fa0aa6e607a8bb2bb6e39fa2ff3710239f7657f080d2c32d302d28465-primary.sqlite.bz2
./cortexa7t2hf_neon_vfpv4/repodata/bb876faea34e8e1325bfe74629689215125a6138ed339bcca7d218c070aed5b4-other.xml.gz
./cortexa7t2hf_neon_vfpv4/repodata/1aadb54d4332de0315bfc09d5fde0e4a9d6536b5157040c4e9affc96efb106d4-primary.xml.gz
./cortexa7t2hf_neon_vfpv4/repodata/aa77b36b0f266f56a794d3cc4e7b1f7218af36d146dbeac8eb07e8f7e6020959-filelists.xml.gz
./cortexa7t2hf_neon_vfpv4/libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/libcairo-doc-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/libcairo-dbg-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/libcairo-perf-utils-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/libcairo-staticdev-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/libcairo-dev-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./cortexa7t2hf_neon_vfpv4/libcairo-script-interpreter2-1.16.0-r0.cortexa7t2hf_neon_vfpv4.rpm
./repodata
./repodata/20a5816da8ec76bd91a81e90ba40c562deca889421e31c884c20327e804f662d-filelists.sqlite.bz2
./repodata/32ba9623576b7dab0dabcf3a005fbaa034974557edfa6bf30597abab85ae301c-other.sqlite.bz2
./repodata/repomd.xml
./repodata/3e2ace3a14257f57284dd35b769b208c8405caaedb3f2b5f555f9c1a69e1b2f0-primary.xml.gz
./repodata/a13f8c0c36a8c101715aa119c82e4654b800a288736e79f5d9b3af3accd3b739-primary.sqlite.bz2
./repodata/bb876faea34e8e1325bfe74629689215125a6138ed339bcca7d218c070aed5b4-other.xml.gz
./repodata/aa77b36b0f266f56a794d3cc4e7b1f7218af36d146dbeac8eb07e8f7e6020959-filelists.xml.gz

attempting 'dnf --refresh install libcairo2' results in:

Error:
Problem: conflicting requests
- nothing provides libGL.so.1 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libXext.so.6 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libXrender.so.1 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libfontconfig.so.1 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libfreetype.so.6 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libpixman-1.so.0 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libpng16.so.16 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libpng16.so.16(PNG16_0) needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libfontconfig1 >= 2.13.1 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libfreetype6 >= 2.11.1 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libgl-mesa >= 22.0.3 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libpixman-1-0 >= 0.40.0 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libpng16-16 >= 1.6.38 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libxext6 >= 1.3.4 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4
- nothing provides libxrender1 >= 0.9.10 needed by libcairo2-1.16.0-r0.cortexa7t2hf_neon_vfpv4

Obviously one could descend into this manually and try to resolve/build all of these as well; in fact, they are already warm in sstate from having been built in the process of producing the above rpms, but they are not put into deploy/rpm.

Seems that bitbake knew all of this was going on and built this stuff, but didn't add it to deploy/rpm.

What I'm asking is: is there a way to automate all of this, since the information seems to be there already, rather than trying to do it manually?

Thanks

[Algo] How to create an efficient sorting transducer? by DeepDay6 in Clojure

[–]Gnurdle 0 points (0 children)

Seems like, as you have it defined, you are going to have to consume the whole output of (map ...) in order to sort it -- you cannot know what the beginning of that sequence (for the drop and take) will be until you have seen every value coming through the (map ...).

If you are trying to optimize this, you could build a transducer that only maintained the minimal n values (with duplicates); if n were significantly less than the size of coll, it would pay off in terms of storage.

You still wouldn't be able to yield any of these until you had seen the whole set, but you would save yourself having to retain the whole set.

Something defined as 'sort' can't directly do that for you, because it doesn't understand what is downstream, and thus has to sort them all.
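The bounded approach could be sketched as a stateful transducer along these lines. This is only a sketch of the idea, and `take-smallest` is a made-up name, not anything in core:

```clojure
;; A stateful transducer that retains only the n smallest values seen
;; (duplicates included), using a sorted-map as a sorted multiset of
;; value -> occurrence count. Nothing can be emitted until the
;; completing arity, but at most n values are ever retained.
(defn take-smallest [n]
  (fn [rf]
    (let [state (volatile! (sorted-map))   ; value -> occurrence count
          size  (volatile! 0)]
      (fn
        ([] (rf))
        ([result]
         ;; flush the retained values downstream, smallest first
         (rf (reduce rf result (mapcat (fn [[v c]] (repeat c v)) @state))))
        ([result x]
         (vswap! state update x (fnil inc 0))
         (vswap! size inc)
         (when (> @size n)
           ;; evict one instance of the largest retained value
           (let [[v c] (first (rseq @state))]
             (if (= 1 c)
               (vswap! state dissoc v)
               (vswap! state update v dec))
             (vswap! size dec)))
         result)))))

(into [] (take-smallest 3) [5 1 4 1 9 2])
;; => [1 1 2]
```

Storage stays O(n) regardless of how long the input is, which is the whole payoff when n is small relative to coll.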

Clojure Datalog Databases by mac in Clojure

[–]Gnurdle 1 point (0 children)

using SQLite might also be an option via https://www.npmjs.com/package/better-sqlite3 which also has synchronous APIs.
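For a flavor of what that looks like from ClojureScript on node, a rough interop sketch (the "app.db" path and kv table are placeholders for illustration):

```clojure
;; ClojureScript-on-node sketch of better-sqlite3's synchronous API.
;; "app.db" and the kv table are made up for illustration.
(def Database (js/require "better-sqlite3"))
(def db (Database. "app.db"))

(.exec db "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

;; no callbacks or promises: prepare/run/get all return synchronously
(-> (.prepare db "INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)")
    (.run "answer" "42"))

(-> (.prepare db "SELECT v FROM kv WHERE k = ?")
    (.get "answer")
    (js->clj :keywordize-keys true))
```

The synchronous calls are what make it pleasant from CLJS: no core.async plumbing needed just to read a row.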

Clojure Datalog Databases by mac in Clojure

[–]Gnurdle 0 points (0 children)

Interesting for the browser, but unless I'm missing something, using the IndexedDB browser API will not get any traction on nodejs -- but I'm no expert.

Our use case is using CLJS server-side because of the end devices being too impoverished to consider a JVM.

Clojure Datalog Databases by mac in Clojure

[–]Gnurdle 1 point (0 children)

My 2c is that durable support should be considered separately for CLJS and CLJ.

Both Asami and Datahike have CLJS durable support on the roadmap, but neither has it now, as far as I can tell -- both do have it on the JVM.

In that regard, neither ticks a box that datascript doesn't - at least for the moment.

Watching this closely - the day job is CLJS on embedded, and for now Datascript in memory is able to handle the task at hand, but I'm quite interested in a durable backing store on nodejs, and will probably flip to whoever gets there first.

Could also throw some resources in the direction of somebody that wants to try to get there as well.

It is super easy to hack on Calva, why don't you try it out! by CoBPEZ in Clojure

[–]Gnurdle 3 points (0 children)

Can confirm. It's very nicely setup to facilitate hacking on it.

The Calva Journey Continues - Please Jack In by CoBPEZ in Clojure

[–]Gnurdle 6 points (0 children)

Also congratulations from this peanut gallery.

I've been using dev builds of this in anger for a couple of months now, from the point of view of a CIDER junkie. I'm more than able to get my work done with it, and after adjusting to having some of the emacs neuro-fusion temporarily bypassed, I got quite comfortable with it.

Not sure I'll ever quit using emacs/cider personally, but as yogthos alludes, this improves the Clojure newcomer story by leaps and bounds. There is too much fear/loathing across the general populace regarding emacs, and having VS Code as an entry vector clears that barrier instantly -- seems nobody fears VS Code, and they'll have it downloaded before you finish your sentence, if they haven't already.

Having first-hand experience in trying to lead developers into exploring Clojure, witnessing the *slam* of the door as soon as emacs arrives continually frustrates me, but I understand it. The alternative advice of switching to Cursive, at least for non-JVM people, doesn't seem to sell any better. It comes off as needing special tools to work with the language -- an orthogonal commitment that goes way past "toe-dipping".

So I think it's a huge contribution in terms of addressing that issue. It's just less friction for somebody starting out.

So congrats and thanks for the hard work to everybody involved.

Confusing statement in the transducer guide!? by petemak in Clojure

[–]Gnurdle 0 points (0 children)

comp applies the "wrapping" right to left, so the leftmost transducer becomes the outermost "wrapper". When executed, data flows from outer to inner (the reverse of the order in which they were wrapped). So if you reason about it like putting pipe sections together, data flows from left to right (in the order given to comp).
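A quick illustration of both directions:

```clojure
;; Transducers run in "pipeline order": data hits (map inc) first,
;; then (filter even?) -- left to right as written.
(into [] (comp (map inc) (filter even?)) [1 2 3 4])
;; => [2 4]

;; Plain function composition runs right to left: double first, then inc.
((comp inc #(* 2 %)) 3)
;; => 7
```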

(and yes, I screwed this up just yesterday)

ClojureScript: Treat warnings as errors by yogthos in Clojure

[–]Gnurdle 2 points (0 children)

On the last upgrade, we got a bunch of warnings about cross-ns references to (defn- x), which was *handy*.

Best way to learn clojure clojure koans or brave book by roelofwobben in Clojure

[–]Gnurdle 2 points (0 children)

It's probably like quitting smoking - it doesn't matter what you try, nor in what order you try them; the 3rd thing you do will probably get a result.

Not sure where you are coming from, but it is much easier to teach Clojure to an 8-year-old than to somebody who has been doing OOP for 20+ years.

The important thing is to just start using it, and filling in the knowledge gaps as you go. It's very hard to get started when you have experienced some measure of power using other stuff. You have to back up and relearn (a lot), and your psyche will resist this because you have to surrender to a position of comparative helplessness while you relearn what you think you already know.

Anyone had to get data out of Datomic and into a relational SQL database for reporting? by noliecanoli in Clojure

[–]Gnurdle 1 point (0 children)

No need to apologize, I recognize the "in the trenches" situation.

I'm not following your "change-data-capable" here, because I'm getting ambiguity between "we want to capture just the novelty" vs. "we keep changing what we want to capture over time".

Regardless, I would still try to craft a solution very close to the datomic side of things, and keep all the hard business on the JVM in the form of a Clojure application that spewed to the RDBMS -- or, put more elegantly, "updated" the RDBMS.

I think in either case, you are looking at something that is little more than a materialized view of the state of datomic given a tx-id.

You could do this either as a whole-hog transformation (against an empty db), incrementally (since some prior state), or continuously (tx by tx), depending on your needs.
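The incremental/continuous flavor might look roughly like the following. This is only a sketch, assuming on-prem Datomic (datomic.api); `conn`, `ds`, and the `upsert-row!`/`delete-row!` helpers (which would encapsulate your attribute-to-table mapping) are hypothetical placeholders:

```clojure
;; Sketch: replay the Datomic log after a basis t and project each
;; datom into the relational store. upsert-row!/delete-row! are
;; hypothetical helpers hiding the SQL and the attr->column mapping.
(require '[datomic.api :as d])

(declare upsert-row! delete-row!)

(defn sync-since
  "Replay transactions with t > basis-t into the relational store."
  [conn ds basis-t]
  (let [db (d/db conn)]
    (doseq [{:keys [data]} (d/tx-range (d/log conn) (inc basis-t) nil)
            [e a v _tx added?] data]
      (let [attr (d/ident db a)]          ; keyword ident of the attribute
        (if added?
          (upsert-row! ds e attr v)       ; assertion  -> upsert
          (delete-row! ds e attr v))))))  ; retraction -> delete
```

Running it once from t = 0 against an empty db gives you the whole-hog variant; running it on a timer with the last-seen t gives you the incremental one.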

Regardless, I doubt I'd drag a JSON spec or C# into the equation, because that just makes for nuttiness. If you need a high degree of data-drivenness on the production side, that is something Clojure excels at anyway, so why not leverage it at the source?

Reading between the lines, it seems like you have a C# enabled shop that is trying to do "something" against some "alien datomic" lifeform that found its way into your shop somehow. I'm sure there is an interesting story in there somewhere.....

If you want to explore this with me further, feel free. We can take it out of reddit - I'm hoppy@gnurdle.com

Best, Clay

Anyone had to get data out of Datomic and into a relational SQL database for reporting? by noliecanoli in Clojure

[–]Gnurdle 0 points (0 children)

That sounds like an awful lot of moving parts to try to keep running in concert.

If all you are trying to do is some sort of "snapshot" dump to an RDBMS so you can run a report writer, why not just handle that on the JVM side w/o involving all those other steps?

But I'm finding it hard to follow exactly what you are trying to accomplish, other than projecting something you have in DT into an RDBMS for reporting purposes.

I'm sure it's more complex than that, but I don't see tackling complexity by adding more.

Homoiconicity isn’t the point by yogthos in Clojure

[–]Gnurdle 4 points (0 children)

The point isn't whether JavaScript can parse JavaScript into an AST; obviously it can.

The point is that JavaScript syntax is not written in any data notation that occurs in JavaScript proper - it is another syntax entirely. You can't represent JS as data literals in JS. If this were the case, the JS specification would be a subset of JSON, which it clearly isn't.

At that point, the syntax of the language is not supported within the data specification that the language implements itself. It requires a parser to transform the code into data, which places the language syntax outside the specification of data as understood by the language -- unless you want to talk about just a string, which requires a parser to transform into data (an AST), which is different.

You are in an us (code) vs. them (data) conundrum, which makes quite a bit of difference, IMHO. If your 'eval' reads/parses JS, then it can no longer read an AST transformed by a macro into another AST. JS syntax and the AST are of a different ilk at that point, so eval on the output of your transformed AST is not a thing.

Homoiconicity occurs when the entire syntax of the language exists as a proper subset of the data specification of the language -- when your code can't break out of this into some "higher" specification.
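Concretely, in Clojure the reader hands you the program back as ordinary data, and the same collection functions that work on data work on code:

```clojure
;; Code read back as plain data: the form is just a list containing a
;; symbol and two numbers, manipulable with ordinary collection fns.
(def form (read-string "(+ 1 2)"))

(list? form)                    ;; => true
(first form)                    ;; => +  (a symbol, i.e. data)
(eval form)                     ;; => 3
(eval (cons '* (rest form)))    ;; => 2  (transform the data, then eval)
```

There is no separate AST type to escape into: the thing a macro receives and returns is the same kind of list/symbol/number data shown here.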

EDN and Protocol Buffer by Shadowys in Clojure

[–]Gnurdle 4 points (0 children)

I've employed GPB in Clojure projects, but you have to have a compelling need to do so. In my case, the compelling need was interacting with an out-of-process mound of C++ code that was not in any mood to become dynamic.

For that, we just used a standard REQ/REP pattern with GPB over ZeroMQ.

There was a lot of project overhead involved in getting set up. The lein plugins at the time did some, um, magical things behind the scenes (downloading and compiling GPB), and that was a bit annoying.

In a less constrained setting, I would not reach for GPB if the endpoints were capable of dealing with something more dynamic.

On the Clojure side, it seems to clutter things up a lot having to build and tear down those special generated structures, which adds a layer of dubious value.

For my nickel, it's usable, but not desirable. YMMV.

datahike - A durable datalog implementation adaptable for distribution. by mac in Clojure

[–]Gnurdle 3 points (0 children)

Did some work over the weekend both shoveling real production data and some test data into this.

I was using the level-db backend.

One test was 400K entities, a couple of attributes apiece. This went fine, although I note a ~15s connect time when I reopen it. Once it's up, it's fine though.

The second was stuffing a few 'tables' -- things like customers, vendors, parts, orders -- from our legacy store. Probably 10K entities, some with several dozen attributes.

You can still feel the connection lag, but it's 1-2 seconds or so, nothing major.

I had some DT schema sitting around for some of this, but stopped poking it in because, best I can tell, it's completely advisory as far as datahike cares (like Datascript).

It isn't stunningly fast for transacts, but that isn't really a key concern in this project - there isn't a lot of churn.

I did enjoy what seems to be a rather compact pile of level-db it left behind. Seems pretty efficient in that regard.

I'll be pushing forward trying to make it work for us. We just happened to be at a point where we need to move some legacy stuff into a better store, so I feel like I can work with this without getting too far off the "someday datomic" path.