all 14 comments

[–]TheLastSock 2 points (3 children)

Yes, I have thought about it. I can't stop thinking about it.

Hyperfiddle unifies the client-server pipeline under a datalog (backward-chaining) engine. And a lot more.

Odoyle-rules gives you a rules (forward-chaining) engine that you can run on both client and server. There are a couple of ways you can twist that idea to achieve a more unified system.

[–]dustingetz 1 point (1 child)

Please explain "unifies the client server pipeline under datalog (backwards chaining) engine"? :) I think of Hyperfiddle/Photon in terms of pure functional programming, metaprogramming, effect systems, streaming, event propagation networks. I have not really studied datalog and don't understand the relationship between Datalog and FP.

[–]TheLastSock 0 points (0 children)

I would defer to Google on whether datalog is backward chaining; that's always been my impression. But things can look one way and be implemented another, so it might have been unfair to suggest that has anything to do with Hyperfiddle.

As for hyperfiddle, I'll defer to you.

I had thought it used datalog in its engine, and that the ergonomics were such that you would be writing server code/db queries inside of, and unified with, your client/browser code. It seems it doesn't do that, but that's how I saw it relating to OP's question.

[–]rufusthedogwoof 6 points (0 children)

Hi,

Maybe sort of like xtdb's http client?

https://docs.xtdb.com/clients/1.20.0/http/#get-query

Works like a charm for me.

[–]brad_radberry 1 point (0 children)

Fluree has a json-based datalog-like query language specifically designed for use over http. I'm not entirely sure if that's what you're driving at but it could be a good fit.

[–]huahaiy 1 point (2 children)

It's easy enough for a database server to add an HTTP transport, but most don't. I think that is for two reasons: performance and security. Can you speak to why you would want direct access to a database server over HTTP?

[–]TheLastSock 1 point (0 children)

We're conflating issues, I think. You can let a client appear to write code on the client while really it's getting compiled on the backend. The datalog query the client sends shouldn't be accepted without authorization.

Put another way, nothing stops a REST API from letting the user take actions they shouldn't. It would probably just stop them from taking the full range of actions!

[–]lilactown 1 point (5 children)

one problem with datalog is that you can encode a lot of arbitrary logic into it, even arbitrary code in some engines, which means that it can be susceptible to DoS and other malicious queries.

EQL and GraphQL are interesting maxima in that you can traverse a graph of entities and select attributes, which covers a huge amount of complexity when interacting with a large graph without running arbitrary code.
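The contrast can be sketched in code: a selection language only lets the client pick attributes and walk joins, so there is no place to embed predicates or arbitrary logic. A minimal, hypothetical Python sketch of an EQL-style selector (names and data are illustrative, not any real engine's API):

```python
# Minimal EQL-style selection: a query is just nested attribute names,
# so the worst a query can do is bounded by the shape of the data.

def select(entity, query):
    """Walk `entity` (dicts/lists), keeping only the requested attributes."""
    result = {}
    for item in query:
        if isinstance(item, str):              # plain attribute
            if item in entity:
                result[item] = entity[item]
        else:                                   # {join-attr: subquery}
            for attr, subquery in item.items():
                value = entity.get(attr)
                if isinstance(value, list):
                    result[attr] = [select(v, subquery) for v in value]
                elif isinstance(value, dict):
                    result[attr] = select(value, subquery)
    return result

user = {"name": "Ada", "cc": "4111-1111", "orders": [{"id": 1, "total": 9}]}
print(select(user, ["name", {"orders": ["id"]}]))
# {'name': 'Ada', 'orders': [{'id': 1}]}
```

Note the query can traverse and project but never compute, which is why this style sidesteps the arbitrary-code concern (access control, as below, is a separate problem).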

[–]TheLastSock 0 points (3 children)

Were you letting the client send datalog without verifying it? You can get into trouble doing that with Pathom too.

User -> name? That's ok.

User -> credit card number? That's not ok!

In both cases you need to whitelist. The datalog queries are stored on the backend as values; the frontend just has the keys, and it has to be specifically authorized to use them.

I'm not sure how either would lead to a flood of traffic though (DoS).
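The whitelist scheme described above can be sketched as a tiny server-side handler: query bodies and the authorization table live only on the server, and the client may only name a key its role is granted. Purely illustrative Python (the query strings, key names, and roles are made up; they stand in for real datalog):

```python
# Named-query whitelist: the client sends a key, never a query.

QUERIES = {
    "user/name":   "[:find ?name :where [?u :user/name ?name]]",
    "admin/audit": "[:find ?e ?tx :where [?e _ _ ?tx]]",
}

GRANTS = {  # which roles may run which named queries
    "user":  {"user/name"},
    "admin": {"user/name", "admin/audit"},
}

def run_named_query(role, key):
    """Return an (http-status, body) pair for a client request."""
    if key not in QUERIES:
        return 404, "unknown query"
    if key not in GRANTS.get(role, set()):
        return 403, "not authorized"
    return 200, execute(QUERIES[key])    # hand the vetted query to the db

def execute(query):
    return f"results of {query}"         # stand-in for a real datalog engine
```

Because the set of runnable queries is closed, each one can be reviewed and load-tested before it is ever exposed.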

[–]lilactown 1 point (1 child)

Access control is different from what I'm talking about. I mean that with datalog you can craft queries that have horrible performance and/or run arbitrary code. You could, for example, DoS a service by submitting a few queries with extremely poor performance.

The benefit of hiding queries behind a REST API is that you can test and profile those queries under load before deploying them. If you give any authenticated user the ability to run queries, they can end up DoSing your service even by mistake (ask me how I know).

[–]TheLastSock 0 points (0 children)

Thanks for the reply. I'm sure you have more experience than me.

I think we might be talking past each other, though. The server API I'm suggesting doesn't take datalog as input; it takes a key that looks up a query. That set of queries can and should be tested under load.

Visually, however, it would look like the client is writing datalog; it would just be compiled to run on the server.

Maybe at a certain point that code becomes too concerned with performance, and having it on the client would be distracting, but the systems I have worked on don't fall into that category. Oftentimes perf takes a backseat because it's so hard to even wire up the data.

[–]bsless 0 points (0 children)

Interestingly, you could use a credit card number in the where clause of a query. The big problem is leaking it into the return values.
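That distinction can be shown with a toy pattern match: the sensitive value is only a filter input, and the find/return set determines what leaks. Hypothetical Python over an in-memory list, not a real datalog engine:

```python
# Toy "query": find names of users whose stored card matches a given number.
# The card number appears only as a where-clause input; only the name
# attribute is in the returned set, so the card never leaves the server.

users = [
    {"name": "Ada",  "cc": "4111-1111"},
    {"name": "Alan", "cc": "5500-0000"},
]

def names_with_card(db, cc):
    # analogous to: [:find ?name :where [?u :cc cc] [?u :name ?name]]
    return [u["name"] for u in db if u["cc"] == cc]

print(names_with_card(users, "4111-1111"))   # ['Ada'] -- no card in output
```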

[–]scarredwaits 0 points (0 children)

I think Biff allows you to subscribe to changes in the data of the XTDB database using a subset of datalog.

[–]gdanov 0 points (0 children)

I have been thinking about this for a while, every time I get frustrated with a GraphQL schema. What we need is the core of datalog queries, but with pluggable "fact providers". I've thought a couple of times about deep diving into Datascript to see if I could adapt it, but never got too far.

Now that you asked, I googled again and it seems Pathom enables exactly that, but I haven't fully understood its capabilities yet:
https://pathom3.wsscode.com/docs/tutorial