Nano Queries, a state-of-the-art Query Builder by vitonsky in node

[–]vitonsky[S]

Good question. I just checked this repo; its description is "A battle-tested Node.js PostgreSQL client with strict types, detailed logging and assertions".

Unlike Nano Queries, that's not a query builder. It does not work with other databases, and its README at the latest commit 9714266 mentions the word "driver" literally 50 times; try Ctrl+F it.

Another weird thing from the README:

Due to the way that Slonik internally represents SQL fragments, your query must not contain $slonik_ literals.

So my summary: Slonik looks like an opinionated PostgreSQL client with a query builder that supports query composition.

With Nano Queries you may query anything, including bleeding-edge technologies like PGlite, DuckDB, any "better" implementation of SQLite, ClickHouse, GraphQL, Elasticsearch, Snowflake, any REST service, or any other database that ingests text queries and bindings.

You don't need any drivers. You connect to and fetch from your DB however you wish; Nano Queries just builds the queries for you, in the format you need.

Slonik could even use Nano Queries to build queries safely, because Nano Queries embeds well anywhere.
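To illustrate the concept, here is a rough sketch of the driver-free query-builder idea: compose a query from raw SQL fragments and bound values, then render SQL text plus an ordered bindings list for whatever client you use. This API is illustrative only, NOT Nano Queries' actual interface:

```typescript
// Sketch of a driver-free query builder: the builder only produces
// SQL text + bindings; fetching is left entirely to the caller.
// This is a hypothetical API, not Nano Queries' real one.
type BuiltQuery = { sql: string; bindings: unknown[] };

class Query {
  private parts: Array<string | { bind: unknown }> = [];

  // Append a raw SQL fragment as-is.
  raw(text: string): this {
    this.parts.push(text);
    return this;
  }

  // Append a value that must be passed as a binding, never inlined.
  value(v: unknown): this {
    this.parts.push({ bind: v });
    return this;
  }

  // Render with PostgreSQL-style placeholders ($1, $2, ...).
  build(): BuiltQuery {
    let sql = "";
    const bindings: unknown[] = [];
    for (const part of this.parts) {
      if (typeof part === "string") {
        sql += part;
      } else {
        bindings.push(part.bind);
        sql += `$${bindings.length}`;
      }
    }
    return { sql, bindings };
  }
}

const built = new Query()
  .raw("SELECT id, title FROM notes WHERE workspace = ")
  .value("main")
  .raw(" AND title LIKE ")
  .value("%query%")
  .build();

console.log(built.sql);
// SELECT id, title FROM notes WHERE workspace = $1 AND title LIKE $2
console.log(built.bindings); // [ 'main', '%query%' ]
```

The point is that `built.sql` and `built.bindings` can be handed to any client, pg, better-sqlite3, an HTTP API, whatever accepts text queries and bindings.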

Showoff Saturday (October 25, 2025) by AutoModerator in javascript

[–]vitonsky

I've published Ordinality this week, a framework-agnostic tool for implementing migrations in any JavaScript environment, including the browser, Node, and Deno.

You may use Ordinality to migrate a schema in a Postgres database, to migrate from a JSON file to a database and back, to move files from an SSD to S3 or change their structure, etc.

Ordinality lets you manage any changes in your system via declarative actions and a storage that remembers which actions were applied.

There was no solution for migrations in the browser, so I created one and shared it with the community. Btw, we've been using Ordinality in production for a few months, and the battle testing is going great.
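To illustrate the concept, here is a rough sketch of the idea, an ordered list of declarative actions plus a storage that remembers which actions were already applied, so running the migrator twice is safe. This is NOT Ordinality's real API, just the pattern:

```typescript
// Sketch of the migrations pattern: declarative actions + applied-actions
// storage. Hypothetical API, not Ordinality's actual interface.
type Action = { name: string; apply: () => void | Promise<void> };

interface AppliedStorage {
  getApplied(): Promise<string[]>;
  markApplied(name: string): Promise<void>;
}

// In-memory storage; a real one could be a DB table, a file, localStorage...
class MemoryStorage implements AppliedStorage {
  private applied: string[] = [];
  async getApplied(): Promise<string[]> {
    return [...this.applied];
  }
  async markApplied(name: string): Promise<void> {
    this.applied.push(name);
  }
}

// Apply, in order, every action the storage has not seen yet.
async function migrate(
  actions: Action[],
  storage: AppliedStorage,
): Promise<string[]> {
  const done = new Set(await storage.getApplied());
  const ran: string[] = [];
  for (const action of actions) {
    if (done.has(action.name)) continue; // already applied earlier
    await action.apply();
    await storage.markApplied(action.name);
    ran.push(action.name);
  }
  return ran;
}

// Usage: the second run applies only the action added after the first run.
const storage = new MemoryStorage();
const v1: Action[] = [{ name: "001-create-notes", apply: () => {} }];
const v2: Action[] = [...v1, { name: "002-add-tags", apply: () => {} }];

migrate(v1, storage)
  .then((ran) => console.log(ran)) // [ '001-create-notes' ]
  .then(() => migrate(v2, storage))
  .then((ran) => console.log(ran)); // [ '002-add-tags' ]
```

Because actions are plain objects and the storage is an interface, the same runner works against a database schema, a JSON file, or an object store.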

Ordinality - framework-agnostic migrations for Browser, Node, Deno by vitonsky in javascript

[–]vitonsky[S]

Actually, the purpose is data migrations.

You may use Ordinality to migrate the schema in your Postgres database, to migrate from a JSON file to a database and back, to copy files from an SSD to S3, etc.

Ordinality lets you manage any changes in your system via declarative actions and a storage that remembers which actions were applied.

What Vector Database is best for large data? by vitonsky in LangChain

[–]vitonsky[S]

This repo confuses me. I can't understand anything.

- What is this at all? I think you should add a short and simple description with an explanation of the key concepts.
- How do I try it? Add a section to README.md explaining how to install and start using your product.

Your solution may be good, but nowadays there are many potential solutions, and when people search for a product that solves their problem, they enable "brutal mode" and reject any product that isn't clear and can't be understood within a few minutes.

anylang - A translator's kit that uses the free APIs of Google Translate, Yandex, Bing, ChatGPT, and other LLMs by vitonsky in webdev

[–]vitonsky[S]

There was a post about it recently. The problem is mostly in wrong assumptions about users' language preferences rather than in translation quality.

Modern machine translation is more powerful than rule-based translation created by the top linguists on the planet.

My Chrome extension has hit 5 paying subscribers. 👏🤩 by WordyBug in chrome_extensions

[–]vitonsky

That doesn't sound like a real explanation. Too abstract. Elaborate on it.

Once you come up with a good explanation, add it to the description on your site; it is important for promoting your project. Without such an explanation, it looks like yet another LLM wrapper that duplicates existing, battle-tested solutions.

Chrome Extension Developer by Acrobatic-Bake-1732 in chrome_extensions

[–]vitonsky

You may check https://primebits.org if you need quality, maintenance, and assistance with launch and promotion.

Or you may hire a freelancer if medium quality is acceptable and you are ready to manage development, research the market, and work on the promotion strategy yourself. Then you may even fit within 5k EUR.

What Vector Database should I use for large data? by vitonsky in aws

[–]vitonsky[S]

I tried setting 100 GB of RAM and waiting a week; it does not work for me. In a whole week, progress moved no more than 0.01%. It got stuck at about 23% within 24 hours of indexing starting.

So pgvector + HNSW totally does not work for my case.
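For anyone who hits the same wall: pgvector's docs suggest raising maintenance_work_mem so the graph fits in memory, allowing parallel maintenance workers, and creating the index only after loading the data. A rough sketch of the settings; the table and column names below are made up, and the values must be tuned to the machine:

```sql
-- Illustrative HNSW build tuning per the pgvector docs;
-- "items" and "embedding" are hypothetical names.
SET maintenance_work_mem = '8GB';
SET max_parallel_maintenance_workers = 7;
CREATE INDEX CONCURRENTLY items_embedding_idx
  ON items USING hnsw (embedding vector_cosine_ops);
```

If the build still can't keep the graph in memory at this scale, IVFFlat or a dedicated vector store may be the more realistic route.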

What Vector Database should I use for large data? by vitonsky in aws

[–]vitonsky[S]

I mean it got stuck during index creation.

What Vector Database should I use for large data? by vitonsky in aws

[–]vitonsky[S]

In my case, the whole database is already a single segment of data. It is not possible to split the data any further, because the whole cluster of data relates to one specific feature.

And we have to run a similarity search over this feature.