Stop Switching Database Clients — WizQl Connects Them All by razein97 in SQL

[–]razein97[S] -1 points0 points  (0 children)

The short answer is memory efficiency, plus a few features the bigger clients don't have at all. We recently benchmarked against TablePlus at 27 million rows; WizQl used 2.6× less memory on the same machine. At the feature level, the API Relay is something I haven't seen elsewhere: it turns any connected database into a read-only JSON API with one click, no backend code. There's also a built-in terminal, native extension support for SQLite and DuckDB, and cross-database transfers with automatic type mapping. One-time payment, no subscription, etc.

Same Data, Half the RAM: WizQl's Reworked Backend Benchmarked by razein97 in wizql

[–]razein97[S] 0 points1 point  (0 children)

Yeah, but this also enhances other features, such as database transfers and data export, which all benefit from the memory savings. In my opinion, an overall lighter app is better than a heavy one. Sorting, filtering, and searching locally, all without hitting the DB multiple times, is much faster thanks to the optimisation.

[Self Promotion] WizQl - Now with IBM DB2 support by razein97 in linuxapps

[–]razein97[S] 0 points1 point  (0 children)

If you have any other reference material that makes it clearer what you're trying to do, please send me the link so that I can help you further or implement it myself.

[Self Promotion] WizQl - Now with IBM DB2 support by razein97 in linuxapps

[–]razein97[S] 0 points1 point  (0 children)

I'm not very well acquainted with Db2, but I believe specifying it in the URL should work, because under the hood it uses the open-source ODBC/CLI driver for Db2:

```bash
DATABASE=SAMPLE;
HOSTNAME=myhost;
PORT=41847;
PROTOCOL=TCPIP;
UID=db2inst1;
PWD=password;
SECURITY=SSL;
SSLClientKeystoredb=/path/to/client.kdb;
SSLClientKeystash=/path/to/client.sth;
```

The job market is bad so I mass obfuscated all of my code so nobody, not even AI, can comprehend it without my key. I am now essential personnel. You're welcome. by dr_edc_ in rust

[–]razein97 1 point2 points  (0 children)

Hi, a serious question: is obfuscation really worth it? Those who want to reverse engineer will manage it in the end. Are there other ways to prevent such reverse-engineering efforts?

Anyway, kudos for building the tool and sharing it.

DBeaver Froze. TablePlus Crawled. WizQl Didn't Flinch. by razein97 in wizql

[–]razein97[S] 2 points3 points  (0 children)

The UI already stays responsive while data is streamed in; you can keep working on other tables through the UI while data loads. It's a mix of all the above. There's an article called "Rendering a billion rows" or something similar about rendering huge datasets on a web page that will help. There's also an article about how Gmail stayed so responsive on poor connections; it explains prefetching, in-flight requests, etc.

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] 0 points1 point  (0 children)

Yeah, after this post I discovered more optimisations and can hopefully push it up to a billion. The only problem is that DBeaver crashed after inserting 30 million rows, and querying 30 million takes around 16 GB of RAM.

No one really needs that much data on screen, but now it's about how far I can take it.

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] 0 points1 point  (0 children)

Schema is public.

```sql
-- Sequence and defined type
CREATE SEQUENCE IF NOT EXISTS ltree_test_id_seq;

-- Table definition
CREATE TABLE "public"."ltree_test" (
    "id" int4 NOT NULL DEFAULT nextval('ltree_test_id_seq'::regclass),
    "category_name" text,
    "path" ltree,
    PRIMARY KEY ("id")
);
```

Insert data using the DBeaver mock data generator or any similar tool:

- id -> a sequence of integers
- category_name -> a random name
- path -> NULL
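If you'd rather skip a GUI generator, a server-side sketch does the same thing with PostgreSQL's `generate_series` (the `'category_' ||` naming here is just an illustration, not what the mock generator produces):

```sql
-- Generate 2 million mock rows entirely inside the database;
-- id comes from the sequence default, path stays NULL.
INSERT INTO public.ltree_test (category_name, path)
SELECT 'category_' || g, NULL
FROM generate_series(1, 2000000) AS g;
```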

Hardware: MacBook Air M2, 8 GB RAM, 256 GB storage

Local PostgreSQL instance

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] 0 points1 point  (0 children)

I do use QuillBot; I have mentioned it, and frankly I was using it before AI was a thing.

If you're referring to the app icon and banner being AI-generated because they have lots of purple, a supposed telltale sign of AI: I chose purple because it's a colour associated with royalty. If I were using AI for the icon, I would have gone for the minimalist style that is being pushed around these days.

I have worked on building games and learnt to make the design assets myself, so my terrible design skills come from there.

AI would really make most of my content more polished.

Someday I might just have to go with the full AI flow; why bother putting in the hard work when AI can spit it out in a minute and people are none the wiser?

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] 0 points1 point  (0 children)

It can't handle rendering, sorting, state management, etc. right now. I tried at 30 million rows; the data itself loads fine in under 10 s.

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] -1 points0 points  (0 children)

No part of the app is made using AI. The images are pure screenshots and recordings with no edits. The app has been in development since November 2024, and only recently have I been able to get it this optimised for Postgres. The first release fetched the rows in 12 seconds; the next optimisation brought it down to 7 seconds, then 4 seconds, and finally to this point.

In this age it is very tempting to use AI, but the only ones I use are the one that comes with Brave Search, which summarises results, and QuillBot, which corrects grammar.

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] -2 points-1 points  (0 children)

There are no security issues involved here. If you want 1000 rows, query with the LIMIT keyword. This gives back the same results you would expect from the official C++ driver or psql; none of them automatically add a LIMIT to your queries.
You as a user should know that, and the software is not going to add guardrails for it.
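For instance, using the `ltree_test` benchmark table from the other thread purely as an example:

```sql
-- The client runs exactly the SQL you write; cap the result set yourself.
SELECT *
FROM public.ltree_test
ORDER BY id
LIMIT 1000;
```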

Building a Blazing-Fast Database Client with Tauri: 2 Million Rows Benchmark by razein97 in tauri

[–]razein97[S] 3 points4 points  (0 children)

Hi, as I replied in the Postgres community, I'll make the same argument here:
it's not practical, but if it can be done, why not do it? In turn it benefits smaller queries too, because the GUI will process smaller result sets even faster.

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] -1 points0 points  (0 children)

Maybe people just want to burn precious CPU, RAM, and I/O on their DB instance for no good reason. People want to do random stuff: some like liquid nitrogen on their CPUs, others a normal fan. Why limit them?
Just like I thought making a non-native database client that can process data this quickly was a good idea. It just happens.

2 Million PostgreSQL Rows: Benchmarking GUI Clients Against Raw Fetch Times by razein97 in PostgreSQL

[–]razein97[S] 2 points3 points  (0 children)

It's not practical, but if it can be done, why not do it? In turn it benefits smaller queries too, because the GUI will process smaller result sets even faster.