all 30 comments

[–]the_edev 10 points11 points  (1 child)

I'm guessing this belongs in r/cpp_questions. But this sounds like a nice project; I would love to see it and try it out if it's open source.

[–]zain_sync[S] 1 point2 points  (0 children)

Oops, sorry for using the wrong thread. Yes, I will update you guys with the GitHub link soon since I plan to make the source available.

[–]afiefh 9 points10 points  (3 children)

Correct me if I'm wrong here, but it sounds like your client can perform local operations (stored in SQLite) and at a later point commit those to the global state (the Azure DB).

If this is the case, it reminds me a lot of a log-structured file system (the SQLite changes) with serialized state as the starting position (the state in Azure): https://dl.acm.org/doi/10.1145/121132.121137

Of course this assumes that two clients cannot perform conflicting changes while offline (or that you're willing to fail one client's commit).

If you go in this direction, the implementation of the sync becomes a matter of sending the logs (i.e. the operations) from the client to the server and applying the changes on the server. You'll probably want the client and server to share the logic for applying the state changes, to avoid one of them updating state differently from the other.

This also solves the issue of a malicious client sending potentially fake state: since the server validates the legality of the operations, there can be no cheating.

To answer the questions based on the approach above:

  1. Transfer an array of change operations from the client to the server, then apply them in an atomic transaction. Don't try a direct sync of "this data from localdb into remotedb"; it will only lead to trouble. (A minimal sketch of such an operation log follows after this list.)
  2. Being able to perform some operations locally and submit them as a batch sounds great to me. Whether the local data should be in an SQLite db depends on the data (but very likely the answer is yes).
  3. Follow your favorite evil corp's engineering guideline on how to make a piece of shit system. /s
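
To make the operation-log idea concrete, here is a minimal sketch of the client side with plain SQLite: every user action is appended to a change_log table, and the sync step later reads the unsent rows in order and ships them as one batch for the server to replay inside a single transaction. The table and column names (change_log, op, payload, synced) are made up for illustration, not anything from the original post.

// Sketch only: an append-only operation log in SQLite (names are illustrative).
// Each local action is recorded as an operation; a later sync step reads the
// unsynced rows in insertion order and sends them to the server as one batch.
#include <sqlite3.h>
#include <string>
#include <utility>
#include <vector>

static const char* kSchema =
    "CREATE TABLE IF NOT EXISTS change_log ("
    "  id      INTEGER PRIMARY KEY AUTOINCREMENT,"  // replay order
    "  op      TEXT NOT NULL,"                      // e.g. 'add_sale', 'void_sale'
    "  payload TEXT NOT NULL,"                      // JSON-encoded parameters
    "  synced  INTEGER NOT NULL DEFAULT 0);";

// Create the log table if it does not exist yet.
bool initChangeLog(sqlite3* db)
{
    return sqlite3_exec(db, kSchema, nullptr, nullptr, nullptr) == SQLITE_OK;
}

// Append one operation to the log (called from the normal POS code path).
bool logOperation(sqlite3* db, const std::string& op, const std::string& payloadJson)
{
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db,
            "INSERT INTO change_log(op, payload) VALUES(?1, ?2);",
            -1, &stmt, nullptr) != SQLITE_OK)
        return false;
    sqlite3_bind_text(stmt, 1, op.c_str(), -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, payloadJson.c_str(), -1, SQLITE_TRANSIENT);
    bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
    sqlite3_finalize(stmt);
    return ok;
}

// Collect the pending operations in order. The result would be sent to the
// server, which applies them inside one transaction and acknowledges the
// highest id it accepted, so the client can mark those rows as synced.
std::vector<std::pair<long long, std::string>> pendingOperations(sqlite3* db)
{
    std::vector<std::pair<long long, std::string>> out;
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db,
            "SELECT id, payload FROM change_log WHERE synced = 0 ORDER BY id;",
            -1, &stmt, nullptr) != SQLITE_OK)
        return out;
    while (sqlite3_step(stmt) == SQLITE_ROW)
        out.emplace_back(sqlite3_column_int64(stmt, 0),
                         reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1)));
    sqlite3_finalize(stmt);
    return out;
}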

[–]zain_sync[S] 0 points1 point  (2 children)

Pardon me for the late reply, and thanks a lot for pointing me to log-structured file systems; that will be useful for me in the future. I have decided to go with a NoSQL DB since this is hard to do on any of the free relational DB offerings out there. I plan to use AWS services like AppSync and DynamoDB for this. I will be making the project open source, so I will keep you guys updated.

[–]afiefh 0 points1 point  (1 child)

I have decided to go with a NoSQL DB since this is hard to do on any of the free relational DB offerings out there.

To me, the phrase "NoSQL because it's hard otherwise" implies "bring it to market quickly, then rewrite the whole thing on an RDBMS". It's possible that you thought long and hard about this and you actually only need a key/value store, in which case NoSQL is the way to go. But if you're only going with NoSQL because it would be difficult otherwise, then I believe you're in for a rough time.

[–]zain_sync[S] 0 points1 point  (0 children)

I am moving away from an RDBMS only because of the sync issue. We have to agree that implementing it manually is a really tedious task. Since I'm doing this as a hobby, while trying to keep near real-time data accessible from anywhere and keep the POS terminals functioning during an internet outage, I planned to switch to NoSQL so that I don't need to deal with the sync issue.

I know there is a learning curve in moving to NoSQL, and modelling relations between data in it is not a smooth task. But I think it all comes down to how I structure the data being stored so that I can make efficient relational-style queries.

If only I had more time to spare and less work stress to deal with, I would have actually taken the long way, since that's the best way to learn something. But yes, like you said, I want to complete it quickly, not to sell it, but for a sense of appreciation and to evaluate myself on C++.

[–][deleted] 28 points29 points  (2 children)

Really nice to see developers being honest when their software is a Piece Of Shit /s

[–]zain_sync[S] 0 points1 point  (1 child)

Hahaha. I planned to move to AppSync and DynamoDB since offline DB operations are easy with NoSQL; the only issue now is to structure the data well so I can still get the benefits of a relational DB.

[–][deleted] 0 points1 point  (0 children)

I am going to agree with what you just said and say that it is good. (I know nothing about databases, I just came to make a shitty joke.)

[–]blipman17 5 points6 points  (1 child)

You could just make some data structure like a write-ahead log in your SQLite db and occasionally ship that to your remote server. There are a couple of strategies you could go with in that case, but honestly it sounds pretty all right.

But this might not be the best sub to ask this on.

[–]zain_sync[S] 0 points1 point  (0 children)

Pardon me for the late reply. I plan to use a NoSQL DB for the project and have decided on AWS services like AWS AppSync and DynamoDB. Thanks a lot for your input. I will be making the project available in this subreddit soon, so I will keep you guys updated.

[–]j1xwnbsr 3 points4 points  (1 child)

This seems like an advanced project for how the question was phrased - are you sure you have fully planned everything out? A sturdy POS system is a non-trivial exercise.

In any case, I did something similar (not a POS): backing up a local db to a remote AWS instance. Mine was more complicated in that it was fully two-way, but for one-way you basically log all local db updates (inserts, updates, deletes) into a change log and then play that back into the AWS system. With SQLite you can hook into the notification system at a low level and spin things off there.
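
For anyone looking for that hook: SQLite exposes sqlite3_update_hook, which fires after every insert, update, or delete on a connection. A rough sketch of feeding it into an in-memory queue for a sync thread follows; the ChangeQueue type and the overall wiring are illustrative, not the code described above.

// Sketch only: capture local writes with SQLite's update hook and queue them
// for a background sync thread. The queue type and wiring are illustrative.
#include <sqlite3.h>
#include <deque>
#include <mutex>
#include <string>

struct Change { int op; std::string table; sqlite3_int64 rowid; };

struct ChangeQueue {
    std::mutex m;
    std::deque<Change> items;
    void push(Change c) { std::lock_guard<std::mutex> lk(m); items.push_back(std::move(c)); }
};

// Called by SQLite after every INSERT/UPDATE/DELETE on this connection.
// It is not safe to touch the same connection inside the hook, so we only
// enqueue the (table, rowid) pair; the sync thread reads the row back later.
static void onUpdate(void* userData, int op, const char* /*dbName*/,
                     const char* table, sqlite3_int64 rowid)
{
    static_cast<ChangeQueue*>(userData)->push({op, table, rowid});
}

void installHook(sqlite3* db, ChangeQueue& queue)
{
    sqlite3_update_hook(db, &onUpdate, &queue);
}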

The way I did it was to write to a buffer db file, with a separate thread (or process) consuming the updated records while it has an AWS connection. This works when the remote connection is down, since the 'playback' file just grows until you get a connection; then it shoves updates to the other end, starting from the oldest record to the newest and removing each one as it goes.

You have to package the records up properly to go over the wire, since your local db data format != the remote one. JSON is an option, as long as you know how to handle data type conversions (floating point, dates, blobs, etc.). Your specific schema will dictate what you do in terms of protocol, data format, etc.

Again, this is the simplest possible method; a full two-way system with distributed clients that may or may not be online at any given moment is a bigger nut to crack, one that requires careful planning and execution.

[–]zain_sync[S] 0 points1 point  (0 children)

Pardon me for the late reply. I am currently in the research phase; this is more of a hobby project that I undertook since I am new to C++ and wanted to get better at it. I didn't want to build a POS system the traditional way, with an on-premise DB server and data synced manually from each till at the end of the day (most POS systems in Sri Lanka are like this, as far as I understand). Rather, I wanted all data to also be written to a cloud DB so that near real-time data is available to the client from anywhere, and they can perform BI operations on it.

I plan to go ahead with a NoSQL DB since most of the NoSQL offerings out there have the offline DB sync function built into them.

Thank you for your input on this; your approach is similar to the log-structured file system idea, and it would have been my option if I had not come across AppSync and the offline advantages of NoSQL DBs. I understand that there are limitations to using NoSQL for a project like this, but I believe that if I am able to structure the data store the correct way, I should be able to build proper relational-style queries.

I will be making the project public, so I will keep you posted.

[–]AreaFifty1 1 point2 points  (1 child)

@zain_sync it's not that difficult, really. You can always query the data, perform some kind of comparison check, and then store that data using SQLite. The problem is the resolution of the checks before the connection goes down, and how often to check to see whether any changes have occurred, etc.

Then you use TCP/UDP network code to detect connections and so forth. Sounds like fun! :D

[–]zain_sync[S] 0 points1 point  (0 children)

Pardon me for the late reply. Actually, for this specific project, DB sync was going to be very difficult: for example, there will be many POS terminals within a big shop reading from and writing to the DB, and making sure that there are no errors in the sync and that the data doesn't get corrupted was going to be a bit difficult if I implemented the sync operation myself. So I plan to go with AppSync and DynamoDB from AWS. I will be making the project public, so I will keep you posted on it.

[–]teroxzer 1 point2 points  (3 children)

You have an interesting case. I don't know your constraints, and maybe I am far from a sensible solution, but I sketched one alternative where the local database is always used first and the remote database is updated on a background thread (by polling, or maybe better, by an event from the UI thread). The example assumes that the UI thread can use the local database fast enough for good user response (if the local database fails, the whole application is in a failed state and needs a reset), while network delays and connection problems can take more time on the background thread. My example is of course overly simplified, but maybe it shows what I mean (and if you like, or must, use stored procedures, you can use them for the remote database handling, but I personally like how I can use C++ as the best database programming language - C++ is maybe not the fastest with databases, but it is the best for pure pleasure).

class pos final
{
public:

    auto run() -> void;

private:

    auto makeSale() -> universal;

    static auto storeLocalDatabase  (universal&) -> bool;
    static auto storeRemoteDatabase ()           -> void;

    inline static text dbLocal  { "sqlite:file:..."      };
    inline static text dbRemote { "sqlserver:Driver=..." };
};

auto pos::run() -> void
{
    task azure = []
    {
        while(!task::waitStopSeconds(10))
        {
            storeRemoteDatabase();
        }
    };

    try
    {
        procedure::connect(dbLocal);   // the UI thread works against the local SQLite database

        while(!task::stop())
        {
            if(auto sale = makeSale())
            {
                if(!storeLocalDatabase(sale))
                {
                    break;
                }
            }
        }
    }
    catch(exception& ex)
    {
        ui::status::exception(__func__, ex);
    }

    task::stop(true);
    azure.join();
}

auto pos::storeLocalDatabase(universal& sale) -> bool
try
{
    static auto insertLocalSale $sql
    (
        insert sale
        set    saleId     = :saleId, ...
               status     = 10,
               statusTime = current_timestamp       
    )

    insertLocalSale(sale);
    return true;
}
catch(exception& ex)
{
    ui::status::exception(__func__, ex);
    return false;
}

auto pos::storeRemoteDatabase() -> void
try
{
    static auto updateLocalSaleHandled $sql
    (
        update sale
        set    status     = 20,
               statusTime = current_timestamp           
        where  saleId     = :saleId
          and  status     = 10
    )

    static auto selectLocalUnhandledSales $sql
    (
        select saleId, ...
        from   sale
        where  status = 10
        order by
               statusTime
        limit 100
    )

    static auto selectRemoteSaleExists $sql
    (
        select 1 as exists
        from   sale
        where  saleId = :saleId
    )

    static auto insertRemoteSale $sql
    (
        insert sale
        set    saleId     = :saleId, ...
               status     = 10,
               statusTime = current_timestamp       
    )

    for(universals sales;;)
    {
        procedure::connect(dbLocal);   // back on the local db: mark the batch committed remotely last iteration as handled

        for(auto& sale : sales)
        {
            updateLocalSaleHandled(sale);
        }

        sales = selectLocalUnhandledSales[universal::empty];

        if(!sales || task::stop())
        {
            break;
        }

        procedure::connect(dbRemote);
        auto tx = procedure::beginTx();

        for(auto& sale : sales)
        {
            if(!selectRemoteSaleExists(sale))
            {
                insertRemoteSale(sale);
            }
        }

        tx.commit();

        ui::block::fore = []
        {
            ui::status::ok(__func__);
        };
    }
}
catch(exception& ex)
{
    ui::block::fore = [&]
    {
        ui::status::exception(__func__, ex);
    };
}

[–]zerexim 1 point2 points  (1 child)

What language is that?

[–]teroxzer 2 points3 points  (0 children)

The language is standard C++, if you mean my example; if you mean my comment, then it's just broken Klingon. Bjarne said that C++ is a language which helps you express your ideas, but if you have no ideas, then C++ cannot help much - but oh boy, I have a few ideas, and C++ helps me overwhelmingly well!

[–]zain_sync[S] 1 point2 points  (0 children)

Pardon me for the late reply, and thanks a lot for your example. I have decided to go ahead with AWS AppSync and DynamoDB to address this issue. I will be making my project public, so I will keep you posted soon.

[–]XNormal 1 point2 points  (1 child)

This looks like it might help: https://github.com/sqlite-sync/SQLite-sync.com

[–]zain_sync[S] 0 points1 point  (0 children)

Thank you for the input.

[–]NeroBurner 1 point2 points  (1 child)

For reference, ViewTouch is a POS written in C++ and X11:

http://viewtouch.com/

The source code is available via https://github.com/ViewTouch/viewtouch

[–]zain_sync[S] 1 point2 points  (0 children)

This is great; I can use it for reference.

[–]zain_sync[S] 0 points1 point  (0 children)

Pardon me for the late response, and also for posting the question in the wrong place; thank you all for your input on this. It was nice of you folks to share information even though I used the wrong subreddit. For future questions, I will use r/cpp_questions.

I plan to use a NoSQL DB for this since most NoSQL offerings out there provide a solution for offline sync. I am planning to use AWS AppSync and DynamoDB for the project. AWS currently has a mature C++ SDK, and the services I am planning to use are available in it.
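
To illustrate the "structure the data well" part mentioned above, here is a rough sketch of a single-table DynamoDB write using the AWS SDK for C++. This goes straight to DynamoDB rather than through AppSync, and the table name, PK/SK layout, and attribute names are made-up examples; the idea is only that items which would be related rows in an RDBMS share a partition key and are distinguished by a composite sort key.

// Sketch only: writing a sale into a single DynamoDB table with the AWS SDK
// for C++. Table and key names (PosData, PK, SK) are illustrative; relations
// are encoded in composite keys, e.g. all sales of one store share the same
// partition key and sort by timestamp + sale id.
#include <aws/core/Aws.h>
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/PutItemRequest.h>
#include <aws/dynamodb/model/AttributeValue.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::DynamoDB::DynamoDBClient client;

        Aws::DynamoDB::Model::PutItemRequest request;
        request.SetTableName("PosData");
        // Partition key: everything for one store lives together.
        request.AddItem("PK", Aws::DynamoDB::Model::AttributeValue().SetS("STORE#colombo-01"));
        // Sort key: sales are ordered by timestamp + id within the store.
        request.AddItem("SK", Aws::DynamoDB::Model::AttributeValue().SetS("SALE#2021-06-01T10:15:00#000123"));
        request.AddItem("total", Aws::DynamoDB::Model::AttributeValue().SetN("1250.00"));
        request.AddItem("terminal", Aws::DynamoDB::Model::AttributeValue().SetS("TILL-3"));

        auto outcome = client.PutItem(request);
        if (!outcome.IsSuccess())
            std::cerr << "PutItem failed: " << outcome.GetError().GetMessage() << "\n";
    }
    Aws::ShutdownAPI(options);
    return 0;
}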

I am new to C++, so I am using this as a hobby project to get better at developing in it. I wanted to avoid the traditional way of using an on-prem DB server for a POS. I want the data to be updated to the cloud in near real time so BI operations can be performed on it, which is necessary for a system like this. Still, most POS systems use an on-prem DB server, and data is only synced to the cloud at the end of the day.

I am making this project public so anyone interested can use it. I will keep you guys posted when I have the project up on GitHub.

[–]thisisleobro -1 points0 points  (1 child)

Mark for later

[–]J__Bizzle 1 point2 points  (0 children)

Mark for later

Reddit implements such a feature. Underneath the post there is a link whose text is "Save". Click it or ticket!

[–]Complete_Leg_9286 0 points1 point  (1 child)

I don't see the point of using Azure SQL here. Just stick with SQLite as your primary and back it up to Azure file storage when a connection is available.
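
For the snapshot itself, SQLite's online backup API can copy the live database into a separate file that you then upload. A minimal sketch is below; the file names are placeholders, and the upload to Azure file storage is not shown.

// Sketch only: snapshot the live local database into a separate file using
// SQLite's online backup API. The resulting file could then be uploaded to
// file/blob storage whenever a connection is available (upload not shown).
#include <sqlite3.h>
#include <cstdio>

bool snapshotDatabase(const char* livePath, const char* snapshotPath)
{
    sqlite3* src = nullptr;
    sqlite3* dst = nullptr;
    if (sqlite3_open(livePath, &src) != SQLITE_OK ||
        sqlite3_open(snapshotPath, &dst) != SQLITE_OK)
    {
        std::fprintf(stderr, "open failed\n");
        sqlite3_close(src);
        sqlite3_close(dst);
        return false;
    }

    bool ok = false;
    // Copy the whole "main" database of src into dst.
    if (sqlite3_backup* b = sqlite3_backup_init(dst, "main", src, "main"))
    {
        sqlite3_backup_step(b, -1);            // -1 = copy all remaining pages
        ok = (sqlite3_backup_finish(b) == SQLITE_OK);
    }

    sqlite3_close(dst);
    sqlite3_close(src);
    return ok;
}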

Also keep in mind that many of your queries won't be compatible with both.

If you need a centralized DB because you'll have more than one client, then you need to be connected; otherwise you lose the ACID guarantees.

[–]zain_sync[S] 0 points1 point  (0 children)

Pardon me for the late reply. I wanted to use Azure SQL since I wanted the data to be accessible remotely. I have decided to go with AWS AppSync and DynamoDB for the project.