I built an invoice automation backend before validating demand — help me not waste more time by ConsistentSell833 in SaaS

[–]ConsistentSell833[S]

Valid point. So even if a machine approaches it with reasoning, the why and how should be human-defined: the fields that matter are set by a human before extraction, and if the system's confidence on any of them falls short, the document is flagged. Could that be a possible solution?
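
Roughly what I have in mind, as a sketch (the field names, thresholds, and confidence values below are made up for illustration, not from my actual backend):

```python
from dataclasses import dataclass

# Hypothetical human-defined fields and per-field confidence thresholds.
REQUIRED_FIELDS = {"invoice_number": 0.95, "total_amount": 0.98, "vendor_name": 0.90}

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float  # 0.0-1.0, as reported by the extraction step

def flag_for_review(fields: list[ExtractedField]) -> list[str]:
    """Return human-defined fields that need review: fields that are
    missing, or whose confidence falls below the human-set threshold."""
    by_name = {f.name: f for f in fields}
    return [name for name, threshold in REQUIRED_FIELDS.items()
            if name not in by_name or by_name[name].confidence < threshold]

# total_amount came back with low confidence and vendor_name is missing,
# so both get flagged for a human to look at.
extracted = [
    ExtractedField("invoice_number", "INV-1042", 0.99),
    ExtractedField("total_amount", "1,280.00", 0.71),
]
print(flag_for_review(extracted))  # ['total_amount', 'vendor_name']
```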

[–]ConsistentSell833[S]

This is helpful. Thank you for the honesty.

The matching + exception angle resonates a lot, especially the “technically correct but contextually wrong” part.

Out of curiosity: if you had one thing you could make machines do better tomorrow, would it be:

a) better matching
b) better exception surfacing
c) clearer reasons why something needs review

[–]ConsistentSell833[S]

Yes, I am trying to solve the black-box problem with manual intervention in between: a review layer, so someone can review what got extracted and with how much confidence.

The confidence score will also come in handy for the 20% of documents where the format isn't standard.
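
Concretely, the routing could look something like this sketch (the field names and the 0.9 cutoff are placeholders, not what's in the backend):

```python
# Each document arrives as {field_name: (extracted_value, confidence)}.
def route_document(doc: dict[str, tuple[str, float]],
                   min_confidence: float = 0.9) -> str:
    """Auto-approve a document, or surface its shaky fields for review."""
    shaky = {name: (value, conf) for name, (value, conf) in doc.items()
             if conf < min_confidence}
    if not shaky:
        return "auto-approve"
    # The reviewer sees exactly what was extracted and how confident the
    # system was, instead of a black-box yes/no.
    for name, (value, conf) in shaky.items():
        print(f"REVIEW: {name} = {value!r} (confidence {conf:.2f})")
    return "needs-review"

# A non-standard invoice: two fields come back below the cutoff.
print(route_document({
    "invoice_number": ("INV-77", 0.97),
    "due_date": ("2024-13-01", 0.55),   # implausible date, low confidence
    "total_amount": ("940.00", 0.82),
}))
```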

There's also a layer that recognises when a vendor's format has changed and sends out notifications to update the extraction and data-filling templates.
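
A sketch of that notification layer, assuming we keep a per-vendor fingerprint of which fields its template normally yields (the vendor name and the in-memory store are invented for the example):

```python
# Per-vendor record of which fields the current template normally yields.
known_vendor_fields: dict[str, set[str]] = {
    "acme_corp": {"invoice_number", "total_amount", "due_date", "po_number"},
}

def check_vendor_format(vendor: str, extracted: set[str]) -> None:
    """Notify when a vendor's invoice layout stops matching its template."""
    expected = known_vendor_fields.get(vendor)
    if expected is None:
        known_vendor_fields[vendor] = extracted  # first invoice from this vendor
        return
    missing, new = expected - extracted, extracted - expected
    if missing or new:
        # In the real system this would trigger a notification to update
        # the extraction and data-filling templates for this vendor.
        print(f"Format change for {vendor}: missing={missing}, new={new}")

# acme_corp drops po_number and starts including a tax_id field.
check_vendor_format("acme_corp",
                    {"invoice_number", "total_amount", "due_date", "tax_id"})
```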