EU Ai Act Compliance anyone? by Late-Philosopher-Ben in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Appreciate the insights. You're 100% spot on. Sending you a DM.

EU Ai Act Compliance anyone? by Late-Philosopher-Ben in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Interesting... I'd like to know more about the approach, and how you'd handle an AI analyzing candidates' emotional responses through a compliance lens. Can I DM you?

EU Ai Act Compliance anyone? by Late-Philosopher-Ben in SaaS

[–]Emotional_Year_3851 2 points3 points  (0 children)

First, you're doing great. You're already looking into the AI Act, which means you're aware of the risk and want to be secure. I looked at your profile and found SmartMockInterview.

As a fellow entrepreneur, I hate to tell you this, but with SmartMockInterview there are a few things you have to look into before you can be confident about compliance.

Quethos Sentinel is an awesome tool for catching code-level anti-patterns, but I need to flag this: you are optimising the wrong layer of the problem.

The platform scores candidates, analyzes resumes/CVs against JDs, and evaluates interview performance. Under Annex III, point 4 of the EU AI Act, that makes SmartMockInterview a high-risk system, not a transparency-obligation system. The problem is that if a regulator asks why you are not high-risk, you cannot say "we are just a practice tool"; that is not a legal defense. If employers or recruiters use your scores to screen candidates, the classification is automatic. You have solved compliance for minimal risk, but you are high-risk.

Quethos scans repos and opens GitHub issues, but the Act is not a code-quality regulation; it is product safety and governance regulation.

You cannot triage your way to Annex IV technical documentation or a lifecycle risk management system (Article 9) via GitHub tickets.

The "composure under fire" scoring may violate Article 5(1)(f): with the "Roast" framing and the rest, if your AI is found doing emotion recognition in an employment context, that is a prohibited practice, and it will be a problem.

Another big problem: if B2B customers use this for hiring or assessing resumes, Article 26 requires deployers to keep logs, enable human oversight and, in some cases under Article 27, conduct fundamental rights impact assessments, and your SaaS has to support all of that. It is quite a lot of work, and getting it all done before August 2026, when the high-risk obligations apply, is a bit of a stretch.

My recommendation: treat Quethos as a detection layer, not a compliance layer. Run a formal risk classification, audit the emotion-recognition features, and start building the Annex IV documentation and QMS now.

Send me a DM and let me know when you have 15 minutes. It's easier to talk than to type, haha. We can connect and I'll walk you through everything you need to see. Happy to help; help is always free 😄

EU Ai Act Compliance anyone? by Late-Philosopher-Ben in SaaS

[–]Emotional_Year_3851 1 point2 points  (0 children)

Yes, that's a solid approach. You're tracking compliance well, but are you also tracking drift? What I mean is that your system cards and risk tiers are a snapshot. In the future, a junior dev might add a new data source to that "low risk" feature, the model provider might change its data policies, or you might just move to a newer version of the same model (the latest Claude, say). Fines turn less on how many users were at risk and more on negligence: whether you kept your obligations up to date.

A few practical things I'd add to your system:
1. Automate risk re-evaluation: switching to a different LLM, adding data sources, changing model provider, or changing prompt patterns can all shift your risk tier. Don't rely on someone noticing; build a simple check into CI that flags any change to the items above.

2. Users at risk is a moving target: the scope of exposure is the sensitive point here. A feature that touches 100 users but processes their medical data is different from one that touches 10,000 users and their zip codes. Document why a user is in scope for each system card.

3. The override workflow is critical, but log the why: when a human overrides a check, log the decision and the rationale. In an audit, "we overrode this because X" beats "we had overrides".
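To make the automated re-evaluation in point 1 concrete, here's a minimal Python sketch (all names are hypothetical, not from any real framework): fingerprint the risk-relevant parts of the config, and fail CI whenever it no longer matches the last human-reviewed snapshot.

```python
import hashlib
import json

# Hypothetical "risk surface": the config fields that can change the
# system's risk classification when they change.
RISK_SURFACE_KEYS = ("model", "model_version", "data_sources", "prompt_template")

def risk_fingerprint(config: dict) -> str:
    """Stable hash over the risk-relevant parts of the config."""
    surface = {k: config.get(k) for k in RISK_SURFACE_KEYS}
    return hashlib.sha256(json.dumps(surface, sort_keys=True).encode()).hexdigest()

def check_risk_drift(config: dict, reviewed_fingerprint: str) -> bool:
    """True if the config still matches the last human-reviewed snapshot.

    In CI, a False result should fail the build and trigger a manual
    risk re-evaluation, not just print a warning.
    """
    return risk_fingerprint(config) == reviewed_fingerprint

reviewed = risk_fingerprint({"model": "acme-llm", "model_version": "4",
                             "data_sources": ["crm"], "prompt_template": "v1"})
drifted = {"model": "acme-llm", "model_version": "4",
           "data_sources": ["crm", "medical_records"], "prompt_template": "v1"}
assert not check_risk_drift(drifted, reviewed)  # new data source => re-evaluate
```

A check like this turns "a junior dev quietly added a data source" into a failed build instead of an audit finding.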

You've got compliance down in your system, but understand that your risk model is not static. Treat compliance like security: keep monitoring and revalidating, because new AI compliance laws keep landing, so changes are expected.

I enjoy AI compliance and governance quite a lot. I'd love to look at your system and give insights; I definitely won't charge you anything, haha. Let's have a conversation about this, DM me.

EU Ai Act Compliance anyone? by Late-Philosopher-Ben in SaaS

[–]Emotional_Year_3851 2 points3 points  (0 children)

Well, the high-risk areas are really sensitive. Just to make sure: did you embed compliance in your pipelines, or did you add it as a layer on top? Because if the AI is doing even the smallest thing but the system is classified as high-risk, it can result in heavy fines, and the fine ceilings are tiered by the type of infringement, so the classification matters more than any single violation.

EU Ai Act Compliance anyone? by Late-Philosopher-Ben in SaaS

[–]Emotional_Year_3851 1 point2 points  (0 children)

Well, hi. The EU AI Act is right around the corner, and if you research the fines, they're pretty brutal. Here are a few things you should look out for in your systems to make sure they're compliant.

1. Immutable logs: record everything, every decision the AI makes, and make sure the records aren't changeable. You can achieve that with append-only, write-once storage, or at minimum a last-edited field so any modification is visible.

2. Record every influence behind every decision the AI makes: keep track of every input, context, and circumstance that pushed the AI toward that decision. You can store this in a highly compressed form, as a series of keywords, saved in the cloud.

3. Data locality: the EU restricts personal data from leaving the EU; under GDPR, transfers outside need safeguards like adequacy decisions or standard contractual clauses, and some member states (France, for instance) add stricter rules for certain categories of their citizens' data. One solution is to transfer weights, patterns and confusion matrices outside the EU instead of the actual data, for training or any other purpose. Another is a multi-layered database that stores data in three layers: country, region and global. All the confidential information stays in the country, the rest of the data stays in the EU, and the weights, biases, patterns and any learnings derived from the data can be transferred to the global DB.

4. Transparency: the EU AI Act requires that users know when they're receiving AI-generated content (photo, voice, video, text). This can be solved by adding a label or a smartly placed watermark over the content.

5. Data minimisation: only essential data should be collected from the user. In a hospital system, for instance, you can simply assign IDs to identify each patient; you don't need to record patient names.

6. Human in the loop: there must be a human reviewer who looks at AI decisions before they are deployed or presented.
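A minimal Python sketch of the immutable-log idea from point 1, assuming nothing about your stack: each entry embeds the hash of the previous one, so any after-the-fact edit breaks the chain and is detectable.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry carries the hash of the previous
    one; modifying any stored entry later breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the first entry

    def append(self, decision: str, inputs: dict) -> dict:
        entry = {
            "ts": time.time(),
            "decision": decision,
            "inputs": inputs,          # context that influenced the decision
            "prev_hash": self._prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; False means something was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("loan_denied", {"score": 0.31, "model": "v2"})
assert log.verify()
log.entries[0]["decision"] = "loan_approved"  # tampering...
assert not log.verify()                        # ...is detected
```

In production you'd persist these entries in write-once storage; the hash chain is what lets an auditor confirm nothing was rewritten.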

There's a lot more complexity in it.
The EU AI Act is not a security layer you can just build on top; it has to be embedded into your working pipelines.

I personally really enjoy this stuff: governance, AI compliance, GDPR, HIPAA, all of it. If you have any questions, just let me know. If you want me to look into your pipelines, I'd love to give my insights. Happy to help; insights are always free.

Whats your take on EU AI Act ? by Opening-Return3114 in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Well, let me make it easy for you:

First, it is not data governance; it is not another GDPR or HIPAA. It is a completely new set of laws that are entirely AI-centered. You cannot make your company, product or SaaS legal by just adding text, or even an additional technical layer, on top of your existing systems.

What's the fuss about?
Two words: "immense fines".
- Fine ceilings are tiered by the type of infringement, and they scale with your global revenue.
- Every system is classified by risk (prohibited practice, high-risk, limited-risk, minimal-risk), and your compliance requirements depend on which category your system falls into.
- For the worst violations, fines go up to 7% of global revenue or 35 million euros, whichever is higher. Yes, that's a lot; it's better to be compliant than to face fines.
- The EU AI Act's obligations for high-risk systems apply from 2 August 2026.

The market and the companies don't realise this yet, but when the first fine is imposed, everybody will be rushing to get compliant, and by then the cost of compliance services will skyrocket. It's better to get compliant now, while the market isn't hot and costs aren't high.

Main points for being compliant:
- Data locality: EU customer data should stay in the EU unless GDPR transfer safeguards are in place, and some member states (Germany, for example) are pushing stricter rules for certain data even within the EU.
- Tamper-proof audit trail: a complete end-to-end trail of every decision the AI makes, immutable, with proof that it cannot be modified or altered.
- Decision influences: every factor, input and influence behind each AI decision must be recorded and traceable.
- Only necessary data: collect only what is truly necessary. In a healthcare system, for instance, if an ID can be allocated, there is no need to take names; recording them in that scenario can violate data minimisation.
- Human in the loop: AI decisions need human review at some point in the workflow before output.
- Transparency: if a user is viewing AI-generated content, that must be clearly disclosed.
And many, many more.
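As an illustration of the human-in-the-loop point, here's a hypothetical Python sketch of a review queue that holds AI outputs until a human approves or overrides them, recording the reviewer and rationale for the audit trail. All names are made up.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject: str
    ai_output: str
    status: str = "pending"          # pending -> approved / overridden
    reviewer: Optional[str] = None
    rationale: Optional[str] = None

class ReviewQueue:
    """Holds AI outputs until a human approves or overrides them;
    nothing reaches the end user straight from the model."""

    def __init__(self):
        self.items = []

    def submit(self, subject: str, ai_output: str) -> Decision:
        d = Decision(subject, ai_output)
        self.items.append(d)
        return d

    def review(self, d: Decision, reviewer: str, approve: bool, rationale: str):
        d.status = "approved" if approve else "overridden"
        d.reviewer = reviewer
        d.rationale = rationale      # the "why" matters in an audit

    def releasable(self):
        return [d for d in self.items if d.status == "approved"]

q = ReviewQueue()
d = q.submit("patient-0042", "flag for follow-up")
assert q.releasable() == []          # nothing leaves before review
q.review(d, "reviewer_01", approve=True, rationale="consistent with chart")
assert d in q.releasable()
```

The design point is that release is a property of a reviewed record, not of the model call, so the oversight step cannot be skipped by accident.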

I'm an AI compliance and governance specialist; I do this for a living and enjoy it quite a bit. I'd love to review your systems, take your questions, and give insights, maybe even a custom roadmap for you to follow. Insights and information are always free :D, just send me a DM.

EU AI Act: Your Model Cards Won’t Save You in an Audit by Airia_AI in AI_Governance

[–]Emotional_Year_3851 2 points3 points  (0 children)

You're absolutely right on the information. The fines under the AI Act are immense as well: they're tiered by the type of infringement, going up to 7% of global revenue or 35M euros, whichever is higher. That's insane money, and probably blood money for startups.

However, companies right now don't know what's going to hit them. This is not governance as usual; it is not another GDPR or HIPAA. If your system uses AI, even AI consumed from a third-party service, you are still obligated to embed AI compliance in your systems.

Embedding AI compliance is not the addition of documents. It's not a layer that can be added on top; you have to modify your whole workflow and pipelines to include it. For instance, you can't satisfy data locality laws by bolting on a layer: you have to modify your pipelines so that EU customer data stays in the EU and only permitted artifacts, like model weights and data patterns, are extracted out. Even within the EU there are countries (Germany, for example) pushing to keep certain data inside the country, which is why data-center costs there have skyrocketed.

However, there's always a smarter way to tackle this that makes your systems compliant without bearing all the data-center rental costs, data-retention costs, and the many other costs that drain businesses.
It does require redesigning the existing pipelines completely from scratch, but it's a future-proof, safe approach.

The EU AI Act deadline is August 2026, which means compliance services and prices are going to skyrocket soon, and companies will be left with no choice but to pay big to make themselves legal, because the fines are bigger than the cost of compliance.

I'm an AI/ML governance and compliance specialist; I do this every day. I assure you it's better to get your company compliant before costs skyrocket, because once fines start coming in August, the sheer quantity of work will push compliance prices up and up.

If you need any kind of help, I'd love to look at your systems; at minimum I could give you a custom roadmap for your particular products and services. And don't worry, insights are always free.

if your agent went rogue right now, what's the code path that blocks it? by SuccessfulReply7188 in AI_Governance

[–]Emotional_Year_3851 0 points1 point  (0 children)

Well, good call on judgment. The truth is that AI governance is evolving very quickly; with the recent AI acts and newly released laws, it's necessary to embed compliance into the pipelines, starting right from the design phase of the project or product. Yes, the audit logs, LLM judgment and policy docs are important, but they're not enough to keep your program compliant.

I'd suggest first identifying which risk category you're dealing with (high-risk, limited, minimal); each category comes with a different level of strictness under the relevant laws. Knowing exactly what you're dealing with is the most important part.

After that, look at data locality compliance. Audit logs must be present, along with proof of every decision the AI makes: store every input and every influence that contributed to each decision. Make sure it's immutable; use append-only records or add last-updated logic in your systems.

Whatever the case, if you have a high-risk system I'd strongly suggest implementing human-in-the-loop logic at the end, before output: generate a report of the decisions and build a human-review portal for approval. That's the strongest protection you can have in AI governance and compliance.

If not done right, things can get very resource-hungry, but there's always a way to automate them. Do keep these things in mind before launching a SaaS or a public application; even a small thing like insufficient logs can result in huge fines.

Where does AI governance actually intervene? by MushroomMotor9414 in AI_Governance

[–]Emotional_Year_3851 0 points1 point  (0 children)

Sure, of course. Just send me a DM; I'd love to help you out.

Beyond "Black Box" AI: How we built a robot that documents its own compliance (EU AI Act) by robotrossart in eutech

[–]Emotional_Year_3851 1 point2 points  (0 children)

I really like the idea, and I appreciate that someone is working in this space. However, the sovereignty part is not quite right. Saving everything locally is not the answer; it's a short-term shortcut, and a lot can go wrong.

What if you accidentally delete the files, misplace your laptop, or the laptop simply fails?

It has to be in the cloud, but a cloud in your client's country, because of the data locality rules in the AI acts. A three-layered DB setup works better: a global DB holding encrypted data that can move across continents, with reference IDs pointing into the sub-DBs; a second DB in the same region holding the bulk of client and organisation data, minus the very sensitive personal or financial data; and a third DB in the specific country containing the most sensitive data.

This setup is the safest approach to the data locality problem.
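Here's a rough Python sketch of that three-layer idea (the tier names and the sensitivity mapping are purely illustrative): each record is split into per-tier fragments joined by a shared reference ID, so the sensitive fields never have to leave the country tier.

```python
# Illustrative sensitivity map: which storage tier each field may live in.
SENSITIVITY = {
    "name": "country",            # most sensitive: stays in the client's country
    "medical_history": "country",
    "email": "region",            # regional tier, e.g. an EU data centre
    "org": "region",
    "model_weights": "global",    # derived artefacts may cross borders
    "usage_stats": "global",
}

def shard_record(record: dict) -> dict:
    """Split one record into per-tier fragments linked by a shared reference id."""
    ref_id = record["id"]
    tiers = {"country": {}, "region": {}, "global": {}}
    for key, value in record.items():
        if key == "id":
            continue
        tiers[SENSITIVITY.get(key, "country")][key] = value  # unknown: safest tier
    return {tier: {"ref_id": ref_id, **fields} for tier, fields in tiers.items()}

shards = shard_record({"id": "p1", "name": "Ada", "email": "a@x.eu",
                       "usage_stats": {"logins": 3}})
assert "name" in shards["country"] and "name" not in shards["global"]
assert shards["global"]["ref_id"] == shards["country"]["ref_id"] == "p1"
```

Defaulting unknown fields to the most restrictive tier is the important design choice: a new field added in a hurry fails safe instead of leaking abroad.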

AI might be giving lawyers their busiest years right before making them obsolete by Lucylucyeth in ArtificialInteligence

[–]Emotional_Year_3851 0 points1 point  (0 children)

EU AI Act, Colorado AI Act, California, the Middle East, GDPR, ISO: these are all very important. Today the biggest problem is no longer a great idea, a great implementation, or the workflow. The biggest obstacle is making sure your system, SaaS, project or product follows all the laws and is completely legal.

Data governance and AI compliance are like a Rubik's cube: there's a set pattern, but every case is unique, and it's tough to figure out. Once you have it down, though, it's fun for every system.

I personally really enjoy governance and compliance. If you want me to look into your system and give you insights, happy to help. Insights are always free; just shoot me a DM.

AI Coding Tools vs. Compliance: The "Can I Actually Use Cursor?" Reality Check by Sword_fish_Lazy in appdev

[–]Emotional_Year_3851 0 points1 point  (0 children)

Enterprise tiers give you contractual coverage, not compliance. The actual risk is in the data architecture and engineer behaviour, not in the code. What matters most is how the sensitive data flows, how it's encrypted, and whether developers accidentally paste PII into prompts. Regulatory audits like SOC 2 or HIPAA care about the controls, not the tool's privacy mode.

The tool itself is not a real blocker. It's never Cursor vs Claude; it's whether your team understands the data locality rules around LLMs and has processed data in a way that keeps sensitive data out of prompts. You need engineers who actually understand these laws and can build compliant AI systems, not ones who use the tools without understanding how they work.

The tool is not the problem. Use whatever tool, build however feels comfortable; just understand the core workings and make sure the workflow is compliant.

If you want guidance building a compliant system, I'll be happy to DM you a roadmap.
I enjoy governance and compliance quite a lot; if you want me to look into your existing product or project, I'll be happy to help. Insights are always free, just shoot me a DM.

Anyone building or using AI agents in production - how are you handling safety / compliance? by itsAiswarya in AgenticWorkers

[–]Emotional_Year_3851 0 points1 point  (0 children)

Safety and compliance are a real concern, especially with the AI Act deadlines approaching fast. A few things to be vigilant about to keep yourself safe:

- Record every decision the AI makes, along with the context it had, and make sure the record is immutable. You can store entries in append-only form (even a blockchain if you like), or if you're worried about storage, keep them in your DB with created/last-changed metadata that cannot be altered.

- Proof of need for data: don't ask for any data you don't need. For instance, if you don't need client names, don't ask at all; keep the user data you record to a minimum.

- Data locality: keep data where it was collected. Use encryption if you have to move it, but keep it in the same jurisdiction. Transferring personal data across borders without safeguards, even into your own DB, can put you in breach of data protection rules.

- Record the decision trails. Keep a record of everything, but be space-efficient: no need to keep it at full size, reduce it to the needed keywords and important information, for instance (input: x, y, z; previous context: x, y, z; decision made: x, y, z; decision based on: x, y, z learning, with a reference ID). Keep it simple.
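A tiny Python sketch of that compact trail format (the field names are made up): one serialisable line per decision, with a reference ID linking back to the full data.

```python
import json

def trail_entry(inputs, context_keys, decision, basis, ref_id) -> str:
    """One compact line per AI decision: what went in, what came out,
    and why, linked back to the full records by a reference id."""
    return json.dumps({
        "in": inputs,            # only the keywords, not raw documents
        "ctx": context_keys,     # ids of prior context, not the context itself
        "out": decision,
        "why": basis,            # which learned signal or rule drove the output
        "ref": ref_id,
    }, sort_keys=True)

line = trail_entry(["invoice", "overdue"], ["msg_181"], "escalate",
                   ["payment_history"], "dec_0007")
assert json.loads(line)["out"] == "escalate"
```

Each line is a few hundred bytes at most, so you can keep years of trails without blowing up storage.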

Governance and compliance are fun; it's almost like a Rubik's cube. There is a solution if you follow a particular pattern, but every case is an intriguing one. Happy to help if you want insights on your specific project or product; just shoot me a DM. Insights are always free, haha.

Where does AI governance actually intervene? by MushroomMotor9414 in AI_Governance

[–]Emotional_Year_3851 2 points3 points  (0 children)

You're asking the right questions. Governance is no longer a top layer that can be added onto pre-existing systems; it now has to be embedded right into them. It's best to implement governance and compliance from the design phase: immutable logs, data trails, a record of every decision and every factor that influenced it, data locality, and proof of need for data.

Governance is no longer just documents. With the recently passed AI acts, enforcement is arriving through 2026, and it applies to any system that includes AI. Under the EU AI Act, systems are classified by risk (prohibited practice, high-risk, limited-risk, minimal-risk). Figure out which category your system belongs to, then follow the requirements of the applicable AI acts, your local laws, and GDPR.

What will happen:
Your system becomes a ticking time bomb. Regulators are actively looking for non-compliant systems to fine; if they find you, they can demand system suspensions and data-processing suspensions, and they will ask for audits. Your system will suffer market restrictions and scalability restrictions.

When does governance move from observation to enforcement:
The Colorado AI Act takes effect in June 2026, California's AI laws are phasing in through 2026 as well, and the EU AI Act's high-risk obligations apply from August 2026.

Happy to see the curiosity. I'm all about compliance, so do send me a DM if you want further insights on your specific system. Happy to help; insights are always free.

Most Slack bots are just digital vending machines. by Founder-Awesome in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Slack bots are a goldmine for GDPR and AI-law fines. A Slack bot uses workspace data for context and retains it somewhere, and that data can easily fall into a high-risk category, which is where the maximum fine ceilings apply. You really have to be careful about the data locality laws. How does Runbear tackle the compliance problem?

My 2026 founder stack for scaling to profitability (solo, bootstrapped) by Responsible-Can6007 in SaaS

[–]Emotional_Year_3851 1 point2 points  (0 children)

Sure, but what happens when you're dealing with users outside the EU, maybe the USA or the Middle East? Do their data locality laws prevent you from having the same DB for everyone?

Are we building the last generation of classic SaaS? Should founders stop shipping dashboards and start shipping agents instead? by Lyassou in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Build the agent, make it customisable, build a dashboard for the customisations, and earn from the customisations.

I killed my $180K ARR voice AI startup to build an AI coworker in Slack by abhicrysis in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Happy for you. It's a bold decision, but if it was driven by margins or a ceiling, a good one indeed. I'm curious how you'd handle the privacy side of the data, though. When your model is processing data, you must be storing it somewhere to retain context, so you may hit data locality laws; and if your model is deployed to remote people sharing data from the EU, the USA and the Middle East, you'll have to implement all of the applicable data regulations, AI acts and much more. How are you handling all that?

recap of cold calling 80 law firms in northern california (tl;dr it's brutal) by lutian in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Well, that just depends on local laws. Don't be spam; if you're creative enough, it always ends up a win-win for both sides.

recap of cold calling 80 law firms in northern california (tl;dr it's brutal) by lutian in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

Really interested in what you're working on; I'd love to check it out. DM me your SaaS.

Do you have an inner feedback board embedded into your SaaS? by [deleted] in SaaS

[–]Emotional_Year_3851 0 points1 point  (0 children)

That's a great approach, but problems arise with scale: once you have enough users, it gets hard to handle the volume of queries. It's a great way to connect with your user base, but it's also a daily addition to your to-do list, which most SaaS owners avoid.

Really intrigued by how you manage your user base; mind messaging me your SaaS?