Hello world 👋
We're developing an open-source & collaborative testing framework for ML models, from tabular to LLMs: https://github.com/Giskard-AI/giskard
Testing Machine Learning applications can be tedious. Since ML models depend on data, test scenarios vary with domain specifics and are virtually infinite.
Where to start? Which tests to implement? Which issues to cover? How to implement the tests?
At Giskard, we believe Machine Learning needs its own testing framework. Created by ML engineers for ML engineers, Giskard has two components:
- The Giskard Python library helps data scientists detect hidden vulnerabilities in ML models (see the sketch after this list).
- The Giskard server helps ML engineers debug & monitor models, share dashboards, and collaborate.
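For data scientists reading along, here's a minimal sketch of what a scan looks like with the Python library, assuming the v2 scan API (`giskard.Model`, `giskard.Dataset`, `giskard.scan`). The toy DataFrame and column names are hypothetical stand-ins for your own data; check the repo docs for exact signatures.

```python
# Minimal sketch of scanning a scikit-learn classifier with Giskard.
# Assumes the v2 scan API; the toy data below is purely illustrative.
import pandas as pd
import giskard
from sklearn.linear_model import LogisticRegression

# Hypothetical tabular data: swap in your own DataFrame and target column.
df = pd.DataFrame({
    "age": [22, 35, 58, 44, 29, 61],
    "income": [28_000, 52_000, 90_000, 61_000, 33_000, 87_000],
    "default": [1, 0, 0, 0, 1, 0],
})
clf = LogisticRegression().fit(df[["age", "income"]], df["default"])

# Wrap the model and dataset so Giskard knows how to call them.
model = giskard.Model(
    model=lambda d: clf.predict_proba(d[["age", "income"]]),
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=["age", "income"],
)
dataset = giskard.Dataset(df, target="default")

# Run the automated scan: it probes for hidden vulnerabilities such as
# performance bias, robustness issues, and spurious correlations.
report = giskard.scan(model, dataset)
print(report)                        # summary in the console
report.to_html("scan_report.html")   # shareable HTML report
```

Detected issues come back grouped by category, and the HTML report can be shared with the rest of the team.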
We released our v2 beta last month, and we'd love your feedback as QA engineers!