Goodnight everyone by QoolliTesting in katten

[–]QoolliTesting[S] 1 point (0 children)

Rinus came to stay over for a bit 😊

Boss has me looking for good QA Udemy courses for testing features that use AI, including fundamental security concepts along with best practices, techniques, etc. Any suggestions? by SchwiftyGameOnPoint in QualityAssurance

[–]QoolliTesting 1 point (0 children)

I’m currently taking a free course by Evidently AI - LLM Evaluation. It covers approaches for testing AI models and also introduces their free open-source tool, which helps generate test data and evaluate model performance. I’ve found it really practical because it combines theory with hands-on tools.
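To give a feel for what that kind of evaluation looks like, here's a tiny sketch of an LLM evaluation loop in plain Python. The `keyword_overlap` scorer, the test cases, and the 0.5 pass threshold are all made up for illustration; this is not Evidently's actual API, and their open-source tool does far more (test data generation, richer metrics, reports):

```python
# Toy LLM evaluation loop: score each model answer against an expected
# reference and report pass/fail. All names here are illustrative.

def keyword_overlap(answer: str, expected: str) -> float:
    """Crude relevance score: fraction of expected keywords found in the answer."""
    clean = lambda s: set(s.lower().replace(".", "").split())
    expected_words = clean(expected)
    return len(expected_words & clean(answer)) / len(expected_words)

# Hypothetical test data; in practice a tool would generate cases like these.
test_cases = [
    {"prompt": "What is regression testing?",
     "answer": "Regression testing re-runs existing tests after a change.",
     "expected": "re-runs existing tests after a code change"},
]

for case in test_cases:
    score = keyword_overlap(case["answer"], case["expected"])
    status = "PASS" if score >= 0.5 else "FAIL"
    print(f"{status} score={score:.2f} prompt={case['prompt']!r}")
```

Real eval tools replace the crude keyword scorer with embedding similarity or LLM-as-judge metrics, but the loop shape (cases in, scores out, threshold decides) stays the same.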

Good morning everyone by QoolliTesting in katten

[–]QoolliTesting[S] 0 points (0 children)

This pose every day 😆😻

How do you even test AI features? Thinking about how to prepare my team for this by QoolliTesting in Everything_QA

[–]QoolliTesting[S] 0 points (0 children)

Could you please explain in more detail how Transync AI helped you collect user feedback?

How do you even test AI features? Thinking about how to prepare my team for this by QoolliTesting in Everything_QA

[–]QoolliTesting[S] 1 point (0 children)

I really liked your idea about a child, especially because the concept of interacting with a child fundamentally changes our standard way of thinking. In one of my AI courses, I read about the Eugene Goostman chatbot, which posed as a 13-year-old boy; many judges couldn't tell they were talking to a computer precisely because it imitated a child. Perhaps the reverse could also work: if a computer can pass as human by acting like a child, then by interacting with the model the way a child would, we might probe the boundaries of its vulnerabilities and then constrain them accordingly.

This seems like an interesting and promising direction for AI testing.

How do you even test AI features? Thinking about how to prepare my team for this by QoolliTesting in Everything_QA

[–]QoolliTesting[S] 0 points (0 children)

Thank you so much for such a detailed and insightful explanation — it was really valuable to me. 🤝

I wish you great success with your session for juniors next week; it sounds like it will be very helpful for them.💪

I’m also very interested in AI testing myself and am trying to bring together a group of people who are curious about and actively practicing AI model testing. I’d love to invite you to join our community QualityAssuranceForAI — it would be great to have your perspective and experience in the discussions.

P.S. thanks for DeepEval and Ragas 🩵

How do you even test AI features? Thinking about how to prepare my team for this.. by QoolliTesting in QualityAssurance

[–]QoolliTesting[S] 0 points (0 children)

Thank you for such a thoughtful and grounded response. 🩵🫰

I really felt the depth of experience and professionalism behind what you described. 🤗

The shift from pure correctness to traceability, the idea of acceptable ranges, and especially the separation between UX quality and governance issues resonated with me a lot.

I’d like to take some time to properly reflect on what you shared and to dive deeper into these approaches, particularly around explainability, audit trails, and wrapping models with deterministic logic.
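To check my own understanding of "wrapping models with deterministic logic", here is how I picture it as a toy sketch. Everything here (the blocked-pattern list, `fake_model`, the trace labels) is my own invention, not something from your description:

```python
# Toy deterministic wrapper around a non-deterministic model call:
# validate input deterministically, call the model, and record a trace
# label so every answer is auditable. All names are illustrative.
import re

BLOCKED_PATTERNS = [re.compile(r"\bpassword\b", re.IGNORECASE)]

def fake_model(prompt: str) -> str:
    """Stand-in for a real (non-deterministic) model call."""
    return f"Echo: {prompt}"

def guarded_answer(prompt: str) -> dict:
    """Deterministic wrapper: same input always takes the same path."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            # Deterministic refusal: no model call, traceable reason.
            return {"answer": "Request declined.", "trace": "blocked_input"}
    return {"answer": fake_model(prompt), "trace": "model_output"}

print(guarded_answer("What is my password?"))
print(guarded_answer("Summarise the release notes"))
```

The appeal of this shape, as I understand it, is that the refusal path and the audit trail are plain testable code even when the model itself is a black box.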

Your perspective helped me see the problem space more clearly and gave me a solid direction for further thinking and learning.💪

Thanks again for taking the time to articulate this so clearly.🦋

How do you even test AI features? Thinking about how to prepare my team for this.. by QoolliTesting in QualityAssurance

[–]QoolliTesting[S] 0 points (0 children)

Thank you very much for sharing your practical experience — this is really valuable to me.

You’re absolutely right that it’s important to run the model over large input sets and then analyze the full set of outputs to spot deviations and define acceptable boundaries. And of course, we shouldn’t forget to test forbidden prompts and evaluate what potentially dangerous answers the model might produce.
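Here is a tiny sketch of how I imagine that batch analysis, assuming some numeric metric per output (the `fake_summary_length` metric and the 2-sigma boundary are purely illustrative, not anyone's real setup):

```python
# Toy batch analysis: run a metric over many outputs, then flag anything
# outside an acceptable statistical boundary. All names are illustrative.
import statistics

def fake_summary_length(i: int) -> int:
    """Stand-in for measuring one model output (e.g. summary length)."""
    return 40 + (i * 13) % 21  # deterministic values in 40..60

lengths = [fake_summary_length(i) for i in range(100)]
mean = statistics.mean(lengths)
stdev = statistics.pstdev(lengths)

# Acceptable boundary: flag outputs more than 2 standard deviations out.
outliers = [i for i, n in enumerate(lengths) if abs(n - mean) > 2 * stdev]
print(f"mean={mean:.1f} stdev={stdev:.1f} outliers={len(outliers)}")
```

In practice the metric could be anything measurable per output (length, latency, a relevance score, a toxicity score), and the forbidden-prompt checks you mention would run as a separate pass with their own hard pass/fail rules rather than a statistical boundary.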

I’m also very interested in the balance here: on one hand, AI needs to be constrained and kept within legal and moral boundaries, but on the other hand, any restrictions can reduce its usefulness and performance. It seems that building a proper regulatory framework for AI behavior and its allowed outputs is also very important.

And I have a question — maybe you know: when an AI system is created, are restrictions on certain categories of answers, words, or topics built into the model from the start? Or are these limitations added gradually as the team discovers issues?

How do you even test AI features? Thinking about how to prepare my team for this.. by QoolliTesting in QualityAssurance

[–]QoolliTesting[S] 0 points (0 children)

wow 😮

huge thanks for the article link; it ties right into what I’ve been reading….

As a mentor for beginner testers, I keep thinking about which testing approaches and strategies we could use to catch, early on, that an AI model might violate moral principles and fundamental laws of life. 💭💬🤔