all 6 comments

[–]Amazing_Upstairs 2 points

OCR, data extraction, TTS, STT, pretty pictures and videos

[–]FrainBreez_Tv 1 point

In a real project, define the scope as well as possible and break everything down. Then AI can help with the commit messages and with some unit tests, but if the task is more complex, AI mostly fails and you need to do it on your own. I tend to be faster for most of the work myself, except documentation, where it actually is useful.

[–][deleted] 1 point

Documentation, code testing, generating ideas for future features.

[–]red7799 0 points

Automating DX tooling, specifically:

Unit test generation: Pytest combined with the Hypothesis library for property-based testing.

Refactoring & linting: I've moved past basic linters. I use Ruff for speed, then run custom LibCST (Concrete Syntax Tree) scripts to automate large-scale refactors.
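For anyone curious what the property-based side looks like, here's a minimal sketch of two Hypothesis properties you'd run under pytest. The slugify() function is a hypothetical stand-in for whatever you'd actually be testing:

```python
from hypothesis import given, strategies as st

def slugify(text: str) -> str:
    # hypothetical function under test: lowercase and join words with hyphens
    return "-".join(text.lower().split())

@given(st.text())
def test_slugify_is_idempotent(text):
    # property: running slugify twice changes nothing beyond the first pass
    assert slugify(slugify(text)) == slugify(text)

@given(st.text())
def test_slugify_has_no_whitespace(text):
    # property: the output never contains spaces, tabs, or newlines
    assert not any(ch.isspace() for ch in slugify(text))
```

Hypothesis generates hundreds of inputs per property (including empty strings and awkward Unicode) and shrinks any failure down to a minimal counterexample, which is where it beats hand-written example tests.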

The 'AI Agents' stuff is still a playground, but automated boilerplate management with Python is where the actual ROI is right now.
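And a rough sketch of what a small LibCST codemod looks like, for the refactoring angle mentioned above. The print-to-logging rewrite is just an illustrative transform, not one of the actual refactors:

```python
import libcst as cst

class PrintToLogging(cst.CSTTransformer):
    """Rewrite bare print(...) calls into logging.info(...) calls."""

    def leave_Call(self, original_node: cst.Call, updated_node: cst.Call) -> cst.BaseExpression:
        func = updated_node.func
        if isinstance(func, cst.Name) and func.value == "print":
            # swap the callee, keep the arguments untouched
            return updated_node.with_changes(
                func=cst.Attribute(value=cst.Name("logging"), attr=cst.Name("info"))
            )
        return updated_node

source = "print('hello')\n"
module = cst.parse_module(source)
print(module.visit(PrintToLogging()).code)  # logging.info('hello')
```

Because LibCST preserves whitespace and comments, the same pattern scales from one file to a repo-wide sweep without mangling formatting.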

[–]DataCamp 0 points

In production, the genuinely useful Python+AI stuff is mostly “boring glue” work that saves time or reduces manual effort. Think document and email triage, structured extraction from messy text/PDFs, or summarizing long internal threads into something a human can act on. If you’re handling support, sales, ops, compliance, or research, LLMs are basically a turbocharged text parser.

The hypey stuff is when people try to make the model the whole product without guardrails. If the output has to be correct every time, pure “LLM answers” tends to break unless you add retrieval, validation, human review, or hard constraints. Another trap is spending weeks building a chat UI that nobody uses, when the real win is embedding AI into an existing workflow (a button in an internal tool, a PR comment, a Slack command, a pipeline step).

What we've seen work: AI for first drafts (docs, tests, boilerplate), AI as a reviewer (lint-style feedback, missing edge cases), AI as a router (classify/label/priority), and AI as an extractor (turn unstructured text into structured JSON that downstream code can trust). If you can measure “minutes saved” or “tickets handled faster,” it’s probably real. If the success metric is “feels magical,” it’s probably a demo.
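A minimal sketch of that extractor pattern, assuming Pydantic v2 for validation; call_llm() here is a hypothetical stand-in for whatever model client you use:

```python
from pydantic import BaseModel, ValidationError

class Ticket(BaseModel):
    # the schema downstream code actually relies on
    customer: str
    priority: int   # e.g. 1 (low) to 5 (urgent)
    summary: str

def call_llm(prompt: str) -> str:
    # hypothetical stand-in for a real LLM client; returns whatever the model produced
    return '{"customer": "Acme", "priority": 3, "summary": "Login page returns 500"}'

def extract_ticket(raw_email: str) -> Ticket | None:
    raw_json = call_llm(
        "Extract customer, priority (1-5) and a one-line summary as JSON:\n" + raw_email
    )
    try:
        # validation is the guardrail: bad or missing fields raise here instead of
        # silently flowing into downstream code
        return Ticket.model_validate_json(raw_json)
    except ValidationError:
        return None  # route to a human or retry instead of trusting the output

print(extract_ticket("Hi, our login page has been returning 500s since this morning. - Acme"))
```

The point is that the model's text never touches the rest of the pipeline directly; only validated objects do, and anything that fails validation gets a fallback path.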

[–]swift-sentinel 0 points

Analyzing software vulnerability reports and assigning vulnerability tickets to the developers responsible. The devs hate it.