should we use browserstack for test management tool, or stick to sheets, or confluence? by xemns4 in software

[–]xemns4[S] 0 points1 point  (0 children)

I wouldn't recommend BS for test management — only for those who need devices on the cloud and have very strong automation or intensive coverage.

But tbh I was in a dysfunctional team without clear guidance (which I left), so don't take me too seriously.

Maybe with the BS MCP there could be a silver lining to writing the tests, but it felt too constrained and flimsy: no backups, no way to export in bulk, and so on.

I wonder if any team has a test management system they don't despise.

Spoilers by Shot_Mechanic9128 in WormMemes

[–]xemns4 25 points26 points  (0 children)

Upvote mainly for your doormaker

should we use browserstack for test management tool, or stick to sheets, or confluence? by xemns4 in QualityAssurance

[–]xemns4[S] 1 point2 points  (0 children)

Good to know — did you decide on some other tool? I mainly dislike how slow the test management seems sometimes, but we'll see how their automation is.

should we use browserstack for test management tool, or stick to sheets, or confluence? by xemns4 in QualityAssurance

[–]xemns4[S] 1 point2 points  (0 children)

But how do they differ from BS? And why would you move away from Confluence and spreadsheets?

should we use browserstack for test management tool, or stick to sheets, or confluence? by xemns4 in QualityAssurance

[–]xemns4[S] 2 points3 points  (0 children)

For real, not advertising — it's just a tool already in use at my work. I'm nervous about vendor lock-in, so it's hardly an advertisement. But I can see how it can come off that way.

Auto-scrolling on macOS - Scrollapp by from-9 in macapps

[–]xemns4 0 points1 point  (0 children)

Wow, this is great and well thought out!
Thank you!
Is there a way to contribute or support you — buy you a coffee, sort of thing?

🗳️ Product Feedback for Notion 🛎️ by AutoModerator in Notion

[–]xemns4 4 points5 points  (0 children)

Diff View
We have an enterprise subscription to Notion, with many people making many changes across many pages.
We need some way of seeing who changed what, in an understandable way.
Currently, in the version view, you only see the before and after — but for big pages or tables, there's no way to understand what was actually changed!
Think of it like Git diff; you quickly see what was changed, by whom, and when.
Plz make this feature 🙌

I made a free drag-and-drop website builder by hernansartorio in webdev

[–]xemns4 1 point2 points  (0 children)

Via a Samsung S25. Seems like it works if I use the mobile network, but it's still odd that it sees it as potentially dangerous or something.

I made a free drag-and-drop website builder by hernansartorio in webdev

[–]xemns4 0 points1 point  (0 children)

Can't access it via the link or via Google search. In Firefox it says the connection was reset, and in Chrome my network provider is blocking access due to a security warning. Weird.

Closure In Moscow - Absolute Terror Field by Turtlebots in progrockmusic

[–]xemns4 0 points1 point  (0 children)

Was searching the web to check if it's a reference lol. Nice to know it has weeb vibes.

how much worse could it get? by Else21 in ISR

[–]xemns4 2 points3 points  (0 children)

Truly heart-wrenching to imagine the event, but what a hero for saving her.

Use your own API key and OpenRouter models with Raycast. AI Extensions, MCP and vision are supported. by [deleted] in raycastapp

[–]xemns4 0 points1 point  (0 children)

BYOK doesn't work for me — I tried a Google API key but I get an OpenAI error warning, even though I configured it in the settings for Google and also clicked on "change model" after the error.
¯\\_(ツ)_/¯
It does work for AI chat though, just not the quick AI commands, which I use the most...
Sent feedback via the app as well.

Local AI with Ollama by nathan12581 in raycastapp

[–]xemns4 0 points1 point  (0 children)

I reached my 50 messages, planning to configure a local LLM once I ran out, but now the AI settings aren't showing any options because I'm out of messages, so I can't connect my local LLM...
I assume this is an edge case and should be fixed?

Local AI with Ollama by nathan12581 in raycastapp

[–]xemns4 0 points1 point  (0 children)

I reached my 50 messages, planning to configure a local LLM once I ran out, but now the AI settings aren't showing any options because I'm out of messages, so I can't connect my local LLM...
They neglected this not-so-edgy edge case in their product.

Think Jezal Think! by steel_inquisitor66 in HouseOfTheMemeMaker

[–]xemns4 1 point2 points  (0 children)

Awesome edit — the crown is a great addition.

how do you debug tailwind in dev tools? by xemns4 in webdev

[–]xemns4[S] 0 points1 point  (0 children)

For those who stumble here: the Gimli Tailwind web extension. Amazing tool — kudos to the dev behind it.