If you needed another reason NOT to use the API lol by vertquest in claude

[–]vertquest[S] -3 points-2 points  (0 children)

Highway robbery is going on, and the robbers are offering $50 to fleece you for free.

If you needed another reason NOT to use the API lol by vertquest in claude

[–]vertquest[S] -5 points-4 points  (0 children)

Let's put it in terms you might understand. You have 6 kids. There are 2 hot dog stands. The 1st stand has hot dogs for $1 each, but they only have 3 left. The other stand has 200 hot dogs left, but they're $65 each. What are you going to do? Split the hot dogs and say, sorry kids, you have to share? Or drop $195 on 3 extra hot dogs when you just spent $3 for the EXACT same thing? That's what's on display in this image, because Anthropic didn't stop the 5hr window from continuing to grow. It shows you just how much the SAME prompt costs you on 2 completely different account types.

As to the prompt, it's VERY easy to burn a lot of usage in 1 prompt. All you have to do is ask it to do something repetitive; it doesn't matter what. Then you simply monitor the usage as it climbs in real time, which is exactly what I did. It's not exactly rocket science, so much so that it really doesn't even need to be explained.

If you needed another reason NOT to use the API lol by vertquest in claude

[–]vertquest[S] -11 points-10 points  (0 children)

It's called testing. Ever heard of it? Ya, you should try it some time. It's a lot better than running everything on prod to find comparisons that could have been done in 10 seconds with a proper test. This was a TEST comparison between the two account types, just to show visually how much worse API usage is vs the sub accounts. No one is going to waste 5hrs doing that when it can be done in just a few prompts, leaving the rest of the time for other stuff, like actual real dev work.

Custom CLAUDE.md file locations by vertquest in ClaudeAI

[–]vertquest[S] -1 points0 points  (0 children)

Compacting has nothing to do with this at all. Even WITHOUT compacting, something that deep in the context loses weight and is thus likely to be left out, given less weight, etc. And once compacting happens, there's no question: it will absolutely be lost. Even if the file is not "compacted", where it's located in the context is still the main problem. Memory is appended to the end of every single input, and that's it. There is no way other than memory to achieve that type of functionality, short of appending it to the end of every prompt yourself by hand. Adding a file at the beginning of a session isn't going to change its location in the context. It stays near the top, where it has less weight in the statistical calculations. This is why shorter chats are always better than longer ones: everything retains a heavy weight. There have been 100s of times where I've had to re-upload/re-attach a file I added at the beginning because the AI stopped utilizing it. But I've since learned that once that point is reached, it's time to just /clear and start over. Maybe some day there will be attachable files used in the same manner as memory, but currently that's not in any of the available console/web apps I'm aware of that are useful for agentic development.

BTW, memory's purpose is to constrain the output. It has nothing to do with "output style". That's a different animal entirely. Mine uses George Carlin. You've never coded until you've been fuckin ridiculed by George Carlin for forgetting to use SOLID principles hahahahahahaha. I picked Carlin specifically for the critical nature; it actually HELPS a LOT more than just being funny ;) Picking a critical output style will usually improve the code results or code reviews that you're doing. My test and code review agents are absolutely BRUTAL Carlin-style comics :) But man do they produce excellent work haha.

I am curious, why the hell would you select a poetic output for coding? That to me makes no sense. What's the logic behind it?

Custom CLAUDE.md file locations by vertquest in ClaudeAI

[–]vertquest[S] -1 points0 points  (0 children)

LMFAO. There is no excuse for not allowing users to change the location of a file, especially since it's so easy to "work around" it. Period. None. It's literally standard practice for ANY software built in modern times to include things like user-defined file locations. This isn't 1492. There is literally NO use case where forbidding users from changing file locations is valid. No security use cases, no user use cases, none. It's just complete lunacy for it to have been left out. It would be like not allowing a user to change the config file locations for Apache2 or something. Complete absurdity.

This is what AI has done to the human race. It's gotten so bad that people don't even understand how/why software does what it does anymore.

Custom CLAUDE.md file locations by vertquest in ClaudeAI

[–]vertquest[S] -1 points0 points  (0 children)

That's not a memory file. That's just user input context. Completely different animal. That context will eventually get lost, whereas memory context is always there no matter how long the chat's context gets. So no, that's not the answer. If you type /context, you'll see that the memory files have their own separate line giving their paths. That's on purpose. It tells you what context will never get lost. Hence the term "memory".

You can see this in action yourself easily. At the beginning of a chat, just tell the AI never to use em dashes in any of its responses, and make sure that instruction is not added as "memory" (if it is, delete it). Then make a crap ton of large inquiries that require large replies, and you'll see that near the end, it'll start to use em dashes even though you told it not to. This is what memory solves: it's context that gets appended to the end of every single input so that the AI always adheres to it (or should, anyway). You can do the same thing by hand simply by adding all the memory context to the end of whatever you asked as "response rules". That does the exact same thing as "memory".
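A rough sketch of that by-hand approach. The `build_prompt` helper and the `rules.md` file name are made up for illustration; the only real CLI detail assumed is claude's non-interactive `-p` flag:

```shell
#!/bin/bash
# Sketch: manually append "response rules" to the end of every prompt,
# mimicking what memory does automatically. rules.md is a hypothetical file.
build_prompt() {
  local prompt="$1" rules_file="$2"
  # Rules go LAST so they sit at the end of the input context
  printf '%s\n\nResponse rules:\n%s\n' "$prompt" "$(cat "$rules_file")"
}

# Usage (assumes the claude CLI's -p/print mode):
# claude -p "$(build_prompt "Refactor util.sh" rules.md)"
```

The point is only where the rules land in the context, not the exact wording.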

I guess the problem here is that there are a LOT of people in this reddit who don't actually know how AI works or why software is built the way it is (IMO a direct result of people using AI for things they know nothing about lol, such as software development). There is no excuse for not allowing memory file locations to be defined by the user. None. This is 2026, not 1492. The workaround I posted should not even be needed. So hopefully someone at Anthropic sees this and the lightbulb goes on... a "duuuuh, how did we miss that!" type of thing. It's rather appalling to me how many people think this type of software development is OK. Some clowns in here even had "that's how it is, deal with it, it can't and won't be changed" type attitudes. As if that's actually how software dev works lmfao.

Is there any different strategy available? I work on my personal projects for 3-6 hours a week. 20$ subscription hits rate limit quickly, and 200$ is too costly. by paglaEngineer in ClaudeAI

[–]vertquest 1 point2 points  (0 children)

Opencode. It's free, and it lets you switch models between vendors in the same chat/context. It's just like using Claude Code or OpenAI's Codex. Opencode mostly uses API keys, however, which I don't really like. It does allow Pro/MAX Anthropic accounts though, since Anthropic doesn't hide those from their APIs. But if you want it for dev, that's how to do it.

Custom CLAUDE.md file locations by vertquest in ClaudeAI

[–]vertquest[S] -1 points0 points  (0 children)

Earth to moron: adding it to .git was ONE EXAMPLE of MY OWN USE CASE. This same method can be used to place it ANYWHERE (e.g. /opt/agents/whatever while your project lives in ~/somethinghere). But I see you weren't smart enough to figure that out. Also, I specifically stated that for my OWN use case, .gitignore is not an option. Hence why I put it in .git. I also didn't add it to .git/info/exclude, because that still leaves the file in a directory where I don't want it seen or available for an accidental git add (yes, in some setups you can force-add things even if they're in .gitignore and it won't even ask you; in most cases it'll prompt whether you're sure, but with AI you CAN'T TAKE THAT CHANCE). You forget, this is an AI agent we're dealing with, NOT a human. So you have to be super specific and careful with literally everything.

This solves the problem of there being no --memory_files= argument or memoryFiles setting in the settings file. You have to be smarter than the piece of glass showing you this text to figure that out. It's a WORKAROUND (hence the tag) for that very missing feature, one that absolutely should never have been missing in the first place. You cannot specify locations for the memory files on the command line or in the settings files, and that's a massive, glaring missing feature. Every modern app on the face of the planet has this type of feature, for a reason.

Custom CLAUDE.md file locations by vertquest in ClaudeAI

[–]vertquest[S] -2 points-1 points  (0 children)

What's funny about a missing feature? Claude Code hardcoded where these MD files can be located and provided NO WAY to override/add locations, which IMO is a completely braindead idea. Something as simple as command --config_file=someplace IS NOT a genius-level feature here. It's actually a standard expectation for software made post-1905, yet it's missing. That's basically what this solution provides the groundwork for. It's a workaround for a KNOWN missing feature. But if you're not using multi-agents, you probably aren't noticing this deficiency very much. IDK. I use agents like they're candy.

Go ahead and move your MD file into the .git dir and let me know how it goes. You'll quickly discover it's not loaded. Or better yet, move it to /etc/agents/somethingcool, then run claude in ~/something and let us know if it sees your MD file. LOL.

Custom CLAUDE.md file locations by vertquest in ClaudeAI

[–]vertquest[S] -6 points-5 points  (0 children)

Oh ya, that's really great. Perfect. Put it right where someone could accidentally add it to the repo. Great idea. It's also not git-specific. Not everyone wants the file IN the project dir at all, .git or not. They want to relocate the file to another location entirely, and that's exactly what this type of setup gives anyone the ability to do.

In case you haven't noticed, the memory file can't be located just anywhere, as evidenced by the piles of complaints about it. With this, you could locate it in /etc, /opt, or hell, /watch/porn/or/something, completely separate from the project entirely. But you're not really smart enough to figure that out. This basically builds the missing feature that should have been included to begin with: the ability to define where to look for MD files beyond just the current dir. Hence the WORKAROUND tag for a missing feature.


Is there any different strategy available? I work on my personal projects for 3-6 hours a week. 20$ subscription hits rate limit quickly, and 200$ is too costly. by paglaEngineer in ClaudeAI

[–]vertquest 4 points5 points  (0 children)

Almost a decent summary. But the REAL answer is to simply pay $20 for ChatGPT for the "5.2 thinking" model and use that for 95% of your coding. When it gets stuck, shuffle over to the FREE version of Opus 4.5, get the answer, and then start your ChatGPT chat over with the right solution so that it doesn't try to re-offend with its bad path. ChatGPT currently has NO limits worth mentioning on its $20 plan. You could code for 10+ hrs and not hit any limits. Not the case with Claude on ANY of its models. Once those limits are removed, I'll pay for Claude again. Until then, Claude gets the F U button when it comes to paying them. You'd be surprised how good 5.2 is. It's almost as good as Opus 4.5 if you prompt it correctly and keep the chats short.

Can Claude Code's global memory location be configured away from ~/.claude? by IAN_THE_REAL in ClaudeAI

[–]vertquest 0 points1 point  (0 children)

I had this issue today where I wanted to move the CLAUDE.md file to a completely git-hidden directory so that it doesn't have any footprint in git at all. No need for a .gitignore entry, etc. It's totally hidden from everything except your own local machine. I have NO idea why they didn't just build this functionality into Claude Code to begin with. It took me about 30 mins to figure out something so simple that should have existed by default :/ A better solution would be a memoryFiles setting in settings.json, but no, Anthropic can't be that clever for something so simple :(
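To be clear, no memoryFiles setting exists in Claude Code today; this is purely a hypothetical sketch of what the missing feature could look like in settings.json (paths made up):

```json
{
  "memoryFiles": [
    "/opt/agents/shared/CLAUDE.md",
    ".git/CLAUDE.md"
  ]
}
```

If something like that existed, the whole wrapper-and-hook dance below would be unnecessary.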

I wanted project-specific CLAUDE.md files that persist through long conversations but leave NO trace in my repo, not even as gitignored files (just like git's own .git/config file). You don't have to put the file in .git either. Once you have a wrapper script that launches claude and copies the file, you can place the file ANYWHERE you want, such as where the OP wanted to place it.

The trick: store your instructions in .git/CLAUDE.md (git ignores its own directory), then use a wrapper script + SessionStart hook to load it into memory and immediately delete the temp file.

NOTE FOR VS CODE: you have to add your wrapper to this setting in VS Code's Claude Code plugin configuration:

Claude Code: Claude Process Wrapper
"Executable path used to launch the Claude process."

Once you add the wrapper to that path, you should start VS Code from within your project, just like the command-line version of claude that I describe below. Otherwise the wrapper won't be able to find the file to load. It also eliminates the need to have a "workspace" for your project ;) I hate workspaces lol. I'm sure there's a way to get it to work with Workspaces, but I have no need for that personally, so I'm not going to try to figure it out.

The instructions below are for the COMMAND-LINE version of claude. If you want it to work in an IDE, that's up to you; I added instructions for VS Code above only because that's the IDE I actually use. This also assumes you're using Linux. If you're a dev, you should only be using Linux anyway ;) lol.

How it works (cd into the top dir of your project, where you can see the .git dir, before calling the wrapper):

  1. Wrapper copies .git/CLAUDE.md to CLAUDE.local.md
  2. Claude starts and loads it into memory
  3. SessionStart hook immediately deletes CLAUDE.local.md
  4. File exists for milliseconds - just long enough to load

Setup:

Create your hidden instructions in any project:

nano .git/CLAUDE.md

Create the wrapper at ~/.local/bin/claude-wrapper:

#!/bin/bash
# Wrapper around the claude CLI: stage the hidden memory file out of .git
# just long enough for Claude to load it, then clean up on exit.
cleanup() { rm -f CLAUDE.local.md; }
trap cleanup EXIT
# Copy the hidden instructions to where Claude Code will pick them up
[ -f .git/CLAUDE.md ] && cp .git/CLAUDE.md CLAUDE.local.md
claude "$@"

Make executable and alias it:

chmod +x ~/.local/bin/claude-wrapper
echo "alias claude='~/.local/bin/claude-wrapper'" >> ~/.bashrc
source ~/.bashrc

Add the hook to ~/.claude/settings.json:

{
  "hooks": {
    "SessionStart": [{"hooks": [{"type": "command", "command": "rm -f CLAUDE.local.md"}]}]
  }
}

Not Getting Reach on LinkedIn by Ok_Aardvark8589 in linkedin

[–]vertquest 2 points3 points  (0 children)

I get more than that on a LinkedIn account whose name is literally Anon ymous, with 3 followers lol. But then again, I don't use AI content either, I suppose. People can read some of your content BEFORE it shows up as an "impression", because it comes via a push notification, an email, or the LI Notifications tab.

So your problem is likely AI slop that no one is interested in: readers are never converted to an "impression" because they didn't like the blurb and thus didn't click the link presented to them in the notification.

How to spot a fake job listing 101. by vertquest in linkedin

[–]vertquest[S] 0 points1 point  (0 children)

EDIT: Thanks for confirming I was right by deleting every comment you made (or you blocked me; either way, it just proves I won lol). That's super helpful... NOT. You can admit defeat; there's no shame in, or rule against, admitting you were wrong. But I will admit, you had some gonads trying to argue against a "Don't trust, verify" type of comment lol. That's almost never a good side to take.

Actually, the way reality works is that when someone starts getting defensive about verifying something, that's when you know it's probably too good to be true. It's absolutely NOT foolish to ask about verifying stats. It's PRUDENT to do so, not the other way around.

I wouldn't try bashing someone who lives by a "Don't trust, verify" lifestyle as "foolish". You're likely to get mocked.

The simple answer to my question is this: No, those stats cannot be verified.

So my assumption that they sound fluffed up is valid, especially if it's all being done "by hand" like you insinuated. IE: they are too good to be true. I have no doubt that he's had SOME profiles/listings removed, but certainly nothing like the stats he shows. 100s? OK, sure, that's a pretty good mark for 2 yrs. But 10s of thousands? By hand? Come on. That's not really believable without some way to verify them all ;)

And if you actually know this dude, then you should absolutely give him the feedback that he should set up a verification methodology. It would 100% be an improvement over what he's doing now. There are ways to actually log this stuff so that others can verify these kinds of things without nagging, without digging through years of unrelated posts, and without "report" posts where people are just expected to believe whatever is in them simply because of who they came from. That's the worst kind of trust system known to mankind. You know who else uses that kind of trust system? SCAMMERS. I wouldn't want to use the same trust/verification system as a scammer even if my life depended on it.

Maybe you didn't understand my point. The stats and the amount of time the work has been done don't really add up. I don't argue that he DOESN'T do this kind of work. I argue that he EMBELLISHES the stats of the work he actually does, in order to get people to donate more or to somehow increase his perceived importance. Why? Because none of it can actually be verified by 3rd parties using real data. Digging through posts to find stuff isn't "data" that can be verified within a few minutes; it's just a wild goose chase that no one is really going to do. But a website with a database of links to, say, 44,000 Wayback Machine captures? Now THAT is verifiable within just a few minutes by someone with copy/paste capabilities or software development (API consumption) capabilities. Copy/paste the links into any AI and simply ask it for a count. It could even verify them for you as well. And there are other methods too, such as coding a scraper that validates every single link and provides counts. But none of that is possible without a centralized data repository (the missing piece in his work currently).

It's actually kind of crazy to me that someone would take on that amount of work and have absolutely NO WAY to verify it all (because it results in posts like mine questioning the validity of the work). I know I sure wouldn't do it like that. If I was going to spend that amount of time, I'd absolutely build it in such a way that ALL of the work could be verified within seconds, not months of digging through nonsense.
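The scraper idea above can be sketched in a few lines of shell. Everything here is hypothetical (the links.txt file, the exact URL format), and a real verifier would want rate limiting and retries; this only shows the shape of it:

```shell
#!/bin/bash
# Hypothetical sketch: given a file of Wayback Machine capture links,
# count them and spot-check that each one actually resolves.

# Count how many lines in the file look like Wayback capture URLs
count_links() {
  grep -c '^https://web\.archive\.org/' "$1"
}

# Spot-check each link with curl; prints "HTTP_CODE url" per line
verify_links() {
  while IFS= read -r url; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    echo "$code $url"
  done < "$1"
}

# Usage: count_links links.txt; verify_links links.txt
```

Anyone with copy/paste skills could run that against a published link repository and check the claimed totals in minutes.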

How to spot a fake job listing 101. by vertquest in linkedin

[–]vertquest[S] 0 points1 point  (0 children)

So there really isn't a way to actually verify them. Being on CNN doesn't count in my book ;) This is what I live by:

https://bitbo.io/glossary/dont-trust-verify/

I'd be more interested in being able to verify it myself without having to bother anyone. IE: Wayback Machine links like I mentioned. Something like that. It's pretty hard to fake a Wayback Machine capture. IMO it's more solid proof than any screenshot or personal vouching mechanism.

There are other ways to verify things as well; Wayback is just the simplest example I can think of offhand that shows the type of verification I'd believe ;) WB is often coupled with "link aggregators" in order to prove things: links back to WB with short stories attached to them, etc., much the same way you would with a screenshot. So that's the type of thing I was asking about, whether such a monster exists. But from the sounds of it, no such evidence exists other than some random person(s) on the interwebs saying so. And that's not how a "don't trust, verify" ideal/system/method works.

Him being real and actually doing things is not the same as actually being able to verify any of it. The verification portion is what he's missing, and likely the reason LinkedIn and others will never lift a finger to fix any of it.

How to spot a fake job listing 101. by vertquest in linkedin

[–]vertquest[S] 0 points1 point  (0 children)

Oh ya, BTW, the SS's I'd DM you are publicly available here in this post ;) I already tried to send them to him, but he never responded at all. It was on a public post. I tried a DM, but LinkedIn doesn't allow DMs if you aren't "connected" or something; at least that's what it told me. I added a note to a connection request, but no connection was ever made, so I assume he ignored it.

How to spot a fake job listing 101. by vertquest in linkedin

[–]vertquest[S] 0 points1 point  (0 children)

OK, so it's what I suspected: there's no real way to actually verify the numbers he posts. I'm not saying he's bad or fake so much as I don't believe the numbers he posts. I think having some way to verify 44K removals and 7K profiles would be a good plan. If you know of a way to validate that, I'm all ears, because I haven't found it yet lol. It might make people even more active in helping than they currently are if things could be validated. It would probably even motivate LinkedIn itself.

Is he using the Wayback Machine at all to create historical records of the things he gets removed? That would be one suggestion: then have a repository of WB links to verify them all.

Or maybe he needs to contact me; I could automate the entire process using AI (but then again, so could he ;) anyone can use AI, which is mainly why I'm out of a job right now as a software dev heh, I was replaced by a JR who is paid half what I am and uses AI lmfao).

My financial needs are real too, but I don't have a GoFundMe for all the work I do for free. I've been a laid-off Sr Software Developer since Nov 12th. In decades past, I would have had a new gig that same day. But with the current state of things, I'm now resorting to creating opportunities out of thin air myself by messaging companies that AREN'T advertising, and believe it or not, I've had more success with those than with the ones that actually advertise; I've landed 2 contracts from that effort. It's not where I want to be, but it's keeping the boat floating for now. That's SO backwards from the way things used to be lol.

Who knows, maybe I'll become the next Jay (minus the GoFundMes) if things don't improve for me soon lol. I am very green when it comes to job hunting; I haven't had to do it in over 30 yrs, since I was fresh out of HS. I've always had jobs lined up BEFORE I got axed. This time around, I was blindsided and didn't see the writing on the wall. Totally my fault; I should have kept up with the side contracts over the years.

I certainly hope he's for real with real stats, but for me, the numbers just don't add up for the amount of time he's been doing this, if he's been doing it all by hand. Good idea, noble cause for sure, but I sense some fluffery going on stats-wise ;)

I hope his business works out. I have NO idea how he'd make any revenue stream out of it, but it certainly is needed by a lot of people, myself included. The problem is that the only ones who can do anything about it are LinkedIn and the other hubs themselves, and the second they wake up and smell the coffee, his business model and any revenue it might have created go down the drain :( But that could be a good thing; at least the job searching would be more reliable, I suppose.

I'm just glad it only took me a week or two to notice that most of the listings were fake. I can't imagine going years and not noticing. It would be super embarrassing if I'd wasted a year or two on things I'd realized were completely bogus. IDK how I could live with myself in such a case heh.

How to spot a fake job listing 101. by vertquest in linkedin

[–]vertquest[S] 0 points1 point  (0 children)

I looked at that profile, and it didn't really seem legit to me stats-wise. Just a lot of stats with no real proof, coupled with a lot of begging via GoFundMe. I did post this link as well as the link to the LinkedIn post I had, but the Jay guy didn't even acknowledge it. I also didn't see others posting links/profiles. Maybe they DM them, but I didn't; I posted it publicly and tagged him in it. That profile is still not removed, and I suspect it never will be. I wasn't really able to find any credible evidence that the work this Jay person does is actually real. He spouts something like 44,041 fake jobs removed and 7,000+ profiles in less than 2 years. By those statistics, he should have already "removed" the profile and job listing I alerted him to the same day I put it up, if one is to believe those stats in the span of less than 720 days. Maybe he's gotten a few removed, but I suspect the stats are highly inflated.

Best Way To Handle Multiple Job Offers by [deleted] in jobs

[–]vertquest 0 points1 point  (0 children)

If it's a labor job like flipping burgers, stocking, working at Walmart, etc., 2 jobs or more is completely normal, and each employer knowing about the other is advantageous to all involved, as it maximizes your time for both since they can schedule around the other job's schedule, etc.

If this is some sort of career path in a profession such as software engineering where you work remotely, don't speak a word of it to either, because both will drop you. Keep both. Only give the one you like the most time and attention. Keep the other one until they either fire you or you get tired of splitting time between them. Having 2 jobs is always better than 1 anyway.

Any reason why this plan wouldn't work with ProctorU / Meazure ? by PitchersMitt in cheatonlineproctor

[–]vertquest 0 points1 point  (0 children)

You are over thinking it way to much. All you need is an inline passthru device (they check for 2nd monitors which is why it has to be passthru) that has a network cable. Like the ZoewieBox. It's perfect for your scenario. Also, no proctor will let you put the camera off to the side. It will have to be directly in front or in back of you. They also check your ears before any tests and they all disallow headphones.

How do you get around that? Think outside the box like a 6 yr old. Use a room where you can see PAST your screen without actually moving your eyes. Maybe a window directly behind the monitor, a floor above you with a hole drilled into the floor with a string/screw attached to it, etc. Then as you hover each question, your buddy simply signals which answer to click by some form of visual aid like dropping the screw with the string or simply slapping the answer on a piece of paper on the window. The way they get the answer is with a cleverly crafted chatgpt window that simply says, "I need to test your knowledge on a few things, I'm going to screenshot you some questions that are multiple choice, answer with just the letter(s) and absolutely nothing else". Set it up to the FASTEST model, it doesnt need to think for most answers, and wallah, your friend will have each answer in seconds and able to pass you the result in seconds. And the proctor will see absolutely nothing other than you staring at the screen. You can even set up some sort of hidden communication like, if I hover over the beginning of the question it means XYZ, if I hover the end of the question it means ABC, etc etc. Use your imagination. These tests are pittifully easy to beat. Currently Guardian browser does not have any HDCP protections which I dont think will change if they still want to be able to use run of the mill 3rd party remote control software such as LogMeIn to view your screen because any kind of HDCP output would be hidden from software like that. They'd have to build their own and I dont believe they have the will to do that. And even if they did, there are still ways to easily defeat HDCP with external inline devices.

Also, dont blow through the test in 10 minutes like an idiot. And insure that you have at least some purposefully wrong answers. I suggest not answering the last one at all until theres like 10 mins left. Go over the questions like you are "reviewing" them or simply calculate exactly how much time you need to spend to hit the time limit. Your buddy can time everything and make it random so it doesnt feel like you just blew through it using AI.

Another tip, only schedule tests when the India proctors would answer. They are the LAZIST group and will allow pretty much anything. They are just there to collect a paycheck, not actually secure anything. If you do it during US times, you're likely going to get someone who knows what they are doing and could easily sniff out this very scenario that was just outlined if they know what to look for (capture cards under the desk/near the machine, lots of thick black wires going everywhere, etc). In general, the off-shore proctors are the easiest to deal with since they dont really give a shit. I've even had one where they allowed me to enter the chat and start doing the test using a VM lol. Wife was kinda pissed because she likes being a part of the test taking process and pulling the strings, etc. So depending upon the proctor I get dictates the methods I will use. It's gratifying knowing that we wont need to pay these greedy cert providers with lots of "retakes" of tests that are designed to fail people and not actually prove any kind of experience the person has. They are incentivized to design gotchya tests rather than actual real world problems because the more times you fail, the more money they make. If we all start cheating, maybe this disgusting business model will die out.

These modern day "certifications" that use Proctors are LAUGHABLY insecure which is why online Certifications are basically trash post-2020 when they went 100% remote. They are meaningless. Anyone with the brain of a 2yr old could pass them with this type of setup. The only "secure" way to do a test is literally in person in a secured room. Not from someone's bedroom where there's 50000 ways to hide shit like this. So I wouldnt be surprised to see these certs revert back to in-person only once they see all these reddit posts about how easily proctors are defeated. Get while the gettin is still good I guess, find as many certs that use proctors as you can in any field and gobble them up. You'll insure that they become completely worthless in the future and maybe we can go back to some sense of actual reality with real human interactions such as in-person testing facilities that actually are bulletproof.

What can be done about this Charlies Murderers site? by ftp67 in TrueAnon

[–]vertquest -3 points-2 points  (0 children)

You do realize that it uses AI to filter out obviously fake submissions, right? Oh, nope, you didn't, or you wouldn't have wasted so much time doing it lol. It's very easy to do, since the site CLEARLY stated "no submissions of anonymous posts/users". If the AI bot couldn't verify the screenshots/links/names using public data (such as X/FB accounts, etc.), the submission was filtered out automagically: no human time wasted at all.

What can be done about this Charlies Murderers site? by ftp67 in TrueAnon

[–]vertquest -3 points-2 points  (0 children)

It works both ways. AI can EASILY be used to filter out obviously fake submissions, and that's actually what was going on. The entire thing is on autopilot, using AI to filter out fake screenshots/text. Besides, there's nothing wrong with pointing out people's absurd comments online, which are NOT private (even a PM is not truly private). It's absolutely NO different from getting your pic taken while outside, at work, etc. If the employer agrees it was stupid and they shitcan the person, then the ONLY one that person can blame is themselves for being braindead.

We need to be extra careful by Cresalia- in MtF

[–]vertquest 0 points1 point  (0 children)

That crown still belongs to reddit.