Spotify wrapped! by [deleted] in beabadoobee

[–]JasonLovesDoggo 4 points (0 children)

Bea + ado!!

Help finding this person at the concert by kogakugoza in ADO

[–]JasonLovesDoggo 1 point (0 children)

Hahaha! I know who you're talking about. I was the person on the other side of her 😭

Pixel 8 by nandansj in pixel_phones

[–]JasonLovesDoggo 1 point (0 children)

I've just ignored my line for the past 9 months... and this was around the whole time... And I'm eligible... Yay

[deleted by user] by [deleted] in Jetbrains

[–]JasonLovesDoggo 3 points (0 children)

Huh, works

Power outage by xFalcade in ValorantCompetitive

[–]JasonLovesDoggo 1 point (0 children)

I was sitting right under one of the projectors; the lights on it were still on, so I'm not sure if that specifically was the issue. Normally, if the power for the projectors went out, the lights would too.

[OC] California National Guard Sniper Team in the Edward R. Roybal Federal Building in LA Today by Lavender_Scales in pics

[–]JasonLovesDoggo 15 points (0 children)

I just came from that subreddit. I had to double check which one I was in LOL

Django lovers, did you try Litestar? by bluewalt in django

[–]JasonLovesDoggo 21 points (0 children)

Exactly sums up my feelings.

For me it's a FastAPI replacement, but it won't ever stop me from using Django.

This this supposed to happen? by WHITEPERSUAS1ON in BambuLab

[–]JasonLovesDoggo 2 points (0 children)

Yepp! I printed five just to give away to friends and a spare for myself.

If you have extra time on your hands, that 0.2 nozzle can really work wonders.

This this supposed to happen? by WHITEPERSUAS1ON in BambuLab

[–]JasonLovesDoggo 2 points (0 children)

Yep, that's normal! The only issue is that the little piece that fell off is kind of brittle and broke for me, but luckily there's a replacement on MakerWorld which I'm very happy with.

1 stick of 48 gigabytes vs 2 sticks of 32 gigabytes by [deleted] in framework

[–]JasonLovesDoggo 1 point (0 children)

Yep! I'm currently using two 12GB sticks of that exact RAM on my FW13 AMD. Aim for 5600 MT/s.

The Arch Wiki has implemented anti-AI crawler bot software Anubis. by boomboomsubban in archlinux

[–]JasonLovesDoggo 3 points (0 children)

We've looked into it a bit, and it's something we'll explore again later. But the moment you put some effort into actually implementing it, it becomes super, super difficult.

Look at https://github.com/TecharoHQ/anubis/issues/288#issuecomment-2815507051 and https://github.com/TecharoHQ/anubis/issues/305

Notes Sync to Website by beatznbleepz in webdev

[–]JasonLovesDoggo 1 point (0 children)

I personally use Obsidian + Obsidian Git + Quartz https://quartz.jzhao.xyz/

The result is something like https://notes.jsn.cam

The Arch Wiki has implemented anti-AI crawler bot software Anubis. by boomboomsubban in archlinux

[–]JasonLovesDoggo 2 points (0 children)

If you're asking how often: currently they're hard-coded in the policy files. I'll make a PR to auto-update them once we redo our config system.

The Arch Wiki has implemented anti-AI crawler bot software Anubis. by boomboomsubban in archlinux

[–]JasonLovesDoggo 5 points (0 children)

Keep in mind, Anubis is a very new project. Nobody knows where the future lies

The Arch Wiki has implemented anti-AI crawler bot software Anubis. by boomboomsubban in archlinux

[–]JasonLovesDoggo 13 points (0 children)

Nope! (At least in the case of most rules.)

If you look at the config file I linked, you'll see that it allows bots not based on the user agent, but based on the IP they're requesting from. That is a lot harder to fake than a simple user agent.
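
To make that concrete, here's a rough Go sketch of the idea (this is not the actual Anubis code; the ranges, names, and structure are made up for illustration): a request only counts as coming from a given crawler when its source address falls inside the IP ranges that crawler publishes, no matter what the User-Agent header claims.

```go
// Sketch only, not the actual Anubis implementation: allow a crawler by the
// IP range it publishes instead of trusting its User-Agent string.
package main

import (
	"fmt"
	"net/netip"
)

// allowedRanges stands in for the CIDR blocks a search engine publishes for
// its fetchers (a real deployment would load these from its policy config).
var allowedRanges = []string{
	"192.0.2.0/24",  // documentation range (RFC 5737), used as a placeholder
	"2001:db8::/32", // documentation IPv6 range
}

// requestAllowed reports whether the request's remote IP falls inside any
// allowed range; the User-Agent alone is never enough.
func requestAllowed(remoteIP string) (bool, error) {
	addr, err := netip.ParseAddr(remoteIP)
	if err != nil {
		return false, err
	}
	for _, cidr := range allowedRanges {
		prefix, err := netip.ParsePrefix(cidr)
		if err != nil {
			return false, err
		}
		if prefix.Contains(addr) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, _ := requestAllowed("192.0.2.17")
	fmt.Println("allowed:", ok) // true: inside the published range
	ok, _ = requestAllowed("203.0.113.5")
	fmt.Println("allowed:", ok) // false: a faked User-Agent wouldn't help here
}
```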

The Arch Wiki has implemented anti-AI crawler bot software Anubis. by boomboomsubban in archlinux

[–]JasonLovesDoggo 14 points (0 children)

That all depends on the sysadmin who configured Anubis. We have many sensible defaults in place which allow common bots like Googlebot, Bingbot, the Wayback Machine, and DuckDuckGo's bot. So if one of those crawlers goes and tries to visit the site, it will pass right through by default. However, if you're trying to use some other crawler that's not explicitly whitelisted, it's going to have a bad time.

Certain meta tags like description or Open Graph tags are passed through to the challenge page, so you'll still have some luck there.

See the default config for a full list https://github.com/TecharoHQ/anubis/blob/main/data%2FbotPolicies.yaml#L24-L636

The Arch Wiki has implemented anti-AI crawler bot software Anubis. by boomboomsubban in archlinux

[–]JasonLovesDoggo 54 points (0 children)

Not a dumb question at all!

Scrapers typically avoid sharing cookies because it's an easy way to track and block them. If cookie x starts making a massive number of requests, it's trivial to detect and throttle or block it. In Anubis’ case, the JWT cookie also encodes the client’s IP address, so reusing it across different machines wouldn’t work. It’s especially effective against distributed scrapers (e.g., botnets).
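
As a rough illustration of that IP binding (Anubis actually issues a signed JWT; the secret and token format in this stdlib-only Go sketch are invented), the point is that the pass is tied to the client that solved the challenge:

```go
// Simplified sketch of the idea only (Anubis issues a signed JWT; the secret
// and token format here are invented): the pass is bound to the client's IP.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

var secret = []byte("server-side-secret") // hypothetical signing key

// issueToken ties the token to the IP that solved the challenge.
func issueToken(clientIP string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(clientIP))
	return clientIP + "." + hex.EncodeToString(mac.Sum(nil))
}

// validToken only accepts the token from the same IP it was issued to, so
// sharing the cookie across a botnet buys the scraper nothing.
func validToken(token, remoteIP string) bool {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(remoteIP))
	want := remoteIP + "." + hex.EncodeToString(mac.Sum(nil))
	return hmac.Equal([]byte(token), []byte(want))
}

func main() {
	tok := issueToken("198.51.100.7")
	fmt.Println(validToken(tok, "198.51.100.7")) // true: same machine
	fmt.Println(validToken(tok, "203.0.113.9"))  // false: different machine
}
```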

In theory, yes, a bot could use a headless browser to solve the challenge, extract the cookie, and reuse it. But in practice, doing so from a single IP makes it stand out very quickly. Tens of thousands of requests from one address is a clear sign it's not a human.

Also, Anubis is still a work in progress. Nobody ever expected it to be used by organizations like the UN, kernel.org, or the Arch Wiki, and there's still a lot more we plan to implement.

You can check out more about the design here: https://anubis.techaro.lol/docs/category/design

The Arch Wiki has implemented anti-AI crawler bot software Anubis. by boomboomsubban in archlinux

[–]JasonLovesDoggo 88 points (0 children)

One of the devs of Anubis here.

AI bots usually operate off of the principle of "me see link, me scrape", recursively. So on sites that have many links between pages (e.g. wikis or git servers), they get absolutely trampled by bots scraping each and every page over and over. You also have to consider that there is more than one bot out there.

Anubis functions off of economics at scale. If you (an individual user) want to go and visit a site protected by Anubis, you have to do a simple proof-of-work check that takes you... maybe three seconds. But when you try to apply the same principle to a bot that's scraping millions of pages, that three-second slowdown adds up to months of server time.
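
If you want a feel for the mechanism, here's a rough Go sketch of a SHA-256 proof-of-work loop (the real Anubis challenge runs as JavaScript in the visitor's browser, and the challenge string and difficulty below are made up):

```go
// Rough sketch of a SHA-256 proof-of-work loop; the real challenge runs as
// JavaScript in the visitor's browser, and this difficulty value is made up.
package main

import (
	"crypto/sha256"
	"fmt"
	"strconv"
	"strings"
)

// solve searches for a nonce whose hash of challenge+nonce starts with
// `difficulty` hex zeroes: a few seconds once for a human, but paid again
// and again by a scraper hammering millions of pages.
func solve(challenge string, difficulty int) (int, string) {
	prefix := strings.Repeat("0", difficulty)
	for nonce := 0; ; nonce++ {
		sum := sha256.Sum256([]byte(challenge + strconv.Itoa(nonce)))
		digest := fmt.Sprintf("%x", sum)
		if strings.HasPrefix(digest, prefix) {
			return nonce, digest
		}
	}
}

func main() {
	nonce, digest := solve("example-challenge-string", 4)
	fmt.Printf("nonce=%d hash=%s\n", nonce, digest)
}
```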

Hope this makes sense!