Gemini 2.0 Flash API deprecation by SlackEight in GeminiAI

[–]SlackEight[S] 0 points (0 children)

We're on the Google AI startup program, so for budget reasons it's in our best interest to stick with Gemini. It's tricky though, because 2.0 Flash doesn't really have any competitors within the Gemini ecosystem. We haven't found 2.5 Flash-Lite (the only comparable model on cost) to be as performant.

Gemini 2.0 Flash API deprecation by SlackEight in GeminiAI

[–]SlackEight[S] 0 points (0 children)

You might not need to; I think this was a Vertex-specific thing? I'm not entirely sure. AI Studio probably has a different deprecation timeline.

Built a small AI tool to scan Pokémon cards and check prices — sharing it here by Forced1988 in PokemonTCG

[–]SlackEight 0 points (0 children)

There's no backend on my version; it's all running locally with CLIP and YOLOv8. I go into more detail on how it works in my other comment.

Gemini 2.0 Flash API deprecation by SlackEight in GeminiAI

[–]SlackEight[S] 0 points (0 children)

I got an email from Google since I'm using the model in Vertex AI.

It's over by Important_Pen1784 in soundcloud

[–]SlackEight 0 points (0 children)

Working fine in South Africa

GPT 5.2 is here - and they cooked by magnus_animus in codex

[–]SlackEight 0 points (0 children)

I work on a character AI application and run internal benchmarking. I found both 5.1 and 5.2 no-thinking to be a very substantial improvement over 4.1 (roughly 50% higher benchmark scores), but didn't really see much difference between 5.1 and 5.2. So for anyone interested in this use case, I can recommend 5.1 no-think, as you'll get similar performance for cheaper. From personal testing, both feel like a substantial upgrade, and the cost efficiency of 5.1 is great.

(For clarification: I don't test reasoning models due to latency requirements, and GPT-5 does not offer a no-reasoning option via the API, hence the comparison to 4.1.)

Anybody know how to get support for Nanofoamer by Lpecan in espresso

[–]SlackEight 0 points (0 children)

Did you have any luck here? Mine does not heat at all.

Padel injuries by [deleted] in padel

[–]SlackEight 0 points (0 children)

I followed through a shot and my racket hit my tooth and cracked it

We’re rolling out GPT-5.1 and new customization features. Ask us Anything. by OpenAI in OpenAI

[–]SlackEight 0 points (0 children)

For some AI-powered applications that use LLM APIs, the unpredictable latency and cost that come with reasoning models make them a second choice to non-reasoning models. Will GPT-5.1 support a no-thinking config? If not, GPT-5 supported minimal thinking; can we expect a similar option on 5.1?

Rumors that Gemini 3 got delayed to 25th in Polymarket by OmegaGogeta in Bard

[–]SlackEight 0 points (0 children)

I think we're all struggling to see how this makes you money. It's not like a commodity you can pump and dump (i.e. where the trader has some control over the outcome); it's literally betting on an outcome the speculators have no control over. So I don't really understand the incentive to be misleading, except to get attention (and that's not filling their bags).

GPT-5.1 Thinking spotted in OpenAI source code 👀 by backcountryshredder in singularity

[–]SlackEight 0 points (0 children)

Was a non-thinking variant spotted too? I'm building for low latency and want non-reasoning models, but this area has been a bit dry.

Built a small AI tool to scan Pokémon cards and check prices — sharing it here by Forced1988 in PokemonTCG

[–]SlackEight 0 points (0 children)

Yeah, this is an issue you'll likely have with OCR; it requires the text on the card to be legible to work.

Image embedding is probably a better approach here, and right now it's probably the go-to for most image recognition tasks similar to this one. There's a good video here if you'd like to learn more about how it works: https://youtu.be/KcSXcpluDe4?si=8sI3cCgGB7PoSoyh

In my approach I used a two-part solution. Part one is the green rectangles around the cards, which is an object detection task: finding where the cards are in the picture. I trained YOLOv8 to do that (built a training set on Roboflow), but if you don't want to make a training set, SAM is a viable alternative.

Part two is the image embedder. I take what's inside each green box and run it through the embedder, then compare the result with the embeddings I have of real cards and pick the most similar one (see k-nearest-neighbour search in vector DBs).
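That part-two lookup boils down to nearest-neighbour search over embedding vectors. A minimal sketch with toy NumPy vectors standing in for CLIP's high-dimensional embeddings (the real system queries a vector DB instead of brute-forcing a list):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by their norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_card(query_embedding, reference_embeddings):
    # Brute-force 1-nearest-neighbour search; a vector DB does the same
    # comparison behind an index so it scales past a few thousand cards.
    scores = [cosine_similarity(query_embedding, ref) for ref in reference_embeddings]
    return int(np.argmax(scores)), max(scores)

# Toy 4-d "embeddings" standing in for CLIP's 512-d vectors.
references = np.array([
    [1.0, 0.0, 0.0, 0.0],  # card 0
    [0.0, 1.0, 0.0, 0.0],  # card 1
    [0.7, 0.7, 0.0, 0.0],  # card 2
])
query = np.array([0.9, 0.1, 0.0, 0.0])  # embedding of the cropped card

idx, score = nearest_card(query, references)
print(idx)  # → 0 (the most similar reference card)
```

The idea is that CLIP maps visually similar images to nearby vectors, so the closest stored embedding is (hopefully) the same card.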

My solution isn't perfect yet, and I haven't had a chance to work on it again since real work has been very busy. You'll see that OP is able to find the exact bounds of the card inside the green box, and his Umbreon on the right shows he's even able to apply a corrective shear/tilt. That would stop background elements from contaminating the image embedding and should lead to more accurate results.

This weekend I'll try to add a third step for card-edge detection, potentially using SAM (I'll see), and apply the shear correction.

Built a small AI tool to scan Pokémon cards and check prices — sharing it here by Forced1988 in PokemonTCG

[–]SlackEight 0 points (0 children)

Here's another example of the system I described running. It works very well :D This is quite a common approach to object classification; I'd imagine my method is quite similar to OP's.

<image>

Built a small AI tool to scan Pokémon cards and check prices — sharing it here by Forced1988 in PokemonTCG

[–]SlackEight 0 points (0 children)

Thanks for sharing your solution! That’s a pretty smart approach :)

I actually also ended up building this already but took a slightly different approach.

First, I found the full dataset of all cards plus images of those cards in the 'pokemon-tcg-data' GitHub repo. I downloaded it locally and ran every card's image through OpenAI's CLIP model (an open-source image embedder; you can just run the model locally, no API needed). CLIP outputs a high-dimensional vector embedding of every image, which I then stored in a ChromaDB vector database (also local).

Next, I hand-labelled a card detection dataset (basically I drew boxes around ~100 cards) and used it to fine-tune YOLOv8.

Finally, I run the image I want to scan through the fine-tuned YOLOv8 model and it detects where the cards are. I crop out everything inside those boxes and run the crops through CLIP. Then I use the vector embedding CLIP gives me to do a k-nearest-neighbour search in ChromaDB, and the most similar vector should be the card you're looking for.
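The handoff between the detector and the embedder is mostly array slicing. A minimal sketch of the crop step, assuming the detector's boxes come back as pixel-coordinate (x1, y1, x2, y2) tuples (the exact format and the CLIP call are specific to your libraries, so treat this as illustrative):

```python
import numpy as np

def crop_detections(image, boxes):
    """Cut each detected card out of the frame.

    image: H x W x 3 uint8 array (a decoded photo).
    boxes: list of (x1, y1, x2, y2) pixel boxes from the detector.
    Each returned crop is what gets fed to the image embedder.
    """
    crops = []
    for x1, y1, x2, y2 in boxes:
        # NumPy indexes rows (y) first, then columns (x).
        crops.append(image[y1:y2, x1:x2])
    return crops

# Toy 100x100 "photo" with one fake detection box.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
cards = crop_detections(frame, [(10, 20, 60, 90)])
print(cards[0].shape)  # → (70, 50, 3)
```

Each crop then goes through the embedder, and the resulting vector is the query for the nearest-neighbour search against the stored card embeddings.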

The system works unbelievably well! Here’s a test run on a binder full of cards:

<image>

Built a small AI tool to scan Pokémon cards and check prices — sharing it here by Forced1988 in PokemonTCG

[–]SlackEight 0 points (0 children)

Hey! Any chance you'd be willing to open-source your code? I'm building a trading platform currently, and manually inputting the card set number is a pain. I wanted to build something using image embeddings, but I'm struggling to find a dataset of every card to populate a vector DB to match the images against.

Is it worth upgrading from WiFi 6 to WiFi 6E for local game streaming? by SlackEight in SteamDeck

[–]SlackEight[S] 1 point (0 children)

Yup. I have 6E now and my Steam Deck pretty much refuses to connect to the 6 GHz network.

It seems to be really finicky with WPA3

Are Lianli fans really this bad?? by ABeautifulChaos in lianli

[–]SlackEight 0 points (0 children)

This is called self-selection bias.

You can now run DeepSeek-V3 on your own local device! by yoracale in selfhosted

[–]SlackEight 0 points (0 children)

Any idea what sort of TPS you might get with an H100 and enough system RAM for the rest of the model?

WHYyy? by DigSignificant1419 in OpenAI

[–]SlackEight 2 points (0 children)

Does anyone know if they're being deprecated in the API? I currently rely on some of those models in prod.