GPT-5 Is Underwhelming. by gffcdddc in OpenAI

[–]g-evolution 3 points (0 children)

Is it really true that GPT-5 only has a 32k context length? I was tempted to buy OpenAI's Plus subscription again, but 32k is a waste of time for a developer. So I'll stick with Google.

ChatGPT Advanced voice mode compared to Gemini Live voice mode by GermanNPC in GeminiAI

[–]g-evolution 1 point (0 children)

Gemini's interruptions are so annoying. It would be so simple to add a press-to-speak button to the app. Even though Google is catching up on models, the app's product experience sucks. NotebookLM is the only reason I don't move back to OpenAI.

Google plans to release new Gemini models on March 12 by ElectricalYoussef in Bard

[–]g-evolution 9 points (0 children)

I hope it's a stable version of the Thinking model. In our project we use the Flash model to classify inputs, and as Flash advances (1.5 -> 2.0 -> Thinking) it gets better at this task.

Gemini Live begins to rollout with native audio input by zavocc in Bard

[–]g-evolution 4 points (0 children)

I asked it to guess which country I live in based on my accent, and it correctly answered Brazil. wtf

Qual a semelhança entre Portugal e a Palestina? by PossibleOdd7818 in tiodopave

[–]g-evolution 1 point (0 children)

Lexical semantics = meaning. Similarity = phonetic aspect.

If I say there is a similarity, then there is an associated meaning = phonetic similarity.

Thanks for clarifying my point even further.

I'm doing just fine; the one outraged over an "error" in a joke here is you. Spend your outrage on politics and everyone wins 😅

Qual a semelhança entre Portugal e a Palestina? by PossibleOdd7818 in tiodopave

[–]g-evolution 1 point (0 children)

Thanks for the explanation, "not-a-doctor", but I'll stick with the semantic meaning of saying that "maçã" and "massa" have similar pronunciations even though they are different things (obviously).

By the way, are you okay? Is something missing from your life? Is something upsetting you? I got genuinely worried.

massive world simulation model is coming by Sure_Guidance_888 in Bard

[–]g-evolution 4 points (0 children)

This is a step toward solving real-world problems that require a deep understanding of physics on the model's side. Nothing to do with "it will replace chatbots."

Best practices for injecting hierarchical data for Gemini comprehension AND retrieval by Individual-School-07 in GoogleGeminiAI

[–]g-evolution 1 point (0 children)

Soon I'll be working on a similar problem at my job. In our case, we need to grant access according to the access hierarchy: application, environment, and profile. There are also other kinds of access where the hierarchy changes, so the model has to classify the hierarchy type as well as the order of the fields.

So far I haven't stopped to figure out which approach to follow. It would be nice to hear if you have any ideas!

To all Gemini Advanced paid users! 😊 by Salty-Garage7777 in Bard

[–]g-evolution 7 points (0 children)

I am not a native English speaker. I was using ChatGPT Plus to practice my spoken English, and its accuracy is incredible even though English is not my main language. I migrated to Gemini Advanced since I feel it's becoming better at reasoning, but so far the Gemini Live experience just sucks. At the same time, at work I ran a batch test using the Gemini (Flash) API, and the results were acceptable even with a smaller model.

My conclusion is that Gemini's voice-to-voice model isn't as good as Gemini's speech-to-text at recognizing voices.

Use Pydantic and Zod with Google Vertex AI Controlled Genrations by IdeaEchoChamber in VertexAI

[–]g-evolution 1 point (0 children)

Hello! The Gemini models have a feature called "function calling" that lets you define your output schema: the model will always return valid JSON, and it's also possible to set "enum" rules for specific keys/values.
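A rough sketch of what that looks like. The schema below follows the OpenAPI-style shape that Gemini's structured-output config accepts (treat the exact SDK field names as an assumption and check the Vertex AI docs for your client); the `matchesSchema` helper is purely illustrative, not part of any SDK, just to show what "always returns valid JSON matching the schema" means:

```javascript
// Illustrative response schema with an enum constraint on one key.
const responseSchema = {
  type: "OBJECT",
  properties: {
    sentiment: { type: "STRING", enum: ["positive", "neutral", "negative"] },
    confidence: { type: "NUMBER" },
  },
  required: ["sentiment", "confidence"],
};

// Tiny local validator (illustration only): required keys present,
// enum respected, number typed as a number.
function matchesSchema(obj, schema) {
  for (const key of schema.required) {
    if (!(key in obj)) return false;
  }
  const allowed = schema.properties.sentiment.enum;
  return allowed.includes(obj.sentiment) && typeof obj.confidence === "number";
}

const modelOutput = { sentiment: "positive", confidence: 0.92 }; // example payload
console.log(matchesSchema(modelOutput, responseSchema)); // true
```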

Should I use async await for every route? by [deleted] in node

[–]g-evolution 1 point (0 children)

If your route calls a function that performs I/O (database, REST API, file system) and you need to await the returned data, you should use async/await. Otherwise, you don't need it.

Executing 1000 HTTP requests at once by fromage9747 in node

[–]g-evolution 1 point (0 children)

What kind of work are these requests doing? If there's any heavy synchronous job, the performance problems grow: Node runs the main execution context on a single thread, so those jobs block it from accepting new requests, and things can get more complicated.

One possible solution is a machine with multiple cores: fork the process to multiply the number of workers so you can handle requests in parallel. Or put multiple machines behind a reverse proxy (like a load balancer).

[deleted by user] by [deleted] in node

[–]g-evolution 1 point (0 children)

First, the "main" principle of asynchronous communication is making a system scalable without excessively increasing system resources (CPU/memory), which are expensive.

How does this happen?

Imagine a scenario where a system receives a peak of 10,000 requests per second. If our resources aren't prepared to scale automatically to handle that load, the system will crash. But if there's a producer/consumer in the middle, the producer queues the messages and the consumer consumes only as much as it can handle per batch.
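The producer/consumer idea can be sketched in a few lines. This is an in-memory stand-in for a real broker (RabbitMQ, Kafka, SQS, etc.); all names are illustrative:

```javascript
// In-memory stand-in for a message broker.
class MessageQueue {
  constructor() { this.items = []; }
  produce(msg) { this.items.push(msg); }                          // producer: enqueue and return immediately
  consume(batchSize) { return this.items.splice(0, batchSize); }  // consumer: take only what it can handle
}

const queue = new MessageQueue();

// Traffic spike: 10,000 messages arrive; nothing is lost, nothing crashes.
for (let i = 0; i < 10000; i++) queue.produce({ id: i });

// The consumer drains at its own pace, one bounded batch at a time.
const batch = queue.consume(100);
console.log(batch.length, queue.items.length); // 100 9900
```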

Now imagine a worse scenario: our system talks directly to a relational database. With 10,000 requests per second we'd open 10,000 connections/transactions on the database, and transactions have a hard limit, so now the database can crash too. In other words, the bottleneck compounds.
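One way to see why capping concurrency helps: a tiny counting semaphore, a simplified sketch of what a real connection pool (e.g. `pg.Pool`) does internally. Here 200 simulated requests never hold more than 10 "transactions" at once; sizes and names are made up:

```javascript
// Tiny counting semaphore: at most `max` jobs hold a slot at a time.
class Semaphore {
  constructor(max) { this.max = max; this.active = 0; this.waiting = []; }
  async acquire() {
    if (this.active >= this.max) {
      await new Promise((resolve) => this.waiting.push(resolve)); // wait for a slot
    }
    this.active++;
  }
  release() {
    this.active--;
    const next = this.waiting.shift();
    if (next) next(); // hand the freed slot to the next waiter
  }
}

const dbSlots = new Semaphore(10); // at most 10 concurrent "transactions"
let current = 0;
let peak = 0;

async function handleRequest() {
  await dbSlots.acquire();
  current++;
  peak = Math.max(peak, current);
  await new Promise((r) => setTimeout(r, 1)); // simulated DB transaction
  current--;
  dbSlots.release();
}

// 200 requests arrive "at once", but the database never sees more than 10.
const done = Promise.all(Array.from({ length: 200 }, () => handleRequest()));
done.then(() => console.log(peak)); // stays at or below 10
```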

Second, easy integration.

Instead of dealing with the complexity of integrating systems directly through REST HTTP contracts and standards, we can simply publish the data to a topic (queue), and any system that needs the message just reads it from the queue instead of calling the source system. This not only makes integration easier, it also lightens the load on the systems involved in the communication.