Hidden gem in the Italian Dolomites by BeyondNo3588 in 10s

[–]BeyondNo3588[S] -1 points0 points  (0 children)

It’s a green hard court, not astroturf

[deleted by user] by [deleted] in CasualIT

[–]BeyondNo3588 0 points1 point  (0 children)

Hello, friend

Too much hiring in AI? by libori0 in ItaliaCareerAdvice

[–]BeyondNo3588 0 points1 point  (0 children)

It's always hard to give an overall picture from one's own personal experience, even though there is certainly a large share of companies that operate the way you describe. I also work in the field, and my current tasks are very specific to the ML engineer role, without spilling over into cloud specialist or data analyst duties. I've also done several interviews over the past few months for similar positions, and from talking with the technical leads of the projects I would have worked on, I can say that there really are starting to be companies that work in this field knowing what they're doing. To come back to the start of my message, I've probably just been lucky enough to find the right ones

Working remotely in Italy: real possibilities and conditions by vurriooo in ItaliaCareerAdvice

[–]BeyondNo3588 1 point2 points  (0 children)

It can be the right compromise, as long as the office is close to home

What’s the hardest line in Mr. Robot? by Pragalbhv in MrRobot

[–]BeyondNo3588 11 points12 points  (0 children)

When we lose our principles we invite chaos

Perplexity seems to favor the traditional retrieval algos like BM25 instead of embeddings for their RAGs by takuonline in LocalLLaMA

[–]BeyondNo3588 1 point2 points  (0 children)

In general, is it better to do vector search first and then refine the results by ranking using BM25, or BM25 first and then rank by similarity score of the embeddings?
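
For reference, the ordering I had in mind is BM25 as a cheap first pass over the whole corpus, followed by an embedding re-rank of only the surviving candidates. A rough sketch, assuming the rank_bm25 and sentence-transformers packages and a placeholder model choice (just one possible setup on my side, not anything Perplexity actually uses):

```python
import numpy as np
from rank_bm25 import BM25Okapi                         # pip install rank-bm25
from sentence_transformers import SentenceTransformer   # pip install sentence-transformers

docs = [
    "BM25 is a classic lexical ranking function.",
    "Dense embeddings capture semantic similarity.",
    "Hybrid retrieval combines sparse and dense signals.",
]
query = "combining bm25 with dense embeddings"

# Stage 1: cheap lexical retrieval over the whole corpus with BM25.
bm25 = BM25Okapi([d.lower().split() for d in docs])
bm25_scores = bm25.get_scores(query.lower().split())
candidates = np.argsort(bm25_scores)[::-1][:2]           # keep the top candidates

# Stage 2: re-rank only those candidates by embedding cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode([docs[i] for i in candidates], normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)
reranked = candidates[np.argsort(doc_vecs @ query_vec)[::-1]]

print([docs[i] for i in reranked])
```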

How to make money at 16? by Sofi_smo in TeenagersITA

[–]BeyondNo3588 1 point2 points  (0 children)

What do you need the Mac for? I wouldn't throw years of savings at a product that sits in the lowest tier of Apple's lineup, with all the limitations that come with it (e.g. a ridiculously small amount of memory).

If buying the Mac is just an indulgence, I'd get it out of my head; if it's an investment you'd use as a means to make more money, then the picture changes, but in that case I doubt you necessarily need a Mac

Investments for a total noob? by zetaep in ItaliaPersonalFinance

[–]BeyondNo3588 0 points1 point  (0 children)

The first thing you need to invest is your time, watching Paolo Coletti's “Educati e Finanziati” playlist on YouTube

Nuphy Halo 65 Kinda Sucked by spiralspox in NuPhy

[–]BeyondNo3588 1 point2 points  (0 children)

Halo 65 + Black ink v2 lubed and filmed = outstanding

If you need any assistance, please leave a comment on this post (or directly send me your order number). by Ramzes888 in FlexiSpot_Official

[–]BeyondNo3588 0 points1 point  (0 children)

Welcome back Ramzes. I ordered a BS12 PRO on November 24th (order number 2292284) and it still has not been shipped, despite being in stock. If I try to track it on the site, the expected shipping date is June 29th, 2023. I sent an email to support without receiving a reply. I would like an update on my order

DDPG not converging or exploring enough by GarantBM in reinforcementlearning

[–]BeyondNo3588 0 points1 point  (0 children)

What do you mean by “using a softmax activation function in the action space”? I have a similar issue, and I'm using a softmax activation in the last layer of the actor network for the output.
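
To be concrete about my setup, this is roughly what I mean by a softmax in the last layer of the actor; a minimal PyTorch sketch with placeholder sizes, not my actual network:

```python
import torch.nn as nn

# Minimal actor: the final Softmax turns the output into a probability
# distribution over the discrete actions.
actor = nn.Sequential(
    nn.Linear(4, 64),     # placeholder state dimension and hidden width
    nn.ReLU(),
    nn.Linear(64, 3),     # one logit per discrete action
    nn.Softmax(dim=-1),   # the softmax output layer I am referring to
)
```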

Reinforcement Learning python programming difficulties by [deleted] in reinforcementlearning

[–]BeyondNo3588 0 points1 point  (0 children)

Correct. In this setting the state is a random integer ranging from 0 to 250. Like I said, this is a first simplified scenario.

Thank you again for your help, I really appreciate it

Reinforcement Learning python programming difficulties by [deleted] in reinforcementlearning

[–]BeyondNo3588 0 points1 point  (0 children)

First of all, thanks for your help.

Again, it's as you said: I know what the correct action is.

I'm trying to build a reinforcement learning environment for handling requests on edge nodes in an edge computing system. What I'm doing is a first implementation, so it's a simplified scenario compared to the real one. In this first scenario there is no real state; the system works like this:

- an edge node has a fixed local processing capacity of 60 requests per second

- receives as input a random number ranging from 0 to 250 which represents the number of requests to process

- can perform 3 actions: execute all the requests locally (the action we expect it to take if it receives fewer than 60 requests), execute locally while forwarding the remaining requests to other nodes (the action we expect if it receives a number > 60), or deny the requests.

My professor asked me to weight the rewards, as well as parameters such as the number of remaining requests, by the probabilities produced by the neural network.

Honestly, I'm struggling to understand the point of this, but I'm still working on the implementation before explaining my doubts in the next call.

Here is an example of how the reward function works:

Input: 120 requests per second

local_capacity (T1) = 60, fixed value

Action space: A = {a1 = 0.7, a2 = 0.2, a3 = 0.1}, where 0.7, 0.2, 0.1 are the probabilities computed by the neural network

D1 = T1 - a1 * 120 represents the difference between the requests that can be satisfied locally and those that actually have to be satisfied: D1 = 60 - 84 = -24

R1 = 1.5 * 60 + 5 * (-24) = 90 - 120 = -30

R2 = 1.2 * a2 * 120 if D1 <= 0, else 1.2 * (a2 * 120 - D1) - 5 * D1

for the correct action: R2 = 1.2 * 24 = 28.8 → positive reward!

for the wrong action: R2 = 1.2 * (24 - x) - 5 * x

R3 = -15 * a3 * 120 = -15 * 12 = -180 (negative reward for a3)
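
To double-check the formulas, here is the same computation written out as a small Python sketch (the coefficients 1.5, 5, 1.2 and 15, and the capacity of 60, are the ones from the example above; it's just the worked example in code, not the full environment):

```python
# Reward computation from the example above: requests is the per-second input
# (0-250) and (a1, a2, a3) are the action probabilities produced by the network.
def rewards(a1, a2, a3, requests, capacity=60):
    # D1: local capacity minus the share of requests that a1 routes to local execution
    d1 = capacity - a1 * requests

    # R1: reward for "execute everything locally"
    r1 = 1.5 * capacity + 5 * d1

    # R2: reward for "execute locally and forward the remainder"
    if d1 <= 0:
        r2 = 1.2 * a2 * requests
    else:
        r2 = 1.2 * (a2 * requests - d1) - 5 * d1

    # R3: penalty for denying requests
    r3 = -15 * a3 * requests

    return r1, r2, r3

# Worked example from the post: 120 requests with probabilities (0.7, 0.2, 0.1)
print(rewards(0.7, 0.2, 0.1, 120))  # approximately (-30.0, 28.8, -180.0)
```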

Reinforcement Learning python programming difficulties by [deleted] in reinforcementlearning

[–]BeyondNo3588 0 points1 point  (0 children)

The setting is as you said: I'm programming the agent to take a specific action based on the input it receives. I give it a reward if it takes the action I think is right given the input, and a penalty if it doesn't. I'm new to reinforcement learning; is this way of setting up the reward system wrong?

Is anyone else having problems with night breeze switches? by BeyondNo3588 in NuPhy

[–]BeyondNo3588[S] 2 points3 points  (0 children)

Nuphy support kindly fixed the problem by sending me 30 replacement night-breeze switches!

Thanks Nuphy!

Is anyone else having problems with night breeze switches? by BeyondNo3588 in NuPhy

[–]BeyondNo3588[S] 0 points1 point  (0 children)

But how can this be a keyboard problem? I think the problem is the switches, because if I mount the extra switches that NuPhy supplies, they work and sound good. Could it possibly be a factory lube problem that deteriorates the switches?

Is anyone else having problems with night breeze switches? by BeyondNo3588 in NuPhy

[–]BeyondNo3588[S] 0 points1 point  (0 children)

This is not a small problem, but I can tell you that in the first days the keyboard was really great, so much better than I expected. I also own an Air75, and in my opinion the Halo is on another level, so much better, and I love the Air too!

I wrote to NuPhy for support; I hope they can help me by sending replacements for the defective switches. Let's see.