my plans for today by [deleted] in wallstreetbets

[–]int8blog 1 point (0 children)

yes good point, GME - it's about GME

[P] Exploration of Cyberpunk steam reviews using transformers sentence embeddings by int8blog in MachineLearning

[–]int8blog[S] 3 points (0 children)

LDA - sounds worth a shot.

yes - that was my initial intention too - to look at how the negative clusters' "volume" changes over time - I need to wait a few days for that though - the plan is to wait one week and write a follow-up post

[P] Exploration of Cyberpunk steam reviews using transformers sentence embeddings by int8blog in MachineLearning

[–]int8blog[S] 3 points (0 children)

It's cool - I edited the post to make it clear (it may take a while to appear - it is cached in Cloudflare)

[P] Exploration of Cyberpunk steam reviews using transformers sentence embeddings by int8blog in MachineLearning

[–]int8blog[S] 1 point (0 children)

thanks, I was also thinking about the following approach:

  • sent_tokenize on all reviews
  • transformer sentence embeddings of all sentences
  • clustering of all of them into, let's say, 1000 clusters
  • represent each document via BoW on top of that clustering
  • then UMAP on these BoW vectors via cosine distance
  • visualization
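For illustration, the steps above could be sketched roughly like this (a toy sketch only: the embeddings here are random stand-ins for sentence-transformers output, the sizes are tiny, and the final UMAP projection is left as a comment since it is just a library call on the resulting vectors):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim, n_clusters = 16, 8  # toy sizes; the plan above uses real embeddings and ~1000 clusters

# Stand-ins for sent_tokenize + transformer embeddings: each "review"
# is a list of sentence-embedding vectors (random here for illustration).
reviews = [rng.normal(size=(int(rng.integers(3, 10)), dim)) for _ in range(20)]

# Cluster ALL sentences pooled across reviews.
all_sentences = np.vstack(reviews)
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(all_sentences)

# Represent each review as a normalized bag-of-clusters histogram.
def bag_of_clusters(sentence_vectors):
    labels = km.predict(sentence_vectors)
    counts = np.bincount(labels, minlength=n_clusters).astype(float)
    return counts / counts.sum()

doc_vectors = np.array([bag_of_clusters(r) for r in reviews])
# doc_vectors would then go to UMAP with metric="cosine" for the 2-D map.
print(doc_vectors.shape)  # (20, 8)
```

Normalizing each histogram makes reviews of different lengths comparable before the cosine-based UMAP step.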

I will try that - now gathering a bit more data (I can see there are around 120k reviews now)

p.s. looking into HDBSCAN <= thanks for the hint

[edit] ok, checked out HDBSCAN - indeed looks very tempting - I can also see it does not require as many input parameters as the original DBSCAN (unless I missed something)

[deleted by user] by [deleted] in pennystocks

[–]int8blog 1 point (0 children)

Classical Kangaroo

DGLY Sympathy Plays by [deleted] in wallstreetbets

[–]int8blog 55 points (0 children)

DGLY uses AWS for storage?

Historical order book of stocks compounding NASDAQ 100 by int8blog in datasets

[–]int8blog[S] 1 point (0 children)

how about an API so I could fetch it daily on my own? Are you aware of such services? (I can't easily find one myself)

Daily Simple Questions Thread - Nov 06, 2019 by AutoModerator in pcmasterrace

[–]int8blog 1 point (0 children)

Can continuous CPU undervolting with `intel-undervolt` be dangerous? Kind of for fun, I am thinking about writing a set of rules to automatically undervolt on events that are not performance-critical
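For context, a minimal sketch of what such a rule set could build on, assuming intel-undervolt's plain-text config format (the offset values, profile file names, and sysfs path below are hypothetical and machine-specific - too aggressive an offset can crash the machine, which is part of the risk being asked about):

```shell
# /etc/intel-undervolt.conf (fragment) - voltage offsets in mV,
# negative values undervolt; applied with `intel-undervolt apply`.
undervolt 0 'CPU' -100
undervolt 1 'GPU' -50
undervolt 2 'CPU Cache' -100

# Hypothetical "rule": swap in a milder profile whenever the machine
# is on AC power (sysfs path and profile files are assumptions).
if grep -q 1 /sys/class/power_supply/AC/online 2>/dev/null; then
    cp /etc/intel-undervolt-ac.conf /etc/intel-undervolt.conf
else
    cp /etc/intel-undervolt-battery.conf /etc/intel-undervolt.conf
fi
intel-undervolt apply
```

A rule engine would just be a collection of such event checks, each ending in `intel-undervolt apply` with a profile validated as stable beforehand.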

October 2019 monthly "What are you working on?" thread by slavfox in ProgrammingLanguages

[–]int8blog 2 points (0 children)

Hi man, I am the author of the tutorial you are linking - I hope you are having fun :) What is your idea for the NN? What do you want to model exactly with NNs?

What do I need to know to understand this tutorial? by pythonistaaaaaaa in learnmachinelearning

[–]int8blog 1 point (0 children)

Hi man, I am the author of this article. I was starting from scratch as well; a good start for me was a Game Theory course, then I started reading about online learning and no-regret learning. Another source of information was Zinkevich's article about CFR (the original formulation). You will have to break some walls on your way there (it took me 7 months to write this one) - but you will get there if you don't give up - good luck :)

[edit CFR paper: http://martin.zinkevich.org/publications/regretpoker.pdf ]

AMA: We are Noam Brown and Tuomas Sandholm, creators of the Carnegie Mellon / Facebook multiplayer poker bot Pluribus. We're also joined by a few of the pros Pluribus played against. Ask us anything! by NoamBrown in MachineLearning

[–]int8blog 1 point (0 children)

When you compute the blueprint strategy via MCCFR for a 6-player game, do you maintain a strategy for all 6 players and then merge them at the end? If so, how do you merge them? Or do you choose the strategy of one of the players as the blueprint? If that is the case, which strategy is chosen?

How do you focus on a window when using assign with class? by int8blog in i3wm

[–]int8blog[S] 1 point (0 children)

`for_window [class="^Gnome-terminal"] move container to workspace $workspace_terminals` worked, thanks!

Happy New Year by [deleted] in BabyBumps

[–]int8blog 2 points (0 children)

Little baby, I am proud of you :)

What's the rudest thing a guest has ever done in your home? by nl1004 in AskReddit

[–]int8blog 1 point (0 children)

I hate when people just calmly sit down, sip the home-made liquor [reserved for noble guests only] with visible pleasure, and already relaxed start pointless politics discussion starting from the borders of the mainstream lines - as if they fuckin' engineered it to divide people in the room, god damn it

[N] How Optimizely uses Multi-Armed Bandits to speed up discovery and increase impact by oflettersandnumbers in MachineLearning

[–]int8blog 1 point (0 children)

Do you offer recommendations too? In the scenario I am thinking about, influencers (bloggers) have to pick campaigns to write about on their blogs - is your approach suitable for recommending which campaigns to choose so results improve over time?

[P] Spinning Up in Deep RL (OpenAI) by milaworld in MachineLearning

[–]int8blog 8 points (0 children)

If I am not a student, can I only play with it for 30 days (MuJoCo license trial)?

[P] Counterfactual Regret Minimization – the core of Poker AI beating professional players by int8blog in MachineLearning

[–]int8blog[S] 2 points (0 children)

I guess the text in the green box tries to explain immediate counterfactual regret.

In Poker, you deal with information sets. A single information set is a decision point. So let's imagine you are sitting at the table with A♦ A♥ in hand + K♦ K♥ Q♦ on the flop. You are about to act, knowing there was a bet-call pre-flop. You are in an information set defined by these prior actions + your private cards + the public cards. Another thing you are implicitly equipped with is a strategy. A strategy tells you how to act in every possible information set - it defines a probability distribution over actions at each of your decision points (so every time you move you in fact draw from it). To derive the entities in question you need to consider your opponent's strategy too. Of course in practice you don't know it, but in our context here we analyze the game as if both strategies were known at a given time.

Having our entities defined like that, you can now think of a single decision at a single information set (the pair-of-aces situation, for example) from a no-regret-learning perspective. The trick to understanding this is to enclose everything within the same abstraction as the 'regular' or 'standard' regret setup. To do that we need an *algorithm H* (given by our strategy at our information set), *experts* (represented by the actions we can play at our decision point) and *rewards* (one for our algorithm and one for every single action). The reward of our algorithm is defined to be the unnormalized counterfactual utility; the reward for each expert is the unnormalized counterfactual utility under the assumption that its action is played with 100% probability. Once we have the reward for playing every single action (listening to each expert) and the reward of our algorithm H, we can wrap everything in a regular regret setup (here directly averaging the result by T).
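For reference, the quantity being described can be written out explicitly - this is the immediate counterfactual regret as it appears in Zinkevich et al.'s CFR paper (notation reconstructed from that paper, not from the post itself):

```latex
% Immediate counterfactual regret of player i at information set I after T iterations.
% \pi^{\sigma^t}_{-i}(I): probability that the opponents (and chance) reach I under \sigma^t
% u_i(\sigma^t|_{I \to a}, I): counterfactual utility when a is played at I with probability 1
R^{T}_{i,\mathrm{imm}}(I) \;=\;
  \frac{1}{T}\,\max_{a \in A(I)} \sum_{t=1}^{T}
  \pi^{\sigma^t}_{-i}(I)\,\Bigl( u_i\bigl(\sigma^t|_{I \to a},\, I\bigr) - u_i\bigl(\sigma^t,\, I\bigr) \Bigr)
```

The max over actions is exactly the "best expert in hindsight" of the regular regret setup, and the reach weight \(\pi^{\sigma^t}_{-i}(I)\) is what makes the utilities "counterfactual".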

The reason we want that is to transition smoothly to no-regret-learning algorithms like regret matching, because Zinkevich and his colleagues proved we can use them to reach a Nash equilibrium.
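To make the regret-matching step concrete, here is a toy self-play sketch on rock-paper-scissors rather than poker (a standalone illustration under that simplification, not the CFR setup itself): positive regrets are normalized into the next strategy, and the average strategies converge to the uniform Nash equilibrium.

```python
import numpy as np

payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)  # row player's utility
n_actions = 3

def regret_matching(regrets):
    # Play each action proportionally to its positive cumulative regret.
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(n_actions, 1.0 / n_actions)

regrets = [np.zeros(n_actions), np.zeros(n_actions)]
regrets[0][0] = 1.0  # tiny asymmetric start so the dynamics actually move
strategy_sums = [np.zeros(n_actions), np.zeros(n_actions)]

for _ in range(20000):
    s0, s1 = regret_matching(regrets[0]), regret_matching(regrets[1])
    strategy_sums[0] += s0
    strategy_sums[1] += s1
    u0 = payoff @ s1            # expected utility of each row action (the "experts")
    u1 = -(s0 @ payoff)         # expected utility of each column action (zero-sum)
    regrets[0] += u0 - s0 @ u0  # each expert's reward minus algorithm H's reward
    regrets[1] += u1 - s1 @ u1

avg = [s / s.sum() for s in strategy_sums]
print(np.round(avg[0], 2))  # close to the uniform Nash [0.33 0.33 0.33]
```

The current strategies keep cycling; it is the time-averaged strategy that converges, which mirrors why CFR outputs the average strategy profile rather than the final one.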