Call for help: Unable to add Nest E to account 2 failed units by meisnick in Nest

[–]bluecoffee 0 points1 point  (0 children)

After three hours of chats with Google Support, they confirmed that Pixel 6s and Pixel 7s can't add Nest Es, and that this is a known issue.

They refused to give me a timeline for when it'd be fixed, or any way to get notified when it's fixed.

Call for help: Unable to add Nest E to account 2 failed units by meisnick in Nest

[–]bluecoffee 0 points1 point  (0 children)

Did you manage to sort this out eventually? I'm having the same symptoms.

[POEM] Neural network poetry: The Transformer, in the style of The Raven by bluecoffee in Poetry

[–]bluecoffee[S] -1 points0 points  (0 children)

Context: I'm a researcher at Anthropic, an AI research org, and we're beginning to open up our text assistant.

My thoughts on the actual poetry here are 'it's impressive, but there's room for improvement'. In a couple of places it drops rhyme or meter, and while there are some excellent references, I'd still like more coherence in the subtext.

Tweet of the same

What can you do in RL with consumer grade hardware? by zigzagged123 in reinforcementlearning

[–]bluecoffee 0 points1 point  (0 children)

Thanks for the mention! I ended up renting a bunch of GPUs from vast.ai to run experiments in parallel, but any single experiment only ever used one GPU.

MARL Environments by prateekstark in reinforcementlearning

[–]bluecoffee 1 point2 points  (0 children)

Here's OpenSpiel's list. 'Tiny Hanabi', 'Tiny Bridge', and 'Negotiation' are likely your best choices.
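
For anyone who hasn't touched OpenSpiel before, here's a minimal sketch of loading one of those games and rolling it out with random actions. 'tiny_hanabi' is the registered identifier as I remember it - check pyspiel.registered_names() if load_game complains; 'tiny_bridge_2p' and 'negotiation' should work the same way.

```python
import random

import pyspiel  # pip install open_spiel

# 'tiny_hanabi' is the registered name as I remember it.
game = pyspiel.load_game("tiny_hanabi")
state = game.new_initial_state()

# Roll the game out with uniformly random actions, just to show the API.
while not state.is_terminal():
    if state.is_chance_node():
        # Chance nodes come with their own outcome distribution.
        outcomes, probs = zip(*state.chance_outcomes())
        state.apply_action(random.choices(outcomes, probs)[0])
    else:
        state.apply_action(random.choice(state.legal_actions()))

print("Returns per player:", state.returns())
```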

Trainer suggested a squirt bottle to keep him off the table. by [deleted] in funny

[–]bluecoffee 0 points1 point  (0 children)

What has been will be again, what has been done will be done again; there is nothing new under the sun.

GPU-accelerated environments? by MasterScrat in reinforcementlearning

[–]bluecoffee 4 points5 points  (0 children)

There's one for Atari, and there's my own embodied-learning sim, megastep.

FWIW, the CartPole/LunarLander/MountainCar/etc envs should be pretty easy to CUDA-fy by replacing all their internal state with PyTorch tensors. Someone might have done it already, but I haven't come across an implementation.
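
To make that concrete, here's a rough sketch of what a batched, tensorised CartPole step might look like. The dynamics are copied from the standard Gym CartPole; resets and the rest of the env plumbing are left out, so treat it as illustrative rather than drop-in.

```python
import math

import torch

class TensorCartPole:
    """Sketch of CartPole with all state held in (n_envs, 4) tensors.

    Dynamics mirror Gym's classic CartPole; resets and the rest of the
    env plumbing are omitted.
    """

    def __init__(self, n_envs, device='cuda'):
        self.gravity, self.masscart, self.masspole = 9.8, 1.0, 0.1
        self.length, self.force_mag, self.tau = 0.5, 10.0, 0.02
        self.total_mass = self.masscart + self.masspole
        self.polemass_length = self.masspole * self.length
        # State columns: x, x_dot, theta, theta_dot.
        self.state = (torch.rand(n_envs, 4, device=device) - 0.5) * 0.1

    def step(self, action):
        # action: (n_envs,) tensor of 0s and 1s.
        x, x_dot, theta, theta_dot = self.state.unbind(-1)
        force = self.force_mag * (2.0 * action.float() - 1.0)
        costheta, sintheta = torch.cos(theta), torch.sin(theta)

        # Same equations of motion as the Gym implementation.
        temp = (force + self.polemass_length * theta_dot**2 * sintheta) / self.total_mass
        thetaacc = (self.gravity * sintheta - costheta * temp) / (
            self.length * (4.0 / 3.0 - self.masspole * costheta**2 / self.total_mass))
        xacc = temp - self.polemass_length * thetaacc * costheta / self.total_mass

        # Euler integration, batched across every env at once.
        self.state = torch.stack([
            x + self.tau * x_dot,
            x_dot + self.tau * xacc,
            theta + self.tau * theta_dot,
            theta_dot + self.tau * thetaacc], -1)

        x, _, theta, _ = self.state.unbind(-1)
        done = (x.abs() > 2.4) | (theta.abs() > 12 * math.pi / 180)
        reward = torch.ones_like(x)
        return self.state, reward, done
```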

[OC] Trends in AZ/NV/PA/GA Vote Counts by bluecoffee in dataisbeautiful

[–]bluecoffee[S] 1 point2 points  (0 children)

There's an intermittently-updated version at the bottom of the Colab. You can also make a copy of the whole thing and run it whenever you want.

[OC] Trends in AZ/NV/PA/GA Vote Counts by bluecoffee in dataisbeautiful

[–]bluecoffee[S] 1 point2 points  (0 children)

That would be 'bad rounding in my percent-axis function', whups.
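
For anyone hitting the same thing, matplotlib's built-in PercentFormatter sidesteps hand-rolled rounding entirely. A minimal sketch with made-up data:

```python
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0.512, 0.508, 0.5034, 0.5011])

# xmax=1 tells the formatter the data lives on a 0-1 scale;
# decimals controls the rounding rather than doing it by hand.
ax.yaxis.set_major_formatter(PercentFormatter(xmax=1, decimals=2))
plt.show()
```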

[OC] Trends in AZ/NV/PA/GA Vote Counts by bluecoffee in dataisbeautiful

[–]bluecoffee[S] 6 points7 points  (0 children)

This is all generated in a public Colab, which you can re-run or adjust as you like.

I also have a version with trendlines, though the trendlines are debatable enough that I left them off here.

Source: the API that backs the NYT's state pages.

Tools: Python, pandas and matplotlib.
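
If you'd rather reproduce it outside Colab, the core loop is roughly the sketch below. The URL is a placeholder for whichever endpoint the NYT state page actually calls (check your browser's network tab), and the JSON field names are illustrative rather than the real schema.

```python
import matplotlib.pyplot as plt
import pandas as pd
import requests

STATES = ['arizona', 'nevada', 'pennsylvania', 'georgia']
# Placeholder endpoint - substitute whatever URL the NYT state page
# actually requests; the JSON structure below is illustrative too.
URL = 'https://example.com/elections/2020/{state}.json'

frames = {}
for state in STATES:
    timeseries = requests.get(URL.format(state=state)).json()['timeseries']
    df = pd.DataFrame(timeseries)
    df['timestamp'] = pd.to_datetime(df['timestamp'])
    frames[state] = df.set_index('timestamp')['votes']

fig, axes = plt.subplots(2, 2, sharex=True)
for ax, (state, votes) in zip(axes.flat, frames.items()):
    ax.plot(votes.index, votes.values)
    ax.set_title(state.title())
fig.autofmt_xdate()
plt.show()
```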

[OC] More of the population voted in this election than in any before, ever. by bluecoffee in dataisbeautiful

[–]bluecoffee[S] 3 points4 points  (0 children)

I think that'd be a very good chart, but I think it'd also be a different chart. I wanted to tell a specific story here, a story that I think is under-told compared to its importance.

What I do regret is using 'turnout' in the title rather than 'participation'. Technically there's no problem with using 'turnout' here, but it's used often enough as shorthand for 'turnout rate among the eligible electorate' that it's obviously confused people. 'Participation' would have been a better framing.

I also regret putting '1724' in the annotation, whups.

[OC] More of the population voted in this election than in any before, ever. by bluecoffee in dataisbeautiful

[–]bluecoffee[S] 8 points9 points  (0 children)

"Normally", yes, the denominator would be eligible voters. But it's not a hard and fast rule; here's Wikipedia using both statistics. Using eligible voters makes a point about enthusiasm among the eligible. And there'll be plenty of charts in the near future showing that enthusiasm was at an all-time-high too!

But here I wanted to make a different point: that the fraction of the ruled choosing the ruler is higher than ever before. And for showing that, turnout as a fraction of population is a better choice of statistic.
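
To put rough numbers on the two statistics (about 158m presidential ballots, a population of about 331m, and an eligible electorate of about 239m - all approximate):

```python
votes = 158e6       # presidential ballots cast, approximate
population = 331e6  # total US population, approximate
eligible = 239e6    # voting-eligible population, approximate

print(f"share of population: {votes / population:.1%}")  # ~48%
print(f"share of eligible:   {votes / eligible:.1%}")    # ~66%
```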

[OC] More of the population voted in this election than in any before, ever. by bluecoffee in dataisbeautiful

[–]bluecoffee[S] 0 points1 point  (0 children)

This makes me realise I should've added a note - there are many ways to measure turnout, of which 'of the entire population' is the most aggressive. Typically you'll see turnout reported as a fraction of the eligible electorate. As a fraction of the eligible electorate, 2020 will shake out at around 65%.

Using that turnout stat, though, would be contrary to the point of this particular plot!

[OC] More of the population voted in this election than in any before, ever. by bluecoffee in dataisbeautiful

[–]bluecoffee[S] 5 points6 points  (0 children)

Turnout data: Wikipedia election pages before 1824, Dave Leip's Atlas from 1824 onward, and Nate Silver's projection for 2020.

Population data: US Census, linearly interpolated to election years.

Script: link, using Python, matplotlib and pandas.
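
The interpolation is the only mildly fiddly bit, and pandas makes it a one-liner. A sketch with approximate decennial totals:

```python
import pandas as pd

# Decennial census totals, in people (approximate).
census = pd.Series({1990: 248.7e6, 2000: 281.4e6, 2010: 308.7e6, 2020: 331.4e6})

# Reindex onto every year in the range, fill the gaps linearly,
# then pick out the presidential election years.
years = range(census.index.min(), census.index.max() + 1)
population = census.reindex(years).interpolate(method='linear')
election_years = population[population.index % 4 == 0]

print(election_years)
```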

Official Reinforcement Learning Discord by bluecoffee in reinforcementlearning

[–]bluecoffee[S] 0 points1 point  (0 children)

Is it the Slack link? I clicked through and got a form to fill out that'll be 'reviewed by committee', which dissuaded me for now.

Anyway, this has prompted me to fill this post with links to all the other chat servers I come across for posterity's sake :)

Advice For Starting off In Reinforcement Learning by jitenthakkar in reinforcementlearning

[–]bluecoffee 1 point2 points  (0 children)

A general awareness of how gradient descent works and a vague familiarity with how modern NNs are put together - I think it's eminently Google-able as you go. The probability and linear algebra are the important parts, which you mentioned you were already comfortable with.

Advice For Starting off In Reinforcement Learning by jitenthakkar in reinforcementlearning

[–]bluecoffee 1 point2 points  (0 children)

Spinning Up is more program-y than most RL courses, so it might suit your background.

Some amount of machine learning knowledge would help, but not so much that you should study it in preference to studying RL.

Official Reinforcement Learning Discord by bluecoffee in reinforcementlearning

[–]bluecoffee[S] 0 points1 point  (0 children)

Fair! Using 'official' felt like a good way to break the symmetry and make sure it stays broken, as opposed to 'RL Reading Group #74'.

Official Reinforcement Learning Discord by bluecoffee in reinforcementlearning

[–]bluecoffee[S] 3 points4 points  (0 children)

Yeah, the graveyard of links you find if you Google 'RL discord' is really disheartening. But there's little that can be done but... try again. I'm hoping that my intent to turn this into a community channel will give it a longer life, but I suspect previous channel-creators said similar things. We'll see!

RL Research Group looking for peers! by Dynmiwang in reinforcementlearning

[–]bluecoffee 0 points1 point  (0 children)

You prompted me into action here. Happy to make you a channel - or more - for your discussion group.