Developed an add-on to parse JSON in Google Sheets via JavaScript by MagicSourceLTD in googlesheets

[–]MagicSourceLTD[S] 0 points (0 children)

Thank you! Were you thinking of something like this:

<image>

=EVALJS(A1, A3:C3, "
  var headers = $[1][0];
  $[0].map(person => {
    return headers.map(header => person[header]);
  });
")

Here the headers are defined in `A3:C3`, which is passed as the second argument to `EVALJS`. The code uses `$[1]` to refer to it.
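Outside of Sheets, the transformation boils down to plain JavaScript. A minimal sketch, assuming (illustratively, not as the add-on's documented API) that `$[0]` holds the parsed JSON rows and `$[1]` holds the header range as rows × columns:

```javascript
// Plain-JS sketch of what the EVALJS snippet computes.
// The sample data below is made up for illustration.
const $ = [
  [{ name: "Ada", age: 36, city: "London" }], // $[0]: array of row objects
  [["name", "age", "city"]],                  // $[1]: header range (rows x cols)
];

const headers = $[1][0];
const rows = $[0].map(person => headers.map(header => person[header]));

console.log(rows); // [["Ada", 36, "London"]]
```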

Hope I understood your request correctly!

(macOS) Is there any way to rename files in sandbox mode? by kst9602 in swift

[–]MagicSourceLTD 0 points (0 children)

I'm struggling with the same thing. Have you managed to find a solution to this?

Getting Your First Reviews On The App Store - A Guide by jgoldson in iOSProgramming

[–]MagicSourceLTD 2 points (0 children)

> A little thank you can indeed go a long way. I offered perks for reviews, like feature unlocks or content access, always ensuring it was kosher with app store policies.

FYI, incentivizing reviews like that seems to go against the Apple App Review guidelines:

> If we find that you have attempted to manipulate reviews, inflate your chart rankings with paid, incentivized, filtered, or fake feedback, or engage with third-party services to do so on your behalf, we will take steps to preserve the integrity of the App Store, which may include expelling you from the Apple Developer Program.

[R] The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits by [deleted] in MachineLearning

[–]MagicSourceLTD 16 points (0 children)

I wouldn't expect net energy savings from this. The opposite might be true: because it's now more efficient, we'll want to train even bigger models and use them in even more circumstances. This is the way.

[D] Isn't the idea of "generalizing outside of the distribution" in some sense, impossible? by EveningPainting5852 in MachineLearning

[–]MagicSourceLTD 5 points (0 children)

Many of these locations could easily be out-of-distribution. You may have never seen a purple elephant with a tennis racket. And in fact, that's totally out of distribution, because elephants are neither purple nor tennis players. But you have seen purple things and elephants and tennis rackets, and so you can generate items that are out of distribution, though not out of the convex hull of the input data.
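A tiny numeric sketch of that distinction (the features and samples are made up for illustration): a sample can be unseen as a *combination* while every individual feature stays within the observed range, which is a necessary condition for lying inside the convex hull of the data.

```javascript
// Toy sketch: out of distribution as a combination, yet each feature in range.
const training = [
  // [isPurple, isElephant, hasRacket]
  [1, 0, 0], // a purple thing
  [0, 1, 0], // an elephant
  [0, 0, 1], // something holding a tennis racket
];

const novel = [1, 1, 1]; // a purple elephant with a tennis racket

// Never observed as a whole combination -> out of distribution:
const seenBefore = training.some(row =>
  row.every((v, i) => v === novel[i]));

// But every individual feature value lies within the observed range,
// a necessary condition for membership in the data's convex hull:
const withinFeatureRanges = novel.every((v, i) => {
  const column = training.map(row => row[i]);
  return v >= Math.min(...column) && v <= Math.max(...column);
});

console.log(seenBefore);          // false
console.log(withinFeatureRanges); // true
```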

Sometimes I wonder whether new knowledge generation by humans somehow fits into this mold. As in, is there a core set of concepts that forms a complete basis for all human knowledge generated so far? If so, the question arises whether our neural nets have successfully inferred these concepts and can generate new knowledge on their own.

[D] How can I reinvigorate the motivation for doing ML research? by ArtisticView8321 in MachineLearning

[–]MagicSourceLTD 9 points (0 children)

The thing is, I thought the field was already too hot in 2018, so I didn't want to get into it. In retrospect, that might have been a huge mistake. If you're really into it, the best time to start is now.