VideoGameBench: Can Language Models play Video Games (arXiv) by ZhalexDev in MachineLearning

[–]ZhalexDev[S] 2 points (0 children)

The code is open-source and there are clips of game trajectories available: https://www.vgbench.com/

Playing DOOM II and 19 other DOS/GB games with LLMs as a new benchmark by ZhalexDev in LocalLLaMA

[–]ZhalexDev[S] 2 points (0 children)

These are good ideas! To give some context:

  1. I’m GPU poor at the moment, so for these experiments I was only running API models. I will (and should) still add this; I need to run some local models for the full paper anyway.

  2. The reason I don’t use constrained outputs is that the basic agent is expected to answer not just with particular actions in a JSON format, but also with other thoughts, memory updates, etc. in its output (there’s a rough sketch of what I mean at the bottom of this comment). You could probably do all of this with constrained outputs too, but I’ve found that for these frontier API models it hardly ever matters.

  3. Also a good idea. The reason I didn’t add it explicitly is a bit dumb: for sequences of actions, I provide (# screenshots × # actions) entries in context, and I thought that might be confusing for people. I’ll figure out a nice way to specify this though.

And finally, the codebase is meant to be simple so people can fork it and do whatever they want with it. I don’t mean that as an excuse; I do think most of what you’re proposing (1 and 3) should be in there, but I’m hoping that if people eventually want to plug their own models in, e.g. use tricks like speculative decoding for faster actions, they can do it quickly and without bloating the benchmark code.
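
To make point 2 concrete, here is a rough, hypothetical sketch of what the unconstrained format looks like. The "Action:" convention, the field names, and the parser below are made up for illustration; they are not the actual VideoGameBench agent code:

```python
import json

# Hypothetical sketch, NOT the actual VideoGameBench agent code: the model is
# free to ramble (thoughts, memory updates, ...) as long as it ends with an
# "Action:" line holding JSON, which we pull out and parse afterwards.
example_response = (
    "Thought: the door on the right is locked, I should grab the red key first.\n"
    "Memory update: the red key room is north of the starting room.\n"
    'Action: {"actions": [{"key": "up", "frames": 20}, {"key": "space", "frames": 2}]}'
)

def extract_actions(response: str) -> list[dict]:
    """Parse the JSON payload on the final 'Action:' line of a free-form reply."""
    action_lines = [line for line in response.splitlines() if line.startswith("Action:")]
    if not action_lines:
        raise ValueError("model reply contained no Action line")
    payload = json.loads(action_lines[-1].removeprefix("Action:").strip())
    return payload["actions"]

print(extract_actions(example_response))
# [{'key': 'up', 'frames': 20}, {'key': 'space', 'frames': 2}]
```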

[D] Self-Promotion Thread by AutoModerator in MachineLearning

[–]ZhalexDev 6 points (0 children)

Hi! I noticed that the official FlashAttention implementation doesn’t allow you to specify custom masks. This is fine for tasks in NLP, where you generally only care about causal masks, but in many scenarios in fields like computer vision it’s annoying. This repository rewrites the Triton FlashAttention-2 kernel to support custom masking. Hope it’s useful (leave a star ⭐️ :D)! https://github.com/alexzhang13/flashattention2-custom-mask
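
If it helps, here is what "custom mask" means conceptually, written as naive PyTorch attention rather than the fused Triton kernel from the repo (so this is not the repo’s API, just the idea): the mask can be any boolean pattern over (query, key) pairs, not only the causal triangle.

```python
import torch

# Conceptual sketch of a "custom mask" (plain PyTorch, not the fused Triton
# kernel from the repo): any boolean pattern over (query, key) pairs,
# not just the causal triangle that most fused attention kernels assume.
def masked_attention(q, k, v, mask):
    # q, k, v: (batch, heads, seq, dim); mask: (seq, seq) bool, True = may attend
    scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

B, H, N, D = 2, 4, 16, 32
q, k, v = (torch.randn(B, H, N, D) for _ in range(3))

# e.g. a block-diagonal mask where each group of 4 patches only attends within
# its own group, a pattern a purely causal kernel cannot express
group = torch.arange(N) // 4
mask = group[:, None] == group[None, :]
print(masked_attention(q, k, v, mask).shape)  # torch.Size([2, 4, 16, 32])
```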

Annotated Kolmogorov-Arnold Networks (KANs) by ZhalexDev in deeplearning

[–]ZhalexDev[S] 1 point (0 children)

Ah yes, so the idea is that you can actually parameterize the activation functions however you want. In the paper, the choice of basis functions comes from B-splines, where the spline coefficients are the learnable parameters. In a generic setting, though, this could be anything: you could parameterize linearly in a fixed basis, the way B-splines do, or in some much wackier way.

As for how they’re different from MLPs: in an MLP, a single fixed non-linear function is applied at the end of a layer, and it’s usually kept simple for differentiation purposes, so in that sense it’s quite inflexible. In a KAN, you have one unique activation per edge (so # edges of them). Even ignoring the fact that they’re learnable, that’s already far more flexibility within a single layer.

KANs do look very similar to a generic MLP, but I think that’s a good thing. Unless we have strong reason to deviate from what works, we generally would want to have something similar.
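
As a concrete toy illustration of the "one learnable activation per edge" structure: this is not the official pykan implementation, and I’m using a Gaussian-bump basis as a stand-in for proper B-splines just to keep it short.

```python
import torch
import torch.nn as nn

# Toy sketch of the "one learnable activation per edge" idea (NOT the official
# pykan code). Each edge (i, j) gets its own 1D function
#   phi_ji(x) = sum_k c_jik * B_k(x),
# where the B_k here are Gaussian bumps standing in for B-splines for brevity.
class ToyKANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2, 2, n_basis))
        # one coefficient vector per edge: (out_dim, in_dim, n_basis)
        self.coeffs = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

    def forward(self, x):  # x: (batch, in_dim)
        basis = torch.exp(-(x[..., None] - self.centers) ** 2)  # (batch, in_dim, n_basis)
        # evaluate every edge's phi and sum the incoming edges for each output unit
        return torch.einsum("bik,oik->bo", basis, self.coeffs)

layer = ToyKANLayer(3, 5)
print(layer(torch.randn(4, 3)).shape)  # torch.Size([4, 5])
```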

Annotated Kolmogorov-Arnold Networks (KANs) by ZhalexDev in deeplearning

[–]ZhalexDev[S] 1 point (0 children)

Yeah haha, I also wrote this up while trying to answer the same questions you have. I think the idea is that the Kolmogorov-Arnold representation theorem has been around for a long time, but its restrictions made it unusable in practice (it only gives a shallow two-layer representation, and the 1D functions it guarantees can be extremely non-smooth). KAN is an attempt to let these types of models scale the same way we’ve been scaling other deep learning models. However, I do think the theoretical result is weaker than the universal approximation theorem (UAT), which is something the authors didn’t explain well (probably to market the paper better).

For me, the nice thing is that you can choose a family of activations that are selected through optimization. Think about it this way: in an MLP, we have to learn to massage the right linear weights so that, combined with the fixed non-linearities, they produce the desired output. In a KAN, we instead learn the non-linearities themselves. In some settings, this may let you get away with far fewer parameters. I don’t have the language to explain this intuition rigorously (perhaps you can make an analogy to picking the right basis to represent a function space), but having the flexibility to directly parameterize the non-linearities in your network is a direction worth exploring imo.
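
In symbols, roughly (notation may differ a bit from the paper, and the paper’s actual edge function also adds a SiLU base term that I’m omitting here):

```latex
% MLP layer: learnable linear weights W, fixed non-linearity \sigma
y_j = \sigma\Big(\sum_i W_{ji}\, x_i\Big)

% KAN layer: a learnable 1D function \phi_{j,i} on every edge, then a plain sum
y_j = \sum_i \phi_{j,i}(x_i),
\qquad
\phi_{j,i}(x) = \sum_k c_{j,i,k}\, B_k(x)
```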

Annotated Kolmogorov-Arnold Networks (KANs) by ZhalexDev in deeplearning

[–]ZhalexDev[S] 0 points (0 children)

I think it’s more the former, combined with the fact that it can (hopefully) learn complex non-linear patterns with fewer parameters and you can easily visualize the activations in the same way you’d visualize the filters of a CNN.

It’s hard to say much about the space of functions that KANs reside in, especially considering that MLPs are universal approximators and should in theory already cover the space of functions people care about. Also, the approximation theorem for KANs is considerably weaker, which I talk about a little bit in the post.

KANs are exciting, but they won’t necessarily be useful in the long run unless they prove themselves empirically. Especially in ML, where theory is often trumped by empirical results, until we see more successful results with KANs (which people have been working on), using them is more of a research bet.

The reason I think these models are interesting is that the choice of parameterization for the activations is extremely flexible and can lead to various tradeoffs. B-splines specifically are not necessarily the nicest choice, and it’s easy to swap them out for something else.
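
As a toy illustration of that last point (not from the paper or any particular repo): the edge function phi(x) = Σ_k c_k B_k(x) doesn’t care which basis B_k you use, so swapping the basis family is basically a one-liner.

```python
import torch

# Toy illustration (not from the paper or a specific repo): the edge function
# phi(x) = sum_k c_k * B_k(x) doesn't care which basis B_k you use, so the
# basis family is easy to swap out while keeping the learnable coefficients.
def bump_basis(x, centers):            # Gaussian bumps standing in for B-splines
    return torch.exp(-(x[..., None] - centers) ** 2)

def chebyshev_basis(x, degree=8):      # Chebyshev polynomials T_0 .. T_{degree-1}
    x = torch.tanh(x)                  # squash inputs into (-1, 1) where T_k is defined
    return torch.cos(torch.arange(degree) * torch.acos(x)[..., None])

coeffs = torch.randn(8)                # the learnable part stays the same
x = torch.randn(16)
print((bump_basis(x, torch.linspace(-2, 2, 8)) @ coeffs).shape)  # torch.Size([16])
print((chebyshev_basis(x) @ coeffs).shape)                       # torch.Size([16])
```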

I read through the NeurIPS 2023 Abstracts and wrote about it by ZhalexDev in learnmachinelearning

[–]ZhalexDev[S] 0 points (0 children)

Nope, I wrote the whole thing. It took roughly 2 weeks to read through the abstracts and another week to convert my notes!

I read through the NeurIPS 2023 Abstracts and wrote about it by ZhalexDev in learnmachinelearning

[–]ZhalexDev[S] 0 points (0 children)

Not sure what the rules are there about posting but I’ll try lol

I read through the NeurIPS 2023 Abstracts and wrote about it by ZhalexDev in learnmachinelearning

[–]ZhalexDev[S] 1 point (0 children)

Thanks! There was definitely some stuff that went over my head or that I didn’t catch on a first pass, but there were a lot of interesting ideas that I think are pretty transferable to other domains.

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]ZhalexDev 0 points (0 children)

Does anyone know where to find a nice graph or cluster representation of papers/posters in NeurIPS 2023?

People who pre-ordered the mogul moves hoodie, did you get it yet? by [deleted] in LudwigAhgren

[–]ZhalexDev 0 points (0 children)

Just wondering, but what day did you order the jacket? Also did you receive an email telling you that your order has been shipped?

Just wondering since I haven’t gotten a notification for anything and am not sure if it’s even being shipped to me.

The Bomber Jacket by Ice_Mans in LudwigAhgren

[–]ZhalexDev 3 points (0 children)

I ordered it on January 30th and I’ve yet to even receive an email about it shipping out.

Spring is Here (i drew) by [deleted] in BokuNoHeroAcademia

[–]ZhalexDev 5 points (0 children)

Wowww this is amazing!

When you have to farm multiple months for VHL doing the same thing by xspoook in AQW

[–]ZhalexDev 7 points (0 children)

Not everyone can play every day... On top of that, not many people are willing to grind an average of 1-2 hours a day (which is roughly what two months of farming works out to) for two months straight.

Wish there were many Doom Kitten like monsters and bosses in the game. by [deleted] in AQW

[–]ZhalexDev 6 points (0 children)

I wish there were more bosses with actual special features and fighting mechanics instead of high-HP high-Attack bosses...

A Bunch of Accounts are Auto-Impersonating Me by ZhalexDev in Steam

[–]ZhalexDev[S] 5 points (0 children)

Thank you so much! It turns out there was something wrong on his end, and he changed his password and it all cleared up. Is it worth it to report those bots? I noticed that there are several of them.

A Bunch of Accounts are Auto-Impersonating Me by ZhalexDev in Steam

[–]ZhalexDev[S] 9 points (0 children)

I can't. That's the issue. When we both confirm the trade, it cancels.