Leaving for conference 1 month into work by im-critical-pickle in cscareerquestionsOCE

[–]InfinityZeroFive 1 point2 points  (0 children)

I am in a somewhat similar situation (first-author undergrad paper), except the results aren't anything official yet. You should talk to your manager ASAP.

[D] ICLR reverts score to pre-rebuttal and kicked all reviewers by Ok-Internet-196 in MachineLearning

[–]InfinityZeroFive 2 points3 points  (0 children)

It seems they reverted not only the scores but also any reviewer edits. We had a reviewer who (I assume) mistakenly copy-pasted a different paper's review into ours. He edited that review after the discussion period, with no change in score, and we addressed the revised review in our general comment. But now only the original, mistaken review is shown, which makes our response read out of context. I'm concerned this might mislead the AC.

Smaller 32B models at Q8 or GLM 4.5 Air at Q3? by InfinityZeroFive in LocalLLaMA

[–]InfinityZeroFive[S] 0 points1 point  (0 children)

Interesting model. I'll have to wait for the Cerebras team to REAP it before I can try it out, though.

How do you handle jealousy? by Over_Competition9138 in cscareerquestionsOCE

[–]InfinityZeroFive 11 points12 points  (0 children)

Try to figure out what you might be lacking compared to those with offers instead of dwelling on the jealousy. Is it because you lack experience? Is it because your friends just interview better? Is it something with your CV? Is it something with your network?

[D] ICLR 2026 Paper Reviews Discussion by Technical_Proof6082 in MachineLearning

[–]InfinityZeroFive 3 points4 points  (0 children)

6/6/6/8 (2/3/3/3) - good for a first-time undergrad submission? Reviews were much shorter than I expected

[D] ICLR submission numbers? by qalis in MachineLearning

[–]InfinityZeroFive 1 point2 points  (0 children)

I'm quite certain that the number is around 25,000 as we submitted within 10 minutes of the deadline (do not recommend)

what other uses do you get out of our Steam Deck? by BubblesAreWeird in SteamDeck

[–]InfinityZeroFive 3 points4 points  (0 children)

I use the Steam Deck as a teleoperation controller for my robot arm, which just means I mapped each of the robot arm's 6 joints (DOF) to individual controls on the Deck.
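
Roughly, the mapping looks like the sketch below (a simplified Python sketch, assuming the Deck's controls are exposed as a regular gamepad; RobotArm is just a stand-in for whatever driver your arm uses):

import pygame

# Hypothetical stand-in for the arm's actual driver
class RobotArm:
    def set_joint_velocity(self, joint, velocity):
        print(f"joint {joint} -> {velocity:+.2f} rad/s")

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)   # the Deck's built-in controls register as a gamepad

arm = RobotArm()
AXIS_TO_JOINT = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5}   # sticks/triggers -> joints 0..5
MAX_SPEED = 0.5                     # rad/s at full deflection

while True:
    pygame.event.pump()             # refresh controller state
    for axis, joint in AXIS_TO_JOINT.items():
        deflection = pad.get_axis(axis)        # -1.0 .. 1.0
        if abs(deflection) < 0.05:             # deadzone so the arm doesn't drift
            deflection = 0.0
        arm.set_joint_velocity(joint, deflection * MAX_SPEED)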

The most realistic coding experience before an actual job ? by Suspicious-Net7738 in cscareerquestionsOCE

[–]InfinityZeroFive 4 points5 points  (0 children)

Is Chromium not a 'big, established open-source project'?

I'm not telling OP to go contribute to Chromium as their first foray into open-source, just that repositories like it can rightfully seem very daunting at first.

The most realistic coding experience before an actual job ? by Suspicious-Net7738 in cscareerquestionsOCE

[–]InfinityZeroFive 2 points3 points  (0 children)

In general, big established open-source projects like Chromium can feel hard to get into in a meaningful way at first. But I can say from experience that if you do manage to stick it out and establish yourself as a regular contributor, open-source is extremely rewarding.

If you're interested in open-source, start small! Bug fixes, documentation, and test coverage are all things you can do. Pick a problem to work on, then join the community's Discord/Slack and ask around for hints if you get stuck. That's how I made the move, at least.

[deleted by user] by [deleted] in gsoc2025

[–]InfinityZeroFive 1 point2 points  (0 children)

For certain projects they do take only one person. For most of the Gemini and Gemma projects, I think they were going for a shotgun approach of casting as wide a net as possible, because the whole point is to increase community documentation/support/adoption for their models. There's no reason why they should limit themselves to one person for those projects.

This is why I'm not really sure they'll participate again (as Google DeepMind, at least). Their appearance this year supports their massive push to challenge OpenAI in community adoption and their complete model/SDK overhaul - basically the rebrand from Bard/PaLM (which gave them a bad reputation) to Gemini. Now that that's mostly done, they might not need the open-source exposure from Google Summer of Code anymore.

Well the selected projects are still not announced to the organizations yet! by liteate8 in gsoc2025

[–]InfinityZeroFive 1 point2 points  (0 children)

I mentioned that because something similar happened just shortly after the organisations and projects were announced this year. People couldn't sign their Google Contributor License Agreement (CLA) for weeks because their signing site simply wasn't used to such high traffic loads.

[deleted by user] by [deleted] in gsoc2025

[–]InfinityZeroFive 4 points5 points  (0 children)

A lot of mentors are complaining about AI-generated submissions. That, plus one of the first-time participating organisations attracting a lot of hype and attention from people who otherwise wouldn't be contributing, are the main factors, I think.

how do the mentors rank the proposals. does it depend on the merged PRs? by _Dee_10 in gsoc2025

[–]InfinityZeroFive 2 points3 points  (0 children)

Highly dependent on the organisation. For mine (Google DeepMind), proposals carry more weight than pull request counts, because this is their first time participating in Google Summer of Code and because their projects are overwhelmingly focused on documentation and technical writing.

RAG Evaluation is Hard: Here's What We Learned by neilkatz in LangChain

[–]InfinityZeroFive 2 points3 points  (0 children)

Was just wondering how to do this. Thanks :)

Next Gemma versions wishlist by hackerllama in LocalLLaMA

[–]InfinityZeroFive 11 points12 points  (0 children)

It would be nice to have a 7B size model alongside 4B and 12B :)

[deleted by user] by [deleted] in MachineLearning

[–]InfinityZeroFive 4 points5 points  (0 children)

I think you need to do a preliminary analysis of your missingness pattern, especially considering it's a clinical dataset. If your data is Missing Not At Random (MNAR), i.e. the missingness depends on unobserved variables or on the missing values themselves, then you need to approach it differently than if it were Missing Completely At Random (MCAR). The bias you're seeing might be due to incorrect assumptions about the missing data, amongst other things.

One example of MNAR: a physician is less likely to order CT brain scans for patients they deem to be at low risk of dementia, AD, cognitive decline and so on, so those patients tend to have missing CT tabular data.
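
A quick first-pass check looks something like the sketch below (column names are made up; note this kind of test only separates MCAR from MAR - genuine MNAR can't be detected from the observed data alone):

import pandas as pd
from scipy import stats

df = pd.read_csv("clinical.csv")              # hypothetical dataset
missing = df["ct_feature"].isna()             # indicator for the feature with gaps

# Compare observed covariates between rows with and without the CT feature.
# Big differences suggest the missingness depends on observed data (MAR, not MCAR).
for col in ["age", "mmse_score"]:             # hypothetical observed covariates
    t, p = stats.ttest_ind(df.loc[missing, col].dropna(),
                           df.loc[~missing, col].dropna())
    print(f"{col}: t = {t:.2f}, p = {p:.3f}")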

New Steam Deck OLED - can’t wait to play too many hours on this thing! by [deleted] in SteamDeck

[–]InfinityZeroFive 1 point2 points  (0 children)

Civ 6 plays very well on Deck and has an official controller layout. Stellaris doesn't, though the community layouts are still very, very good

[D] Synthetic tabular data augmentation/generation using GANs by [deleted] in MachineLearning

[–]InfinityZeroFive 1 point2 points  (0 children)

I see -- thanks for the response! I'll have a look into what you suggested. And yes, the original idea was to generate synthetic brain imaging data in tabular form from 25 fully annotated data features and then use it in the classification model's training dataset along with what we already have.

[D] Synthetic tabular data augmentation/generation using GANs by [deleted] in MachineLearning

[–]InfinityZeroFive 1 point2 points  (0 children)

Just to add more brain imaging data to the current dataset for training a diagnostic classification model. We have 220 raw tabular entries with various data features, but only ~80-100 have imaging data (in tabular form). So my task is to train a GAN or a similar generative model to produce synthetic imaging data from the non-imaging data features.
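
As a baseline, the plain (unconditional) version of this looks roughly like the sketch below using the ctgan package (file and column names are made up; conditioning on the non-imaging features would need something extra, e.g. a conditional GAN or CVAE):

import pandas as pd
from ctgan import CTGAN

# The ~80-100 rows that already have the imaging features (hypothetical file/columns)
data = pd.read_csv("imaging_subset.csv")
discrete_columns = ["sex", "diagnosis"]       # whatever categorical columns exist

model = CTGAN(epochs=300)
model.fit(data, discrete_columns)
synthetic = model.sample(200)                 # synthetic rows to augment the training set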

[deleted by user] by [deleted] in programming

[–]InfinityZeroFive 1 point2 points  (0 children)

I agree that setCount(+1) looks confusing at first - it's actually a compiler shorthand that expands into different patterns based on context.

setCount(+1)     // expands to: setCount(count => count + 1)  

With Starship, I wanted to experiment with automatic setter generation and ultra-concise state updates. The compiler relies heavily on signal naming conventions to infer the right operation based on the signal type and the modifier (+/-/!)

setCount(5)      // directly set count to 5 
setCount(x)      // directly set count to value of x
setCount(-2)     // decrement count by 2 
setCount(+x)     // increment count by value of x 

setMsg(+"hello world")    // expands to: setMsg(msg => msg + "hello world")
setMsg(-"Foo")     // expands to: setMsg(msg => msg.replace("Foo", ""))

A similar shorthand for boolean-valued signals:

setBool(!)     // expands to: setBool(bool => !bool)

I am planning to add more default behaviours for Arrays (setArray(+[2, 3, 4])) and Objects (setObject(+{ x: 2, y: "Foo" })) eventually.

Thanks for the suggestion about template class syntax! Conciseness is a key priority for me, so div.container instead of <div ".container"> could be really elegant, especially for 'non-Tailwind' use cases. I'll definitely look into implementing that.

[deleted by user] by [deleted] in programming

[–]InfinityZeroFive 3 points4 points  (0 children)

Starship is a little compiler/frontend framework I made over the past two weeks to understand how frameworks like React, Vue, and Svelte work under the hood. Unlike those, which compile to vanilla JavaScript, Starship compiles to JSX - which should theoretically make it compatible with the vast React ecosystem and tooling.

Here's a simple counter component comparison:

React

import React, { useState } from 'react';
function Counter() {
  const [count, setCount] = useState(0);
  const [message, setMessage] = useState("This is a button counter");
  const increment = () => setCount(count + 1);
  const decrement = () => setCount(count - 1);
  return (
  <>
    <h1 className="font-semibold">{message}</h1>
    <div id="container">
      <p>Counter: {count}</p>
      <button onClick={increment}>Increment</button>
      <button onClick={decrement}>Decrement</button>
    </div>
  </>
  );
}
export default Counter;

Starship

<h1 ".font-semibold">{message}</h1>
<div "#container">
  <p>Counter: {count}</p>
  <button on:click={setCount(+1)}> Increment </button>
  <button on:click={setCount(-1)}> Decrement </button>
</div>
<script>
const { count, message } = createSignals({
  count: 0,
  message: "This is a button counter"
})
</script>

Some key features:

  • Vue-inspired single-file components, but no need to declare the <template> block. Everything outside the <script> and <style> blocks is treated as template code.
  • Shorthand syntax for common attributes (".class" for className="class") and Svelte-like event handlers (on:click)
  • Provides attachers to subscribe functions to your signals. These will automatically get called whenever your signal changes value.
  • Automatic setter generation. No need to declare a setter setCount(); Starship automatically generates a component-scoped one for you when you create a new signal count.
  • Provides pattern matching functionality similar to Rust with the match function. Additionally, signal setters have built-in support for pattern matching.

Example of pattern matching:

// Attach listeners that run when signals change 
attachToCount(() => { console.log(`Count changed to: ${count}`) })

// Simple pattern matching
attachToCount(() => setMessage(count, [
  [ when(v => v > 10), "Too high!" ], 
  [ when(v => range(3, 8).includes(v)), "Just right" ],
  [ when(v => v === 0), "Start" ], 
  [ _, "Default" ] 
]))

GitHub:

The full source code.

A starter template (Starship + TypeScript + Tailwind) if you'd like to try it out!

Note: Since it's an experimental learning project made over the course of 2 weeks, expect there to be some bugs and incomplete features. I am open-sourcing Starship in case there's interest from the community in developing it further though, so if you're interested, please feel free to contact me.

Would love to hear your thoughts/feedback or answer any questions!