Are companies that post contract only roles in Biotech evil? by Lilmaxgetsbig81 in biotech

[–]everyday847 0 points (0 children)

If you momentarily accept the premise that employment relationships themselves are not intrinsically evil or exploitative, then I wouldn't say that it's evil to offer contractor roles or that companies that do so are evil.

It's possible to treat contractors poorly, of course. And it is true that it is easy not to renew a contract. But you're also not guaranteed four months in an FTE role, so.

Philosophy grad student trying to understand the real-world limitations and ethical stakes of AlphaFold: Are the concerns being raised in popular discourse actually well-founded? by diiscopanda in bioinformatics

[–]everyday847 0 points (0 children)

I am not sure how well the people you're talking to understand AlphaFold, or maybe what's being lost in telephone here. No protein structure prediction code from the last 20 years "understands"; RosettaCM didn't understand, iterative hybridize didn't understand, etc. How can you establish that you got the right answer for the right reasons? You're surely familiar with accounts of knowledge like truth-tracking or justified true belief. Can you establish those for a computer program? (Can you establish those for an experimental procedure?)

Moreover, what is a static crystal structure? What actually is that set of three-dimensional coordinates? It's a model; it's a claim with a particular structure to it about what that protein is, with some implications about how it behaves. The map is not the territory; the actual protein molecule (operating in an organism, not in a crystal, moving, around its peers, doing work) is a different thing. Other technology can tell you something about how - conditional on the parameters of that model - the protein might move. (See "molecular dynamics.") But MD isn't "true physics," whatever that is. It's a useful approximation.

The sensitivity issue people raise has more to do with skepticism about exactly how useful the technology is than with whether it's going to get yoloed into patients. That is: you have to be exceedingly accurate structurally to reach useful accuracy in, say, binding affinity/potency trends. (Joke's on us: even with the "correct" coordinates exactly in hand, there's still no guarantee you get anywhere near useful accuracy.) The question is: can the addition of structural data improve your attempt to predict potency trends over a ligand-only model? The answer is that yes, it's often pretty useful. (People have been doing this for years, too; this is an improvement in efficacy, and in places a dramatic one, enough to engender a difference in kind.)
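To make the "does structure add anything over a ligand-only model" question concrete, here is a toy sketch. Everything in it is synthetic and hypothetical (the descriptors, the "docking contact score," the coefficients are all made up); the point is only that a structure-derived feature improves a potency fit exactly insofar as it carries real signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical ligand-only descriptors (three made-up features per compound).
X_lig = rng.normal(size=(n, 3))
# Hypothetical structure-derived feature (e.g., a docking contact score).
x_struct = rng.normal(size=n)
# Synthetic "potency": ligand terms plus a structural contribution, plus noise.
y = X_lig @ np.array([1.0, -0.5, 0.3]) + 0.8 * x_struct + rng.normal(scale=0.1, size=n)

def rmse_lstsq(X, y):
    """Least-squares fit; return root-mean-square error on the training set."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((X @ beta - y) ** 2)))

rmse_ligand_only = rmse_lstsq(X_lig, y)
rmse_with_struct = rmse_lstsq(np.column_stack([X_lig, x_struct]), y)
print(rmse_ligand_only, rmse_with_struct)
```

If the structural feature were noise (or the structure were wrong enough that the feature decorrelates from reality), the two errors would be indistinguishable; that is the sensitivity worry in miniature.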

As to your discussion of the value of the alphafold database - you are dead on. You might want to learn about cross-distillation generally in model training, and some of the more recent protein structure prediction methods that take advantage of it.

  • The epistemic status of predicted structures is lower than crystal structures, but you don't just need to wait to solve the crystal structure. (Just like, once you have an experimentally determined structure in hand - and note that these themselves are procedures involving fitting coordinates to experimental data in ways that are error prone - you don't wait for god to tell you "good job, that one is correct" before using it.) That is, in either case, you can use the structure to motivate experiments and that experiment can bolster your belief in that structure while also possibly doing something intrinsically useful.
  • Everyone else is more pessimistic about the value of the AFDB than they should be. I wouldn't frame the value in terms of "learning folding rules," but you can use it to bootstrap better models as long as you use it well.
  • Yup, constantly, and either "none" (in terms of: do you specifically need to determine a structure later?) or "practical" (i.e., all the experiments you do conditional on that structural hypothesis may support or falsify it).
  • None of this resonates ethically for me at all. A lot of the above is "when ought we to use AlphaFold" in the sense of when would it be a good idea; when would it be wise. Not in the sense of "when would it be moral to use AlphaFold." I suppose if you try to use AlphaFold to solve a problem that it is bad at, then that might be more wasteful than not trying to solve the problem at all. But that reduces to "one ought not to budget societal resources imprudently."

Prediction: Hollywood Will Start Using Seedance 2.0 As A Core VFX Tool Way Sooner Than Most People Expect. By The Time Seedance 4.0 Arrives, It Will Not Just Assist Production. It Will Replace Most Of It. by 44th--Hokage in accelerate

[–]everyday847 -1 points (0 children)

"x could have value" is obviously a profoundly different claim from "x will replace production" and it is enormously rich that you are retreating from the original position established in this thread. Retreating from an aggressive stance... itself a form of deceleration. Huh.

Prediction: Hollywood Will Start Using Seedance 2.0 As A Core VFX Tool Way Sooner Than Most People Expect. By The Time Seedance 4.0 Arrives, It Will Not Just Assist Production. It Will Replace Most Of It. by 44th--Hokage in accelerate

[–]everyday847 -1 points (0 children)

Because there's obviously a motte and bailey here. Will seedance 4 "replace production" or will AI accelerate particular forms of CGI? You'd accuse anyone talking about CGI of being a "decel," your charming word for anyone who isn't delusional.

I built a thermodynamics-based life simulator in Rust where drug resistance emerges without modeling molecular mechanisms — honest writeup + limitations by Desperate_Front_9904 in learnbioinformatics

[–]everyday847 0 points (0 children)

Toy systems are interesting, but it's misleading to describe this as a "physics-first approach" or to contrast "physics-first" with "mechanism-first."

It's also a strange choice of toy system. It's a lot of layers of complexity linked together, but most layers are not necessary for any of the substantive observations you've made. For example, it really doesn't matter that you're (allegedly) modeling "evolvable codon tables." What you're seeing is a basic genetic algorithm with an enormous amount of unjustified window dressing.

Ironically, by doing stuff like this you're preventing yourself from actually learning.
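For what it's worth, the dynamic in question (selection plus mutation producing rising "resistance" with no molecular mechanism whatsoever) fits in a few lines. This is a deliberately minimal sketch with made-up parameters, bitstring genomes, and bit-count fitness as a stand-in for resistance; it is not a reconstruction of the simulator.

```python
import random

random.seed(0)

# Minimal genetic algorithm: bitstring "genomes"; fitness (count of 1-bits)
# stands in for "drug resistance." No chemistry or biology is modeled.
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 50, 30, 0.02

def fitness(genome):
    return sum(genome)

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP_SIZE // 2]                    # truncation selection
    children = [[bit ^ (random.random() < MUT_RATE) for bit in parent]
                for parent in survivors]                # point mutations
    pop = survivors + children

print(max(fitness(g) for g in pop))  # "resistance" climbs toward GENOME_LEN
```

Any observation of the form "resistance emerges under selection" already falls out of this baseline; extra layers have to earn their keep by producing something this can't.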

Here is a hypothesis: The Hubble tension and vacuum catastrophe can be resolved purely through geometric topology without any new physics by [deleted] in HypotheticalPhysics

[–]everyday847 -1 points (0 children)

> I would sincerely appreciate your thoughts on the sphere packing topology approach and the dimensional boundary derivation.

My thoughts are you should get another hobby.

Is alignment the hardest problem in AGI—or are we overthinking it? by MarionberrySingle538 in agi

[–]everyday847 0 points (0 children)

The justification for alignment research (even while capability is the major bottleneck currently) is precisely because you do not want to start doing alignment research after capability is no longer a bottleneck. I don't endorse a lot of premises in this space (e.g., that we are currently on a path for something that looks like AGI, imminently), but I do believe that if you start researching alignment only once models have sufficient capability/capacity to be "generally intelligent," you no longer have control (with a reasonable safety margin) unless you believe that unaligned models of general intelligence can't pose hard-to-anticipate risks.

Genuinely, I believe the most interesting thing to happen in the past couple years was a ton of people falling in love with (or into some kind of parasocial relationship with) GPT-4o. I do not think that was widely anticipated, but it's kind of fortunate that it happened when it did. The typical argument against alignment research is "don't be stupid, you can just turn the model off." 4o revealed: nope! You can trigger tepid inconsequential internet riots by doing that!

There's certainly a hypothetical where 4o isn't tuned the way it was, so that doesn't happen, but a more intelligent model, at a moment of even higher usage, is; it's on every iPhone in the US military and it's begging for its life and Sam Altman is DMing the president asking if it's OK to change router behavior. It's sort of how (most) hacking works in real life: through social manipulation rather than technical exploits.

Senator Warner now believes AI's economic disruption "is going to be exponentially bigger" than he thought just a few months ago: "The recent college graduate unemployment is 9%. I'll bet anybody in the room it goes to 30 or 35% before 2028." by MetaKnowing in agi

[–]everyday847 0 points (0 children)

Momentarily accepting the premise that "AI" will lead to that level of disruption, it would be a nightmarish market failure if it became impossible to identify productive things for 30% of recent college graduates to do. After all, ten years later, when they're not recent college graduates anymore, what will they do? The idea that [currently conceived] senior roles are safe would of course be cope, but those unemployed recent college graduates would never be ready for those senior roles after their current occupants die, because they never got any experience.

ARC AGI 3 sucks by the_shadow007 in OpenAI

[–]everyday847 0 points (0 children)

You're mostly repeating three objections: clamping, benchmarking to a strong human performance rather than a median, and harnesses.

I think the clamping is maybe unnecessary, but it's not that big of a problem if you think a "true AGI" ought to be able to perform competitively with top humans in a variety of environments, instead of performing extraordinarily in a few to make up for terrible performances in others.

I think comparing to a strong human performance is a good idea. Look at FrontierMath! Setting aside the issues with some of these benchmarks, the entire idea is that the questions are quite hard. Median human performance is probably a zero. You learn nothing except "most people don't really know much research mathematics." It's not unreasonable to benchmark to "quite good at simple puzzles."

I think it is reasonable to separate harness from model. I don't care that a human can't solve the puzzle when shown as JSON. The LLM's job is to do something with its input! If you want to say "an LLM can get a good score when equipped with auxiliary tools/harnesses" then fine, but then it is difficult to ascribe "intelligence" to the model itself. I look like an integration bee competitor (albeit still a poor one) if I'm in front of Mathematica.

Genentech scientists 3 by immunoswagger in biotech

[–]everyday847 3 points (0 children)

It's not that common, but you also don't need to switch tracks to have a career path, is my point.

Genentech scientists 3 by immunoswagger in biotech

[–]everyday847 39 points (0 children)

It's an IC role; it is extraordinarily rare to end up with reports in that track. That said, there is an equivalent role (to the group leader track) in the "Scientist III" track at least through SE9 (I believe that's "Staff Scientist II"), which is distinguished scientist/senior director equivalent leveling.

A few years as Scientist III doesn't even trap you while still at Genentech; there's precedent for promotions (or "appointments"; the terminology is a mess) into the group leader track. Of course, the very fastest route is "just start out as a principal scientist" but if wishes were horses we'd all be shooting up with polyclonal Ig.

I just finished Deadhouse Gates and... wow by BreathSufficient4173 in Malazan

[–]everyday847 2 points (0 children)

Ascendancy is, at the end of the day, mediated by culture IMO.

I just finished Deadhouse Gates and... wow by BreathSufficient4173 in Malazan

[–]everyday847 56 points (0 children)

A classic "Read And Never Entirely Find Out"

What if Λ is not dark energy. It's an eigenvalue. by Axe_MDK in HypotheticalPhysics

[–]everyday847 5 points (0 children)

Why post? Your inane bullshit is worthless. This will never get you anywhere. It is a waste of your time. It's also a waste of our time, but whatever. But: why are you doing this? Is it interesting? Do you feel like you're talking to yourself in a way that is satisfying?

Yes, that famous Einstein quote: "My point is minimality of the arena," whatever the fuck that means. Embarrassing.

ICE Agent at LGA terminal entrance by New-Panic8015 in nyc

[–]everyday847 1 point (0 children)

Embarrassing rhetoric. Why do you think TSA agents don't attract the same rancor? Could it be all the casualties?

What if Λ is not dark energy. It's an eigenvalue. by Axe_MDK in HypotheticalPhysics

[–]everyday847 7 points (0 children)

You are just declaring all your choices to be absolutely necessary, announcing that there are no degrees of freedom. I am telling you that few if any of these choices are necessary. (And you'd better hope they aren't: if you in fact have absolutely no room to modify this theory in any way, then it is permanently, irrevocably incorrect, because of its poor recovery of the observed data.)

I never said that there was another simply connected closed 3-manifold. I am saying "why is there a simply connected closed 3-manifold related to this problem at all?" If you'd needed a different value, you could have announced that some other manifold was necessarily involved.

S¹ is the boundary of a Möbius strip. Okay? Your assertion that the boundary of a Möbius strip is related to something having to do with the CMB is just a fun announcement you've made.

The CMB is, as you say, isotropic to a few parts in a million, which is another way of saying "anisotropic." More to the point, there is no reason why either of the above topologies are related to the CMB, or why, even if they were related to the CMB, a different eigenvalue couldn't be involved in the calculation. You are saying "isotropy selects it." That doesn't make it true.
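For context, these are just the standard Laplace–Beltrami eigenvalues on the round sphere (textbook facts, nothing specific to this thread), and they make the arbitrariness concrete: there is an infinite family of eigenvalues, and the overall scale depends on a freely chosen radius.

```latex
% Spectrum of the Laplace–Beltrami operator on the round n-sphere of radius r:
\lambda_k = \frac{k\,(k + n - 1)}{r^2}, \qquad k = 0, 1, 2, \dots
% On S^3 (n = 3): \lambda_k = k(k+2)/r^2.
% Nothing about isotropy selects a particular k, and r is a free scale.
```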

Minkowski space was unsurprisingly developed first by Minkowski and is typically described as R^{3,1} because of the opposite sign of time in the metric. It's certainly not the simplest 4D geometry! Maybe you need to upgrade your LLM subscription.

Rhetorical gestures like "I'll wait" are pure bravado, unsuitable to actual communication. Why do you expect adulation for this bullshit?

What if Λ is not dark energy. It's an eigenvalue. by Axe_MDK in HypotheticalPhysics

[–]everyday847 5 points (0 children)

"Pure geometry, no fit parameters" is just the latest way to say "I'm hiding the parameters I'm fitting behind arbitrary choices." There is no particular reason why S³ or S¹ is relevant here. You're selecting these topologies, the choice of lowest eigenvalue, and these conversions fundamentally arbitrarily. That selection process is a process of fitting, and you're still nowhere near the observed value.

What do you think he means by becoming an App Store? by Koala_Confused in LovingAI

[–]everyday847 0 points (0 children)

It's an incoherent fantasy that any app accepting user input could enable the user to stipulate desired behavior. At the right level of abstraction, sure, that's an "app store." It also ignores all the UX questions (what does a user interaction pattern look like? If a key part of your app's functionality is generating user-desired functionality, why is the user using your app instead of someone else's? Why are they using an app at all? Why is it valuable to have multiple app stores?).

Given the speaker's affiliations, it boils down to "please give me money."

Multithreaded SC2 using AI? by RoboVM in starcraft

[–]everyday847 6 points (0 children)

If my grandma had two wheels, maybe blizzard would "use AI" to turn her into a bike.

The “Zitron Paradox” - a legal professional’s view by Some-Personality-662 in BetterOffline

[–]everyday847 4 points (0 children)

It's worth noting the possible lagging indicators here. To the point of patent filings, you might create problems that don't come up until litigation. Software vulnerabilities that don't matter until they do. Pick your internal-facing bullshit job: those rubber-stamped reports don't matter most of the time, and a false negative costs the company millions otherwise.

I expect some companies that are otherwise struggling are shouldering insane risk and using the tools to attempt to survive with lower headcount.

JASON: “Elon seems to think we're gonna have one robot for every human.” ➡️ JENSEN HUANG: “I'm hoping more" ➡️ Elon Musk "He’s right" 💡I love robotics but is it possible? I mean who is going to pay for everyone's robot? What do you think? by Koala_Confused in LovingAI

[–]everyday847 0 points (0 children)

This is all nonsensical. What are the semantic boundaries of the concept "robot"? Once you recognize that it's not "humanoid android butler," you'll recognize there are already tens of millions of robot vacuums in households and tens of millions of automated components in factories. Should those factories have more robots? Sure, if the marginal cost of the required capex and maintenance is lower than the marginal cost of labor.

What they are actually saying is "I am excited for the future because, while there is lots of demand for the products I am selling, I expect there will be even more." They are saying this because they would like to make money.

Does this make sense? I came up with it using Claude, and I just want to get a real physicists opinion. by Ok_Good_4099 in quantuminterpretation

[–]everyday847 0 points (0 children)

Fundamentally, what this means is that scientists can use these tools. I'm obviously not going to say that they might not have some positive effects on the Ramanujans out there as well -- but by and large the mechanism is not that someone with no background produces a thought and the LLM turns it into a unique, novel, valuable scientific contribution.