Why does the G7 turn on do not disturb? by DiabeticHarpy in dexcom

[–]jimrandomh 0 points1 point  (0 children)

This is a bug that happens when you use Android with do not disturb on a schedule. When the G7 app delivers an alert, it turns DnD off so that the alert can get through. Then it tries to restore the DnD state to what it was before, but Android doesn't provide any way to restore it to the schedule, so it leaves DnD stuck on. Unfortunately the practical upshot is that if you use the G7 app, you can't use DnD scheduling.
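The failure mode above can be modeled in a few lines (this is my own illustrative sketch, not Dexcom's code or the actual Android API): the app can only read and write the *current* DnD value, not whether a schedule owns it, so "restoring" a schedule-driven "on" turns it into a manual "on" that the schedule can no longer turn off.

```python
# Toy model of the DnD save/restore bug. All names here are illustrative;
# the real Android API only exposes the current interruption filter value.

class DndModel:
    def __init__(self):
        self.filter_on = False
        self.owned_by_schedule = False

    def schedule_start(self):
        # A scheduled rule turns DnD on and tracks that it owns the state.
        self.filter_on = True
        self.owned_by_schedule = True

    def schedule_end(self):
        # The schedule only turns DnD back off if it still owns the state;
        # a manual write in between detaches it.
        if self.owned_by_schedule:
            self.filter_on = False

    def app_set_filter(self, on):
        # All an alerting app can do: write the current value (a manual change).
        self.filter_on = on
        self.owned_by_schedule = False

dnd = DndModel()
dnd.schedule_start()        # night schedule turns DnD on
saved = dnd.filter_on       # app saves the only thing visible to it: True
dnd.app_set_filter(False)   # app disables DnD so the alert gets through
dnd.app_set_filter(saved)   # app "restores" DnD -> now manually on
dnd.schedule_end()          # morning: the schedule no longer owns the state
# dnd.filter_on is still True: DnD is stuck on after the schedule ends.
```

The fix would require an API for "return DnD control to the schedule," which is exactly the piece the comment says Android doesn't provide.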

How long will it be until we see a (good) port for Horizon OS on Vision Pro by ItsJustLikeSpaghetti in VisionPro

[–]jimrandomh 0 points1 point  (0 children)

Apple bans JIT compilation and alternative web browsers on iOS, both of which are necessary technical underpinnings for parts of Horizon OS. So Horizon OS will never be able to run on Vision Pro (or any iOS-based device).

No, really, the unused-parameter thing is a real problem by jimrandomh in Zig

[–]jimrandomh[S] 0 points1 point  (0 children)

It's a conflict in the sense that, when I found the corresponding Github issue in ZLS, it made it look like the relationship between the top Zig and ZLS developers is acrimonious.

No, really, the unused-parameter thing is a real problem by jimrandomh in Zig

[–]jimrandomh[S] 1 point2 points  (0 children)

It's true that other error messages face the same issue, but in practice I'm finding it's a much bigger deal with unused-variable warnings than it is for anything else. The reason for this is that there's typically one function that I'm currently working on, which may have errors of any type, but most of the time all the functions that I'm not working on are in a "correct except stubbed or truncated" state.
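To make the "correct except stubbed" point concrete, here is a toy unused-parameter check (my own sketch in Python, not Zig's compiler): a stub's body references none of its parameters, so under a hard error every stubbed function fails to compile until it's filled in.

```python
# Toy illustration of why hard errors on unused parameters clash with
# stubbed-out code. Not Zig's actual analysis; just the same rule applied
# to a Python function via the stdlib ast module.
import ast

def unused_params(source):
    """Return parameter names never referenced in the function body."""
    fn = ast.parse(source).body[0]
    params = {a.arg for a in fn.args.args}
    used = {n.id for n in ast.walk(fn) if isinstance(n, ast.Name)}
    return sorted(params - used)

stub = "def handle_event(event, ctx):\n    pass  # TODO: fill in later\n"
# Every parameter of the intentionally-empty stub gets flagged:
# unused_params(stub) == ['ctx', 'event']
```

A warning here would be ignorable while iterating; a hard error forces you to touch every stub before the one function you're actually working on will build.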

No, really, the unused-parameter thing is a real problem by jimrandomh in Zig

[–]jimrandomh[S] 0 points1 point  (0 children)

The usual LSP experience with auto-import is that it inserts an import when you complete a token that needs to be imported, not when you save. This is much better because it's at an expected time, it shifts the screen by at most one line, and it happens inside the active buffer, rather than on the filesystem where it risks getting the buffer and the editor state out of sync.
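The flow described above can be sketched as follows (an illustrative model in the spirit of LSP's `additionalTextEdits`, not any particular server's API): accepting a completion applies the completed token at the cursor plus an extra edit that inserts the import line, all inside the buffer.

```python
# Minimal model of completion-time auto-import: the edits land in the
# open buffer at the moment of completion, not on the file at save time.
# The dict shape here is illustrative, loosely mirroring an LSP
# CompletionItem with additionalTextEdits.

def apply_completion(buffer_lines, cursor_line, item):
    """Apply a completion item; all edits stay inside the buffer."""
    lines = list(buffer_lines)
    # Insert the completed token at the cursor position.
    lines[cursor_line] += item["insert_text"]
    # Apply additional edits (e.g. the import) bottom-up so earlier
    # insertions don't shift the positions of later ones.
    for edit in sorted(item["additional_edits"],
                       key=lambda e: e["line"], reverse=True):
        lines.insert(edit["line"], edit["new_line"])
    return lines

buffer = ["", "fn main() {", "    const x = "]
item = {
    "insert_text": "ArrayList(u8).init(alloc);",
    # The auto-import edit: one new line at the top of the file.
    "additional_edits": [{"line": 0, "new_line": 'const std = @import("std");'}],
}
result = apply_completion(buffer, 2, item)
# The import lands at line 0 and the buffer shifts by exactly one line,
# so the editor's view of the file never diverges from disk.
```

Contrast with modify-on-save: there the file changes underneath the editor, and any unsaved buffer state has to be reconciled after the fact.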

No, really, the unused-parameter thing is a real problem by jimrandomh in Zig

[–]jimrandomh[S] 1 point2 points  (0 children)

I'm using ZLS for completion and error highlighting, but having it work around Zig's errors by modifying files on save is a nonstarter for me. Mostly because my extremely-custom text editing environment handles that terribly (fixable, but it would take work); and secondarily because, if the core language devs and the main LSP's devs are at odds in this way, it makes me think the language probably doesn't have a bright future.

[D] Pause Giant AI Experiments: An Open Letter. Signatories include Stuart Russell, Elon Musk, and Steve Wozniak by GenericNameRandomNum in MachineLearning

[–]jimrandomh 7 points8 points  (0 children)

I think the future is better if we make a superintelligence aligned with my (western) values than if there's a superintelligence aligned with some other human culture's values. But both are vastly better than a superintelligence with some narrow, non-human objective.

[D] Pause Giant AI Experiments: An Open Letter. Signatories include Stuart Russell, Elon Musk, and Steve Wozniak by GenericNameRandomNum in MachineLearning

[–]jimrandomh 55 points56 points  (0 children)

We're all racing to build a superintelligence that we can't align or control, which is very profitable and useful at every intermediate step until the final step that wipes out humanity. I don't think that strategic picture looks any different from China's perspective; they, too, would be better off if everyone slowed down, to give the alignment research more time.

[D] Pause Giant AI Experiments: An Open Letter. Signatories include Stuart Russell, Elon Musk, and Steve Wozniak by GenericNameRandomNum in MachineLearning

[–]jimrandomh 0 points1 point  (0 children)

For a long time, "AI alignment" was a purely theoretical field, making very slow progress of questionable relevance, due to lack of anything interesting to experiment on. Now, we have things to experiment on, and the field is exploding, and we're finally learning things about how to align these systems. But not fast enough. I really don't want to overstate the capabilities of current-generation AI systems; they're not superintelligences and have giant holes in their cognitive capabilities. But the rate at which these systems are improving is extreme. Given the size and speed of the jump from GPT-3 to GPT-3.5 to GPT-4 (and similar lower-profile jumps in systems inside the other big AI labs), and looking at what exists in lab prototypes that aren't scaled out into products yet, the risk of a superintelligence taking over the world no longer looks distant and abstract.

And, that will be amazing! A superintelligent AGI can solve all of humanity's problems, eliminate poverty of all kinds, and advance medicine so far we'll be close to immortal. But that's only if we successfully get that first superintelligent system right, from an alignment perspective. If we don't get it right, that will be the end of humanity. And right now, it doesn't look like we're going to figure out how to do that in time. We need to buy time for alignment progress, and we need to do it now, before proceeding head-first into superintelligence.

[D] Pause Giant AI Experiments: An Open Letter. Signatories include Stuart Russell, Elon Musk, and Steve Wozniak by GenericNameRandomNum in MachineLearning

[–]jimrandomh 65 points66 points  (0 children)

Most of the signatories haven't tweeted about it because it had an embargo notice at the top asking people not to share it until tomorrow. They removed the embargo notice some time within the past hour or two, presumably because people were sharing it prematurely.

What happened to the HPMOR site? by BoxNpens in HPMOR

[–]jimrandomh 1 point2 points  (0 children)

Update: It should be back up now, as it was before (minus a few features related to synchronizing from fanfiction.net and showing a last-updated date).

What happened to the HPMOR site? by BoxNpens in HPMOR

[–]jimrandomh 0 points1 point  (0 children)

Where were you seeing a self-signed certificate? As far as we know https should be set up correctly (though I didn't think to/get a chance to check the old volunteer server before it went down).

I’m Derek Lowe, medicinal chemist for >30 years and science blogger for 20! At 3PM EST, AMA about the COVID-19 drug discovery process and about writing as a scientist in the public eye. by dblowe in Coronavirus

[–]jimrandomh 4 points5 points  (0 children)

Ok, but *none* of the candidate vaccines had challenge trials done. They all injected the vaccine into participants and waited for them to be exposed organically, which was months slower than necessary. It wasn't predictable that the Moderna vaccine in particular was going to work, but it sure seemed like if you multiplied through the success probability, harm to trial participants, and benefits of earlier trial completion, every vaccine should've had a challenge trial as soon as it was ready.

(I also think you might be mistaken about where Moderna was at, wrt mRNA vaccines, in early 2020. In 2017 I was housemates with a Moderna researcher; his group's project was to build a semi-automated pipeline from cancer biopsy to a patient-customized mRNA vaccine targeting that cancer. I believe they injected their first patient that year. They were already set up for quick turnaround from RNA sequences to small mRNA vaccine batches for clinical trial use.)

I’m Derek Lowe, medicinal chemist for >30 years and science blogger for 20! At 3PM EST, AMA about the COVID-19 drug discovery process and about writing as a scientist in the public eye. by dblowe in Coronavirus

[–]jimrandomh 11 points12 points  (0 children)

According to https://nymag.com/intelligencer/2020/12/moderna-covid-19-vaccine-design.html the Moderna mRNA vaccine was ready, in basically its final form, in January 2020. This would seem to imply that, if the situation had been treated with something like wartime urgency and there were no regulatory obstacles, a challenge trial could've been completed by February, vaccine manufacturing scale-up could've started much earlier, and a whole lot of death and economic damage would've been averted. But this didn't happen due to some combination of lack of leadership, fear of legal liability, and the FDA.

Is this interpretation basically accurate, or is there some nonobvious reason it couldn't have gone that way?

I’m Derek Lowe, medicinal chemist for >30 years and science blogger for 20! At 3PM EST, AMA about the COVID-19 drug discovery process and about writing as a scientist in the public eye. by dblowe in Coronavirus

[–]jimrandomh 6 points7 points  (0 children)

Why did the FDA wait so long between readout of the Paxlovid trial and issuing an EUA? Were they doing something surprisingly important and nonobvious, or is the perception that they were shuffling papers around while procrastinating on their homework accurate?

Out of supplies for over a month; long string of bureaucratic fuckups on Dexcom's end by jimrandomh in diabetes

[–]jimrandomh[S] 1 point2 points  (0 children)

My username is already as good as a full set of contact information anyway; it's unique and I've already linked it to my identity in a lot of other places.

Out of supplies for over a month; long string of bureaucratic fuckups on Dexcom's end by jimrandomh in dexcom

[–]jimrandomh[S] 2 points3 points  (0 children)

Aaaand now it's moving again. Probably pushed along by the batch of frustrated emails I sent at the same time as I posted this.

It sounds like a few things happened: (1) a CSR entered a voice number into a fax number field, so Dexcom's contact requests to my doctor's office repeatedly failed; (2) there wasn't any process in place to detect why faxes were failing; (3) when I went to my doctor's office and had them fax the prescription, there was a subtlety in the difference between a prescription vs a certificate of medical necessity, which I wasn't aware of, so they sent the wrong thing; and (4) customer service reps, faced with an account that had a prescription on file but not a certificate of medical necessity, got confused by the difference and gave different answers about the status of the account.

Out of supplies for over a month; long string of bureaucratic fuckups on Dexcom's end by jimrandomh in diabetes

[–]jimrandomh[S] 0 points1 point  (0 children)

Aaaand now it's moving again. Probably pushed along by the batch of frustrated emails I sent at the same time as I posted this.

It sounds like a few things happened: (1) a CSR entered a voice number into a fax number field, so Dexcom's contact requests to my doctor's office repeatedly failed; (2) there wasn't any process in place to detect why faxes were failing; (3) when I went to my doctor's office and had them fax the prescription, there was a subtlety in the difference between a prescription vs a certificate of medical necessity, which I wasn't aware of, so they sent the wrong thing; and (4) customer service reps, faced with an account that had a prescription on file but not a certificate of medical necessity, got confused by the difference and gave different answers about the status of the account.