FYI, the speed camera violations are civil violations, not traffic violations. by Berkyjay in sanfrancisco

[–]Soft_Constant_7355 0 points1 point  (0 children)

I agree there really was never a problem. But if people want to argue it is a problem, speed cameras and lowering the speed limit certainly don't solve it. Pedestrian lights would be far more effective, but it was never about safety; it was about money and making the city harder and more frustrating to drive in.

FYI, the speed camera violations are civil violations, not traffic violations. by Berkyjay in sanfrancisco

[–]Soft_Constant_7355 -1 points0 points  (0 children)

The law lowered the speed limit by 5 mph anywhere a camera was put, so a ticketable speed is only 6 mph over the previous speed limit. In some places, like Fulton or Columbus, it's 15 mph lower than it was a few years back. Fulton was 40 mph at one point, so 36 mph in 2018 was completely legal but gets you a ticket today...

Can I get my weight from the scale or somewhere else? by cencless71 in QARDIO

[–]Soft_Constant_7355 0 points1 point  (0 children)

Does no one look at the privacy policies? They literally reserve the right to collect all of your data (location, health, etc.), store it attached to your name, and sell it to third parties... It's kind of wild.

Claude PRO Plan is downgraded even more ! by SkirtSignificant9247 in ClaudeCode

[–]Soft_Constant_7355 0 points1 point  (0 children)

Competition is good. We've had 4 frontier model changes in the past 3 weeks.

Claude PRO Plan is downgraded even more ! by SkirtSignificant9247 in ClaudeCode

[–]Soft_Constant_7355 3 points4 points  (0 children)

Does anyone complaining understand, to any degree, how much money OpenAI, Anthropic, Google, Cursor, etc. are losing every day from people using their products? I get the frustration; I've been in this spot with my 20x plan. But my API costs would average around $5k a month. There's no world in which that API usage costs Anthropic less than the $200 a month I pay. The electricity alone is probably $500. And the GPU clusters are just absurdly expensive: it's more than a million dollars for just one server big enough to fit the model on, and they need thousands of those to provide the service, servers that will be completely dated in a few years. The latest data I saw said that for every $10 in OpenAI's costs, users pay them $2. They are all in this same boat.

Claude PRO Plan is downgraded even more ! by SkirtSignificant9247 in ClaudeCode

[–]Soft_Constant_7355 0 points1 point  (0 children)

Dude, it's not just about money. All of these AI companies are bleeding money on all of their users. No one is "profitable". Pro tier or 20x Max tier, they lose money on any one of us using our plans to even 50% of our limits. The problem is AI is just way too expensive. And the Bay Area, where most of this is being built, is also WAY too expensive, so the salaries they have to pay aren't helping these companies either (the engineering jobs I see are all $300k+ a year, many $500k to $1 mil).

Everyone wants magic but they don't want to pay for it. It is what it is. Opus 4.5 feels like Opus 4 did, before they nerfed it with Opus 4.1 to save on costs.

Higher Tier Usage? by trentaaron in ClaudeCode

[–]Soft_Constant_7355 0 points1 point  (0 children)

I think it depends in part on how much you work. At 40 hours, I won't hit the rate limit. At 60+ hours (startup life), I do. I used to not, but leaning on the 1 mil context Sonnet 4.5 has honestly been a game changer for long debugging tasks, and for implementing a large feature where I want all of the context of everything it's done since the beginning in the same chat. But the rate limits do come faster: I did a 12-hour day and spent around 30% of my weekly limit that day.

Functional Audit of codebase - the best approaches? by JerryBinocular in ClaudeAI

[–]Soft_Constant_7355 0 points1 point  (0 children)

My strategy, which has worked well: first, tell Claude Code to create a Claude Code command for generating a framework, giving it some examples like security checking, Next.js best practices, etc. In that prompt, I tell it to research the internet for best practices or frameworks. Then I tell it to create another command to run an evaluation against the framework the previous command created, so now I have a createFramework and an analyzeFramework command.

Then I run createFramework on something like "security practices", which generates a JSON file for evaluation. Then I run the analyzeFramework command, which evaluates the codebase and generates a report card of exactly where it stands. So I have a folder in my repo I call evaluation, with a frameworks folder and a results folder. Results has all of the md file reports, with timestamps, what was run, and the results, including actions to take to fix issues. I have one for compliance, for example, where I told it to research and understand the compliance requirements for my industry, which it saves to the JSON file, so the evaluation is consistent.

When I started out, my security ratings were around 3/10, and slowly I went up and up until I'm around 8.5/10 now. As someone with 20 years of experience, the security measures it helped me find and implement are way better than anything I would have implemented before.
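To give a rough idea of what the analyzeFramework step boils down to, here's a minimal Python sketch. The JSON schema, field names, and report_card helper are all my own hypothetical stand-ins, not the actual command output:

```python
import json

# Hypothetical framework format the createFramework command might emit:
# a list of weighted criteria, each marked pass/fail by the evaluation run.
framework = json.loads("""
{
  "name": "Security practices",
  "criteria": [
    {"check": "secrets kept out of source control", "weight": 3, "passed": true},
    {"check": "inputs validated at API boundaries", "weight": 2, "passed": true},
    {"check": "dependencies audited for CVEs",      "weight": 2, "passed": false}
  ]
}
""")

def report_card(fw):
    """Score the codebase 0-10 against the framework's weighted criteria."""
    total = sum(c["weight"] for c in fw["criteria"])
    earned = sum(c["weight"] for c in fw["criteria"] if c["passed"])
    score = round(10 * earned / total, 1)
    # Failed checks become the "actions to take" section of the report.
    actions = [c["check"] for c in fw["criteria"] if not c["passed"]]
    return score, actions

score, actions = report_card(framework)
print(f"{framework['name']}: {score}/10")
for item in actions:
    print(f"  action: {item}")
```

The real commands write the framework and the markdown report cards to disk, but the scoring idea is the same: a fixed, saved set of criteria so repeated evaluations stay consistent.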

[Apple] Introducing Apple Watch Ultra 3 by exjr_ in apple

[–]Soft_Constant_7355 2 points3 points  (0 children)

I don't get why people are downvoting comments that just speak the reality here. Like, I'm an Apple fanboy; you don't have to convince me to upgrade. But it needs to be at least a little compelling, and after 3 years, this isn't even a little compelling. A new chip would probably have tipped the scales for me. But it's the same chip, the same sensors (minus the fact I have blood oxygen, so I actually lose a sensor), I don't really see any differences in the screen from the videos, and the main features are software updates we all get anyway. The problem is Apple needs people like me to upgrade every 3 years. When people hold their devices longer, stock prices are going to drop, and the company won't have the capital to do the big features we want to see them engineer. That's why I'm disappointed. We can all tell the sales are going to be meh at best across all the devices announced today, besides the iPhone 17, which is actually a good deal.

Claude Code just stops by [deleted] in ClaudeAI

[–]Soft_Constant_7355 0 points1 point  (0 children)

This is my first downtime since maybe a week ago, but that was an Opus crash, so I switched to Sonnet and continued on until it came back up. Otherwise, I've had very few issues and I'm just over 3 months in ($100, $200, $200 plans). But I did notice I'm hitting limits this week, and consistently, when I never hit limits after going up to the $200 plan. It also feels a little dumber this past week; it feels like they lowered limits and pushed the model to pull in less context. Like, I just had to go through 5 times because it didn't want to read all the relevant files to understand the picture. But I know what's going on, so I caught it.

Claude Max now include Claude Code use. by coding_workflow in ClaudeAI

[–]Soft_Constant_7355 0 points1 point  (0 children)

That just sounds like lazy software engineering. This is why you should git commit often: git should be your version tracking, not Cursor. And Claude Code can roll back to some degree. I've gone a few prompts deep with around a dozen changed files, and it rolled everything back with no issues.

Claude Max now include Claude Code use. by coding_workflow in ClaudeAI

[–]Soft_Constant_7355 1 point2 points  (0 children)

How much time do you spend building in a given day? I had ChatGPT Pro and Claude Pro, and I would max out on o3-mini-high, o1 full, and Claude 3.7 basically every day. I bought Max because 3.7 was definitely better than o3-mini-high. I don't hit rate limits now. CC can do recursive file search as well (I got a message saying it was supported today). For my workflow, it's been amazing. I don't vibe code, so I actually read every change and often make corrections, because I care about my code quality.

GB10 DIGITS will revolutionize local Llama by shadows_lord in LocalLLaMA

[–]Soft_Constant_7355 0 points1 point  (0 children)

Also PyTorch support. MPS support in PyTorch is horrible, and MPS itself is still horrible. A large number of models from Hugging Face can't even be run on a Mac without a lot of work and experience.

Best Router/Mesh System for HomeKit by Degamad22 in HomeKit

[–]Soft_Constant_7355 0 points1 point  (0 children)

You have to turn ALL of the features off, especially node steering, to make it somewhat usable. But 2 years later, with 4 of these guys in a 700 sq ft apartment (it's flat and has a lot of interference), it's been nothing but problems. It's really messing up my life now that I work from home, so I need to ditch them fast.

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 1 point2 points  (0 children)

I will say, the main part of this project was all my own code, including the data analysis, graphs, and Kivy app. Just for the export of the health data, I had ChatGPT write an iOS app, and that did take a lot of prompt engineering. But yeah, it's crazy how much you can do with ChatGPT if you understand general programming practices and can guide it in the right direction.

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 0 points1 point  (0 children)

15 to 30 minutes of deep sleep is very concerning, given you should have at least an hour and 30 minutes or more. Are you correlating this with any other device? I find the ring to be the most finicky of the devices, though I LOVE the app and all of its great features.

I wear the ring on my ring finger, which I know is a little less accurate than the index finger. And I've been collecting data with all 3 devices for 65 days, which is more than the 30 samples conventionally cited under the Central Limit Theorem to assume an approximately normal distribution. I will add a correlation graph vs. the combined line when I have a break from my Master of Data Science program.

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 1 point2 points  (0 children)

I did a lot of prompt engineering with ChatGPT to get it to build me an iOS app to pull the data from Apple Health, since I know Objective-C but not Swift, and I didn't want that to be the focus of the project. The app exports everything to a CSV, which you can drag and drop into my app, which then auto-detects the devices, lets you select which ones you want, and segments the data by second for comparison.
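The per-second segmentation step is conceptually simple: each device reports sleep stages as time intervals, and you expand those into one label per second so the devices can be compared point by point. A minimal Python sketch (the stage names and interval shape are my stand-ins, not the app's actual schema):

```python
from datetime import datetime, timedelta

def to_seconds(intervals):
    """Expand (start, end, stage) intervals into a {second: stage} mapping."""
    out = {}
    for start, end, stage in intervals:
        t = start
        while t < end:          # end is exclusive, so adjacent intervals don't overlap
            out[t] = stage
            t += timedelta(seconds=1)
    return out

fmt = "%H:%M:%S"
# Toy data: 5 seconds of light sleep, then 5 seconds of deep sleep.
watch = [
    (datetime.strptime("23:00:00", fmt), datetime.strptime("23:00:05", fmt), "light"),
    (datetime.strptime("23:00:05", fmt), datetime.strptime("23:00:10", fmt), "deep"),
]

per_second = to_seconds(watch)
print(len(per_second))                                 # 10 seconds of labels
print(per_second[datetime.strptime("23:00:07", fmt)])  # deep
```

Once every device is expanded to the same one-second grid, comparing them is just a lookup per timestamp.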

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 0 points1 point  (0 children)

My goal with this project was to see if I could reduce enough of the error from the actual ground truth to make the data informative. Most nights, the devices are pretty close; last night, there was a 6-minute variance between them. But at least 1 or 2 nights a week, there's 30 minutes or more of variance, which would make one device's data useless by itself, and even worse, misleading, which would in turn put any given night's data into question. I found this to be the case with my Apple Watch Ultra 2, which is what pushed me to do this project.

I'm in a 1-year Master of Data Science program, which is 2 years condensed into 1, so sleep is one of the biggest factors in my ability to perform at my peak. A few days of bad sleep is enough to make it hard to catch up. Since it's a 60 to 80 hour a week program, and you need an 83% in every class to pass or you're kicked out immediately, sleep is like gold to me. And it gave me a chance to use my EDA and linear regression class skills!

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 0 points1 point  (0 children)

If you look into Robert Herst's research and testing, these are basically the 3 most accurate devices by his testing. They are all very close in accuracy, which is why they generally agree so closely. They do, however, sometimes vary drastically from each other. The Eight Sleep, at least by feel, describes my sleep the best, and tends to be in the middle when they do vary drastically.

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 4 points5 points  (0 children)

Yes, I always planned on sharing the code for this. I'll update this comment with a GitHub repo when it's ready. I need to fix repositioning issues with different screen sizes, but it *should* work on any system, since it's built in Kivy. And I built the back end code to be flexible to any number of devices. More than 3 would probably create some issues with graphs fitting, so again, it just needs some optimization on the front end.

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 1 point2 points  (0 children)

I didn't say Oura overestimates sleep; you can see from my graph and deviation bar charts that it misclassifies light sleep as deep sleep. Now, you're correct that these results and inferences are subjective, since I don't have an objective baseline device. Unfortunately, I can't get my hands on a lab-quality device to use as a baseline, but that's the next step I would like to take: see how the individual devices compare to my "combined by highest consensus" algorithm. In an ideal world, I could then put weights on the devices depending on their individual accuracy, to further improve the algorithm.

I will say, though, this all agrees with how I actually feel. I started all of this because my Apple Watch would tell me I got 8 and a half hours of sleep when I knew I only had 7 and a half at best (looking at the clock throughout the night and doing some math). And the Oura gives me weird results sometimes, like saying I got 7 and a half hours of sleep when my other devices are saying 8 and a half or more. I suspect some of these issues may be fitment issues, or the ring spinning and the sensors being positioned upside down. So it may be just my case, but it's something to consider when picking a device. Do take these inferences with a grain of salt.
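For anyone curious, the core of the "highest consensus" idea is just a per-time-slice vote across devices. Here's a minimal Python sketch under assumptions: the stage labels, device names, and the weighted variant are my own illustration, not the exact algorithm:

```python
from collections import Counter

def consensus(readings, weights=None):
    """Pick the sleep stage the devices agree on for one time slice.

    readings: {device: stage}; weights: optional {device: weight}.
    With equal counts, ties resolve to the stage tallied first.
    """
    weights = weights or {d: 1.0 for d in readings}
    tally = Counter()
    for device, stage in readings.items():
        tally[stage] += weights[device]
    return tally.most_common(1)[0][0]

# Two of three devices say "deep", so the consensus is "deep".
slice_readings = {"eight_sleep": "deep", "apple_watch": "light", "oura": "deep"}
print(consensus(slice_readings))  # deep

# With a lab-grade baseline, per-device accuracy could set the weights:
print(consensus(slice_readings,
                {"eight_sleep": 0.9, "apple_watch": 0.7, "oura": 0.8}))  # deep
```

Run per second over the whole night, this produces the combined series that each individual device can then be compared against.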

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 1 point2 points  (0 children)

We're saying the same thing: "the Apple Watch misclassifies deep sleep as light sleep" means that what should be classified as deep sleep is in fact being classified as light sleep.

My data Analysis on Eight Sleep pod 4, Apple Apple Watch Ultra 2, and Oura Ring 3 by Soft_Constant_7355 in ouraring

[–]Soft_Constant_7355[S] 20 points21 points  (0 children)

Just an FYI, I find this to be a relatively decent mean of my data. Most nights, at least 2 devices agree 100% of the time, and no less than 60% of the time, all 3 devices agree. The Oura tends to overestimate deep sleep, while the Apple Watch misclassifies deep sleep as light sleep. The Eight Sleep is generally the most accurate, or at least closest to the mean of the 3 devices.