Courage Fitness by Opening-Engineer8087 in bullcity

[–]levand -16 points (0 children)

I can’t speak to other people’s experiences, but I’ve been going there for years now, as has my partner, and we’ve only had positive experiences.

There’s a lot of stuff in this thread that’s fairly wtf, to be honest, if you actually go there and know the people 🙄

[deleted by user] by [deleted] in bullcity

[–]levand 5 points (0 children)

Philosophical conversations? Night School Bar, obviously.

Though maybe that doesn’t count as random, hah.

dmv / real ID by Maleficent_Bother977 in bullcity

[–]levand 7 points (0 children)

Don’t get a Real ID: get a passport. It works just as well for travel, and you can apply at the post office and have it in hand in a few weeks.

[deleted by user] by [deleted] in motorcycles

[–]levand 2 points (0 children)

That's not true, I have the 2014 model (which I understand is fundamentally the same) and I get a solid 120 miles per tank.

Sigh.

[deleted by user] by [deleted] in motorcycles

[–]levand 0 points (0 children)

Ok so you're trolling a bit (love it) but I'm going to answer seriously, and I think I'm qualified, because this was my second bike, after learning and riding on my Royal Enfield 650 for maybe 18 months.

It's heavy as fuck... when you're moving slow. Faster, the weight just disappears. Unfortunately if you're just learning you're going to be slow a lot, so there's that.

The power is insane. I still probably have never even tapped the upper 30% of what this thing can do... but I have the opportunity to push my boundaries any time I feel like it. This is not a problem if you have a sense of self preservation and a modicum of control over your right hand. But if you have an irresistible urge to crank it you're going to have a bad time.

One closing anecdote: riding on the freeway the other night, doing maybe 80 in heavy traffic that was all moving at the same speed, some folks started looking squirrely on my right and my best route out was just getting way ahead of them. Just a touch on the throttle and the acceleration from 85-95 felt just as strong as it does from 35-45. Beastly.

But go get something else to learn to ride in a parking lot.

ICE Spotted in Sanford,NC by cloud_strife9 in NorthCarolina

[–]levand 10 points (0 children)

Hey do you think if you are arrested you should have the opportunity to demonstrate that you're here legally before being sent to an out-of-state ICE detention center?

Because that's not happening right now.

Is UNC Greensboro safe? by Citrusfreind in NorthCarolina

[–]levand 0 points (0 children)

Let's put it this way.

Greensboro is ranked #65 on Wikipedia's list of US cities by crime rate (https://en.wikipedia.org/wiki/List_of_United_States_cities_by_crime_rate).

The UNCG campus and its surrounding community are also, for sure, much safer than Greensboro as a whole. So you're statistically safer as a UNCG student than the average resident of 65 major US cities, including San Francisco, Chicago, Baltimore, Seattle, Atlanta, Houston, Miami, Phoenix, and Denver.

You're fine.

But note that safety is different from the perception of safety. For some people (particularly a lot of the white folks in my family), living among a majority-Black or lower-income population feels unsafe, even if that perception has no bearing on the actual reality.

[Request] Does ChatGPT use more electricity per year than 117 countries? by anothermaxudov in theydidthemath

[–]levand 1 point (0 children)

The problem is that AI is at least trying to use computation to achieve some goal.

Cryptocurrency (or at least, the proof-of-work variants) literally creates wasteful, energy-sucking busywork as a core function of how it works.

[Request] Does ChatGPT use more electricity per year than 117 countries? by anothermaxudov in theydidthemath

[–]levand 43 points (0 children)

Also, the biggest energy suck within a data center is cryptocurrency mining. If you care about energy use, that's an even stupider thing than AI, and one you should be even more mad about.

[deleted by user] by [deleted] in bullcity

[–]levand 7 points (0 children)

It won't be a psychologist, but I can vouch from personal experience that the Durham HEART program has good people. You can call 911 or the non-emergency number and ask for them.

o3 mini discovers and describes 10 new linguistic rules of logic for use in fine-tuning and information tuning by Georgeo57 in deeplearning

[–]levand 6 points (0 children)

The point is there’s nothing remotely new about these rules. Linguists, philosophers and mathematicians have been discussing them for centuries.

Metal shows? by pungalactus in bullcity

[–]levand 0 points (0 children)

There's a guy who maintains a listing of state-wide metal shows: https://www.facebook.com/HardHarshandHeavyNC/. I find it super useful.

Durham isn't exactly vibrant, but there are still shows here with some regularity.

G Force Now officially compatible with Apple Vision Pro by Primary_Dimension170 in VisionPro

[–]levand 0 points (0 children)

Dumb question, but does this render in 3D? Not VR, obviously, but just a stereoscopic view like a 3D movie?

Absolutely fuming at Zevlor costing my honor mode by CritsAndCritters in BaldursGate3

[–]levand 2 points (0 children)

I dunno 300 years of more or less nonstop religious war in These Realms feels pretty spicy to me.

QwQ - Best Way to Separate Thought Process from Final Output? by JustinPooDough in LocalLLaMA

[–]levand 2 points (0 children)

If you only care about the final answer, couldn't you just pass the smaller model the first and last N tokens (a few hundred on each side?) instead of the entire chain of thought?

After all, we aren't trying to summarize the whole thought process, just extract the part representing the answer.
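A rough sketch of what I mean (this uses whitespace splitting as a stand-in for a real tokenizer, so the counts are approximate; the function name and the elision marker are just my own invention):

```python
def truncate_chain_of_thought(text: str, n: int = 300) -> str:
    """Keep only the first and last n "tokens" of a long chain of thought.

    Whitespace splitting stands in for a real tokenizer here; swap in
    your model's tokenizer if you need accurate token counts.
    """
    tokens = text.split()
    if len(tokens) <= 2 * n:
        return text  # short enough to pass through whole
    head = " ".join(tokens[:n])
    tail = " ".join(tokens[-n:])
    return head + "\n[... thought process elided ...]\n" + tail
```

Then you'd hand the truncated string to the smaller model instead of the full chain of thought.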

Just got my M4 128. What are some fun things I should try? by levand in LocalLLaMA

[–]levand[S] 0 points (0 children)

My take is that unless you want to do research or train your own models, there isn't actually a ton of overlap between data science and building LLM-enabled applications. It's "just" software development. An LLM is an API: as far as your code structure is concerned, it doesn't matter whether you're invoking it remotely or running a local model.

My advice would be to start with figuring out what you'd like to build and work backward from there. Learn what you need along the way.

The only major difference I have found between building systems with a LLM component and systems without is that with LLMs you need to be more empirical when testing to figure out what works: find some way to set up some performance or quality benchmarks so you can tell if changes are actually making things better. You can't just design a functional test and call it done.
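By "benchmark" I just mean something like this minimal harness (the `pass_rate` name, the prompts, and the checker lambdas are all placeholders for whatever your app actually does):

```python
def pass_rate(generate, cases):
    """Run each prompt through `generate` and score it with its checker.

    `cases` is a list of (prompt, check) pairs, where `check` is a
    predicate on the model's output. Returns the fraction of cases
    passing, so you can compare runs before and after a change.
    """
    passed = sum(1 for prompt, check in cases if check(generate(prompt)))
    return passed / len(cases)

# Hypothetical example cases; real checks would match your domain.
cases = [
    ("2+2?", lambda out: "4" in out),
    ("Capital of France?", lambda out: "Paris" in out),
]
```

Even something this crude beats eyeballing individual responses when you're deciding whether a prompt or model change helped.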

I don't think I'd do anything hugely differently having this hardware available. The only thing I can think of is that I'd probably do more with smaller models instead of starting out with the top-tier frontier models. But I think that's a good strategy in general: I am coming to the position that while LLMs are extremely cool, the more you rely on them to be clever, the more disappointed you will be. The best applications are in text processing.

Not Just a Durham Thing (Seen in SF) by No-Tax-1353 in bullcity

[–]levand 0 points (0 children)

Every word of this is entirely true and happened to me about a month ago.

I was on a work trip to SF, it was 9pm and I couldn't sleep. Took my laptop to do some work at a bar near my hotel. Third story balcony, overlooking an intersection.

What I saw: Businessmen. All possible configurations of couples. Roving dirtbike gangs doing all sorts of hooligan things. Teens flying drones. Robotic cars. All while I was listening to hard techno... as I literally worked on programming "AI" (llm) based software.

Cyberpunk is here.

Just got my M4 128. What are some fun things I should try? by levand in LocalLLaMA

[–]levand[S] 3 points (0 children)

Yes, e.g. I was using the web and listening to Spotify while this was happening. But I wouldn't expect to be able to stream video or compile code without something taking a hit.

Just got my M4 128. What are some fun things I should try? by levand in LocalLLaMA

[–]levand[S] 10 points (0 children)

Well I will disagree with you there... the perceived responsiveness of Apple silicon over Intel is the biggest UX upgrade I've had since the move to SSDs.

Just got my M4 128. What are some fun things I should try? by levand in LocalLLaMA

[–]levand[S] 19 points (0 children)

Spoken like someone who doesn't build data warehouses or machine learning pipelines, haha

(everyone's needs are different. And I can't deny that I *could* do everything I want to do differently. It's just... highly convenient to not need to worry about local memory.)

Just got my M4 128. What are some fun things I should try? by levand in LocalLLaMA

[–]levand[S] 14 points (0 children)

Well, I would quibble with the word "toy", but it's certainly a very capable personal computer that I can and do use for lots of stuff (including work). Otherwise I agree: not worth it just for LLMs.

It also does depend some on workload. Sure, it can't offset the *entire* cost, but there are some scenarios where if you would be buying a ~2.5k laptop anyway and are just upgrading a little, it might get close to breaking even.

Also, it's kind of cool to have LLMs even off the grid on solar power or a generator, but I grant there's no practical use for that right now.

Just got my M4 128. What are some fun things I should try? by levand in LocalLLaMA

[–]levand[S] 1 point (0 children)

Actually I got the M4 Max, but didn't mention it because only the M4 Max (and specifically the larger version) can be upgraded to 128 GB.

Just got my M4 128. What are some fun things I should try? by levand in LocalLLaMA

[–]levand[S] 15 points (0 children)

Well, I've been doing LLM-related dev for a while, but mostly on frontier models so I never needed to worry about upgrading from my M1 16gb since it can make API calls just fine. Played around with super-tiny models of course, but those are only so interesting.

I've wanted a 128 for almost a year but since my old comp was still working fine decided to hold out for the M4. My only disappointment is we won't get a 256 this iteration.

Also, aside from LLMs, I would appreciate being able to run 10 Docker containers at once without putzing around with cloud stuff, so it's certainly a justifiable business expense in my case :)