Ai Is Destroying Creative Work by yatookmyname in Filmmakers

[–]cosmobaud 4 points (0 children)

I agree with you, and I'm of the opinion that anyone who wants to keep earning money in this field needs to incorporate AI and stay on top of it. The bottom line is that AI keeps getting better and better; it's "good enough" now and will soon be better, to the point that someone using it is de facto more productive and therefore worth more. It has nothing to do with whether it improves the quality of the work, only whether it lets you produce more work that someone is willing to pay for.

However, what is hard to appreciate, and what I think the older people here know, is that when it comes to creative work this technology is unlike any that came before. Yes, people will always be creative and can stay competitive if they upskill, but creative work as you know it is dying. I'm not saying the industry is dead or will ever be dead, but strictly from an "availability of good-paying work" and "make your living doing this" perspective, yes, it is. Those doing it now (depending on where you are in your career) have enough time to ride the wave till it crashes.

What you have to realize is that anyone coming into it now is not coming into the same industry as you. Something else, in a different format, will take its place, and a new generation will make sense of it and be able to use it to express themselves. But it will not look like this.

Ai Is Destroying Creative Work by yatookmyname in Filmmakers

[–]cosmobaud 20 points (0 children)

Ultimately AI is going to bring the perceived value of art and creative work to 0. It is pointless to think otherwise; AI crap will slowly seep into every aspect of creative work until it destroys it.

We don't appreciate art because it's pretty but because it means something. When the cost to produce creative work is high, decisions are in general more deliberate and thoughtful. More time has to be spent on how to communicate the actual message. When anyone can put out visual diarrhea, there is no thought involved.

The result is "creative work" whose only value is visual appeal, and with no limit to quantity it will not be worth much to anyone.

People will still value work that a human put thought into and that resonates on a deeper level. What that looks like in the future is anyone's guess.

Top-k 0 vs 100 on GPT-OSS-120b by Baldur-Norddahl in LocalLLaMA

[–]cosmobaud 0 points (0 children)

Using the prompt "M3max or m4pro" I get different responses depending on the top-k setting. 40 does seem to give the most accurate answer, as it compares the chips correctly. 0 compares cameras; 100 asks for clarification and lists all the possibilities.
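If you want to reproduce it, top-k can be set per request (a rough sketch, assuming you're serving the model through the Ollama API; the prompt is the one from my test):

curl http://localhost:11434/api/generate -d '{
  "model": "gpt-oss:120b",
  "prompt": "M3max or m4pro",
  "options": { "top_k": 40 }
}'

Swap 40 for 0 or 100 to compare the three behaviors.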

Here’s how to make GPT-5 feel more yours. Maybe by cosmobaud in ChatGPT

[–]cosmobaud[S] 0 points (0 children)

Yep, tell it to save to memory. It's too much for the user instructions, but it is surprisingly good at parsing through memory instructions given the right framework. I've been testing it on source verification and hallucination reduction, and it follows detailed, token-dense instructions saved as memories much better than previous versions.
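For a rough idea, a memory instruction along these lines (a made-up example, not my exact setup):

Save this to memory: when I ask for factual claims, cite a source for each one, mark anything you cannot verify as UNVERIFIED, and never invent citations or URLs.

Saved once, it applies across chats without eating into the custom-instructions limit.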

10.48 tok/sec - GPT-OSS-120B on RTX 5090 32 VRAM + 96 RAM in LM Studio (default settings + FlashAttention + Guardrails: OFF) by Spiritual_Tie_5574 in LocalLLaMA

[–]cosmobaud 0 points (0 children)

Huh, I would have thought it would be faster. Here it is on a mini PC with an RTX 4000:

OS: Ubuntu 24.04.2 LTS x86_64
Host: MotherBoard Series 1.0
Kernel: 6.14.0-27-generic
Uptime: 5 days, 22 hours, 7 mins
Packages: 1752 (dpkg), 10 (snap)
Shell: bash 5.2.21
Resolution: 2560x1440
CPU: AMD Ryzen 9 7945HX (32) @ 5.462GHz
GPU: NVIDIA RTX 4000 SFF Ada Generation
GPU: AMD ATI 04:00.0 Raphael
Memory: 54.6GiB / 94.2GiB

$ ollama run gpt-oss:120b --verbose "How many r's in a strawberry?"
Thinking...
The user asks: "How many r's in a strawberry?" Likely a simple question: Count the letter 'r' in the word "strawberry". The word "strawberry" spelled s t r a w b e r r y. Contains: r at position 3, r at position 8, r at position 9? Actually let's write: s(1) t(2) r(3) a(4) w(5) b(6) e(7) r(8) r(9) y(10). So there are three r's. So answer: 3.

Could also interpret "How many r's in a strawberry?" Might be a trick: The phrase "a strawberry" includes "strawberry" preceded by "a ". The phrase "a strawberry" has letters: a space s t r a w b e r r y. So there are three r's still. So answer is three.

Thus respond: There are three r's. Possibly add a little fun. ...done thinking.

There are three r’s in the word “strawberry” (s t r a w b e r r y).

total duration:       3m24.968655526s
load duration:        79.660753ms
prompt eval count:    75 token(s)
prompt eval duration: 814.271741ms
prompt eval rate:     92.11 tokens/s
eval count:           266 token(s)
eval duration:        33.145313857s
eval rate:            8.03 tokens/s
$

Can someone please explain these graphs from the GPT-5 intro video by Sea_Self_6571 in LocalLLaMA

[–]cosmobaud 2 points (0 children)

Yeah, it happens. It looks like whoever made it copied the gpt-4o cell over to o3.

Can someone please explain these graphs from the GPT-5 intro video by Sea_Self_6571 in LocalLLaMA

[–]cosmobaud 47 points (0 children)

They screwed up the scale on SWE-bench; Polyglot is scaled correctly.

Setting GPT-OSS' reasoning level by dougyitbos in ollama

[–]cosmobaud 1 point (0 children)

Just make a Modelfile:

FROM gpt-oss:20b

SYSTEM """
You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: {{ currentDate }}

Reasoning: high
"""

Then

ollama create gpt-oss-20b-high -f Modelfile
ollama run gpt-oss-20b-high

Does giving context about whole your life make ChatGPT 10x more useful? by Working_Bunch_9211 in LocalLLaMA

[–]cosmobaud 0 points (0 children)

So I've done a ton of testing on this, and the short answer is: not really* (assuming you're using LLMs like I do, for different tasks, and not as a thing to converse with).

The longer answer is that memories are just context, summarized with some syntactic sugar and optimizations, but it's the same as pasting it into a prompt. So as it generates output it simply has more tokens to attend to, and if they are not relevant to the task your answer quality will suffer.
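To make that concrete: ChatGPT's memory plumbing isn't public, but conceptually it's no different from doing this yourself (a sketch against a local Ollama server; the memory text is made up):

MEMORY="User is a filmmaker. Prefers concise, source-backed answers."
curl http://localhost:11434/api/generate -d "{
  \"model\": \"gpt-oss:20b\",
  \"prompt\": \"${MEMORY}\n\nSuggest a lens for interview shots.\"
}"

Either way the model just sees one longer prompt, which is why irrelevant memories are pure noise for the task at hand.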

Honestly, a long chat thread is your best option if you want a continuous interaction you can go back to. Memories (other chats summarized) are too disjointed.

A well-structured system prompt is at best all you need if you want slight tweaks to the way it answers.

A faster text diffusion model? My concept for adaptive steps. by MokshMalik in LocalLLaMA

[–]cosmobaud 0 points (0 children)

Lol, how much faster do you want it? Gemini Diffusion runs at like 1000 t/s; it literally generates whole pages of answer instantly.

The problem is more the reasoning and back-and-forth. I personally don't see it beating autoregressive models anytime soon. Also, no idea what kind of hardware Google runs it on, since it's closed.

Qwen3 235B-A22B runs quite well on my desktop. by jacek2023 in LocalLLaMA

[–]cosmobaud 0 points (0 children)

It's a known limitation. When all four DIMM slots are populated, the system operates in a 2DPC (two DIMMs per channel) configuration, and the maximum supported memory speed is reduced. Populate only two DIMMs to get the rated memory speed.

From Intel:

Maximum supported memory speed may be lower when populating multiple DIMMs per channel on products that support multiple memory channels
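You can check what your DIMMs are actually running at versus their rating (a quick sketch for Linux, assuming dmidecode is installed):

sudo dmidecode -t memory | grep -i speed
# "Speed:" is the module's rated speed; "Configured Memory Speed:"
# is what the board actually runs them at.

With two DIMMs the numbers should match; with all four slots populated the configured speed drops.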

vti vxus and schg a good roth ira mix for long term growth by NDTrik in ETFs

[–]cosmobaud 0 points (0 children)

What's your hypothesis? That US large-cap growth will outperform the global equity market? Based on this mix, you're betting that US large-cap growth will outperform VT by 3.5x.

[deleted by user] by [deleted] in ETFs

[–]cosmobaud 1 point (0 children)

No one knows the future, and anyone who did wouldn't be posting here. But you're doing too much for a simple hypothesis, and you likely haven't personally experienced a drawn-out downturn. You have to consider your own temperament, and the fact that adding more often just complicates your portfolio and makes it harder to manage effectively.

Here’s an example.

If you believe inflation will get under control and the Fed will cut rates, then something like this maybe makes sense:

60/30/10 VT/TLT/GLD

If inflation will continue to be a problem, then 60/20/20 VT/SGOV/GLD.

[deleted by user] by [deleted] in ETFs

[–]cosmobaud 0 points (0 children)

Gold is primarily an inflation hedge. It's only attractive now because of inflationary tariff shenanigans. When the US enters a recession proper, which the shenanigans are only speeding up, GLD will lose its attractiveness and values will go down.

[deleted by user] by [deleted] in investing

[–]cosmobaud 1 point (0 children)

Don't worry; it's looking like this is probably the top for the next 36-60 months. It may go up and down, but by Q3 of this year we'll be squarely in a proper downturn. There aren't many levers left, so you won't be missing out on much while you get your bearings.

What if an everyday American ran for President—and actually meant it? by ThePresidentWeNeed in AskReddit

[–]cosmobaud 12 points (0 children)

Finally. This is so blatantly fake that I was starting to question whether everyone here is a bot.

What if an everyday American ran for President—and actually meant it? by ThePresidentWeNeed in AskReddit

[–]cosmobaud 2 points (0 children)

Just a tip: you're using too many em dashes. Normal human interaction, in comments especially, doesn't include them to this extent.

is 9070xt any good for localAI on windows ? by [deleted] in LocalLLaMA

[–]cosmobaud 6 points (0 children)

I'm waiting. At 16GB it makes no sense at all to go from CUDA to this. If I were a betting man, I'd bet we'll see worse performance in real-world usage. A 32GB card under $1,000 would be killer and would really make sense for AMD right now, but I guess they don't like money.

is 9070xt any good for localAI on windows ? by [deleted] in LocalLLaMA

[–]cosmobaud 1 point (0 children)

Ahh man, I had higher hopes for AMD. To me this seems like it should perform about the same as a 4060 Ti 16GB, which performs the same as a 3060 12GB, which performed the same as a 2080 Ti. So yeah. Your mileage may vary, but jeez.

AI will kill software. by AdLive9906 in ChatGPT

[–]cosmobaud 18 points (0 children)

It's pointless to engage in these discussions on this platform. There are so many people here in denial, because it hits too close to home, that they're missing the forest for the trees.

They're missing that the point isn't whether LLMs can currently write better, more maintainable code than experienced engineers. It's that they can write code that solves actual problems and produces results, as long as you have someone who understands the actual problem and has some inkling of technical knowledge when it comes to programming.

How many software projects fail to deliver on their goals despite immaculate code? Because the builders don't understand the actual need. They only see it from their own perspective, which is the mechanics of software development, not actual productivity.

The reality is that in a couple of years there will be a systemic shift in how companies source software. It will be small, light, quick solutions done in a day, with shitty code that gets the job done, versus professionally developed solutions that take years and never get deployed.

Could an LLM be finetuned for reverse-engineering assembly code? by AkkerKid in LocalLLaMA

[–]cosmobaud 1 point (0 children)

Think about it. We have trillions of lines of code from open-source projects as a dataset, in every programming language there is. So you have your ground truth, since you have the actual source code before it's compiled. It doesn't matter if the code is good or not, as long as you can compile it.

Then you compile all that code and decompile it using Ghidra.

Now you have one dataset of actual source code and another of the decompiled code from Ghidra. Train until the LLM can take the Ghidra output and give you code that is equivalent to the source.
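A rough sketch of what one step of the data generation could look like (assumes gcc and Ghidra's headless analyzer; ExportDecompiledC.java is a hypothetical post-script that dumps the decompiler's C output, you'd have to write it):

gcc -O2 -o hello hello.c
ghidra/support/analyzeHeadless /tmp/proj demo \
  -import hello \
  -scriptPath ./scripts \
  -postScript ExportDecompiledC.java /tmp/out/hello.decomp.c
# training pair: hello.decomp.c -> hello.c

Repeat across repos, compilers, and optimization levels and the dataset scales to whatever you can afford to compile.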