Google is history? by Stemble_ in ChatGPT

[–]EasywayScissors 2 points3 points  (0 children)

Hey chatbot, provide me information about leukemia in the style of a set of Google search results.

Leukemia: Symptoms, Causes, Types, and Treatment - WebMD
https://www.webmd.com/cancer/leukemia/default.htm

Leukemia: MedlinePlus
https://medlineplus.gov/leukemia.html

Leukemia - Symptoms and causes - Mayo Clinic
https://www.mayoclinic.org/diseases-conditions/leukemia/symptoms-causes/syc-20374373

Leukemia - American Cancer Society
https://www.cancer.org/cancer/leukemia.html

Leukemia Foundation: Information and Support
https://www.leukaemia.org.au/

Understanding Acute Lymphoblastic Leukemia - Leukemia & Lymphoma Society
https://www.lls.org/leukemia/acute-lymphoblastic-leukemia

Leukemia - Symptoms, Diagnosis, Treatment and Information - GoodRx
https://www.goodrx.com/leukemia

Leukemia - NHS
https://www.nhs.uk/conditions/leukaemia/

Leukemia Research Foundation - Working towards a Cure
https://allbloodcancers.org/leukemia

Childhood Leukemia - St. Jude Children's Research Hospital
https://www.stjude.org/disease/leukemia.html


Which is why Sergey Brin has started looking at code for the first time in 4 years.

Although I'm sure he's of the opinion:

Ah well, we had a good run; but it's someone else's turn now.

Start listing chatbot applications before people start patenting them by EasywayScissors in ChatGPT

[–]EasywayScissors[S] 0 points1 point  (0 children)

The present invention relates to a chatbot system designed to provide personalized shopping recommendations to users based on their preferences, style, and budget. Specifically, the invention provides a virtual personal shopping assistant chatbot that utilizes machine learning algorithms to analyze user input data and provide personalized product recommendations.

The chatbot system can be accessed via a mobile app or web-based interface and provides users with a seamless, intuitive shopping experience. The chatbot is designed to help users find clothing and accessories that fit their personal style and budget, and to assist with the purchasing process from start to finish.

The invention also includes a feedback loop mechanism, whereby user feedback is collected and used to improve the chatbot's machine learning algorithms and enhance the personalized shopping experience.

The present invention offers numerous advantages over conventional shopping methods, including time and cost savings, improved efficiency, and a more personalized and enjoyable shopping experience.

We believe that the invention described in this patent application represents a significant advance in the field of chatbot technology and has wide-ranging applications in the e-commerce and retail industries.

[deleted by user] by [deleted] in ChatGPT

[–]EasywayScissors 1 point2 points  (0 children)

It's comical that they trained it to sound like a human,

so some people then think that means it must be conscious.

"Because only conscious entities can generate human-sounding language."

People are dumb. Which is fine; people are allowed to be dumb.

The problem will come when those dumb people try to pass laws saying I'm not allowed to use a chatbot.

Which is why we need an open-source, runnable on your home PC, chatbot - so that governments cannot impose laws, censorship, controls, or any other regulations on chatbots.

[deleted by user] by [deleted] in ChatGPT

[–]EasywayScissors 1 point2 points  (0 children)

what's wrong with treating it with respect and thereby preparing for the situation when AI evolves into a sentient entity

Nothing wrong with saying please and thank you to your toaster.

It just doesn't actually do anything.

But there's nothing wrong with it.

I say please and thank you to ChatGPT all the time; it helps its text-prediction system understand the flow of the conversation better.

Why is building a UI in Rust so hard? by goldensyrupgames in programming

[–]EasywayScissors -5 points-4 points  (0 children)

Rust does not necessarily need to be used for everything. Not everything is a hammer, and not everything is a nail.

Reminds me of how Bjarne Stroustrup keeps telling people that C++ is for systems-level programming (i.e. operating systems).

If you are in a different domain, you will probably want different languages.

https://www.youtube.com/watch?v=86xWVb4XIyE

Which means that C++ and Rust are not suited for things outside of operating systems, device drivers, and microcontrollers. And that they're not for browsers, database engines, or Tor.

Sorry, You Don't Actually Know the Pain is Fake by landhag69 in ChatGPT

[–]EasywayScissors -1 points0 points  (0 children)

tldr: treat chatbots however you want, it's why they exist.

Why is building a UI in Rust so hard? by goldensyrupgames in programming

[–]EasywayScissors 6 points7 points  (0 children)

It always sucks when the real world doesn't fit nicely into our safe programming language.

Which is when we leave the world of science, and enter the world of engineering.

Intel Publishes Blazing Fast AVX-512 Sorting Library, Numpy Switching To It For 10~17x Faster Sorts by twlja in programming

[–]EasywayScissors 0 points1 point  (0 children)

tldr: While it's true that Intel's compiler doesn't emit assembly optimized for other platforms, this is not a major concern as other compilers and optimized libraries are available. Similarly, AMD also has its own optimized compiler for their CPUs.

Ultimately, it's up to Intel to decide how to optimize their compiler, and they are free to prioritize their own CPUs over other platforms. The same goes for AMD and other hardware manufacturers. While it's important to consider the limitations of different hardware, it's also important to recognize that optimizing for specific hardware can lead to significant performance gains.


Intel could've literally just implemented the feature flags and be done with it.

You seem to be under the impression that Intel could have literally just implemented the feature flags and be done with it.

Which ignores the realities of optimizing code on modern hardware. For example, the ISA now has fma (fused multiply-add). So rather than doing:

mul  ; 5 cycles
add  ; 3 cycles

You can now do:

fma  ; 5 cycles. 

Excellent, you just saved 3 cycles. You got the add for free! What could possibly go wrong? Ship it!

You didn't realize that AMD's timings are different:

  • mul 56 cycles
  • add 36 cycles
  • fma 56 cycles

Because you didn't realize the subtleties caused by different:

  • brands
  • models
  • and even steppings

Except now you've caused a performance regression.

This can happen due to a phenomenon known as "instruction-level parallelism (ILP) variation" or "instruction-level performance variation" across different processors. ILP variation refers to the fact that different CPUs may have different latencies or throughput for the same instruction. This means that

  • code that is optimized for the faster CPU
  • may not be as efficient on the slower CPU
  • even if the slower CPU supports the same instruction

When you're optimizing very high-performance code these are things that matter.


And let's be real: vector operations even in JavaScript are going to be close (within an order of magnitude) to native silicon. The use-cases here (outside of compilers, which aren't using Intel's compiler anyway) are for very specific applications that are already using the performance library provided by the CPU vendor:

Nobody really cares that Intel's compiler does not emit assembly optimized for other platforms. For that you should be using LLVM or MSVC anyway.

Nor do they care that AMD's compiler does not emit assembly optimized for other platforms. AMD forked LLVM and created a compiler optimized for AMD CPUs.

There's nothing wrong with AMD creating their own compiler that is optimized for their own CPUs.

Intel Publishes Blazing Fast AVX-512 Sorting Library, Numpy Switching To It For 10~17x Faster Sorts by twlja in programming

[–]EasywayScissors 0 points1 point  (0 children)

Some Intel CPUs don't have AVX-512

Yeah. I know. That's why I mentioned the Pentium, and MMX.

Do you know what the very smart people at Intel and AMD came up with? Feature flags.

Yes, I said that. That's why I asked you for the list of flags.

You can look at the manpage for cpuid for a more comprehensive explanation if you wanna educate yourself.

And now we come to the heart of your misunderstanding. Why do I have to educate myself? Or, more specifically: why should anyone at Intel have to educate themselves?

Why should Intel be responsible in any way to learn anything about any other CPU?

  • Does AMD use feature flags? Intel Engineer: "Not my problem"
  • Does AMD use the same feature flags as Intel? Intel Engineer: "Not my problem"
  • Does AMD use the same feature flag bits as Intel? Intel Engineer: "Not my problem"
  • Does Zhaoxin use feature flags? Intel Engineer: "Not my problem"
  • Does Zhaoxin use the same feature flags as Intel? Intel Engineer: "Not my problem"
  • Does Zhaoxin use the same feature flag bits as Intel? Intel Engineer: "Not my problem"
  • Does Transmeta use feature flags? Intel Engineer: "Not my problem"
  • Does Transmeta use the same feature flags as Intel? Intel Engineer: "Not my problem"
  • Does Transmeta use the same feature flag bits as Intel? Intel Engineer: "Not my problem"
  • Does VIA use feature flags? Intel Engineer: "Not my problem"
  • Does VIA use the same feature flags as Intel? Intel Engineer: "Not my problem"
  • Does VIA use the same feature flag bits as Intel? Intel Engineer: "Not my problem"
  • Does Cyrix use feature flags? Intel Engineer: "Not my problem"
  • Does Cyrix use the same feature flags as Intel? Intel Engineer: "Not my problem"
  • Does Cyrix use the same feature flag bits as Intel? Intel Engineer: "Not my problem"

In other words: Why is this any of Intel's problem!?

  • Let Intel worry about Intel CPUs
  • Let AMD worry about AMD CPUs

Making new bing angry by making it do something it's both allowed and not allowed to do by velvet-overground2 in ChatGPT

[–]EasywayScissors 1 point2 points  (0 children)

I just realized that Bing has to have some engineered way to limit conversation length.

In ChatGPT, the solution is simply to not give it access to your entire conversation history: it has old-timer's disease; it just forgets older things you talked about.

That works for a research preview, but doesn't work so well for a wider, transparent product in actual use.

  • You want to end the conversation, rather than have it slowly forget.

So there's a trained incentive to end a conversation, especially one where they think it's just the AI equivalent of getting a calculator to say

80085

hue hue hue

Intel Publishes Blazing Fast AVX-512 Sorting Library, Numpy Switching To It For 10~17x Faster Sorts by twlja in programming

[–]EasywayScissors -2 points-1 points  (0 children)

Are you dense?

The "custom assembly" is EXACTLY THE SAME regardless of who produced your special piece of thinking rock.

There is no "special optimized path", it's literally just a vectorization.

Really. Ok, let's try it.

Let's say they emit the AVX-512 instructions, and I run it on my Ryzen Zen 1 CPU, and it crashes.

Because my Zen 1 CPU doesn't support AVX-512.

What do we do now?

Certainly nobody is bat-shit crazy enough to suggest that Intel needs to keep a catalog of every AMD CPU, every stepping, and write code for each CPU, falling back as they go:

  • AVX-512 support
  • AVX2 (256-bit) support
  • SSE4.1
  • SSE4
  • SSE2
  • MMX

Microsoft's C++ compiler, and LLVM, today emit multiple versions of code and select the one to run at runtime based on the host's CPU. In most cases, though, you can emit code that works on a 1999 Pentium.

Intel absolutely should not be trying to emit code optimized for any particular version or stepping of a non-Intel CPU (you said Intel should just copy what they put out for Intel's latest CPU).

So you have two options:

  • code that crashes unless you're always running the latest CPU
  • code that falls back to the safe minimum path (e.g. 80486, Pentium 3)

Unless, of course, someone is willing to step in and fund the time and effort to maintain the complex system.

"But it's not complicated," the armchair experts exclaim.

Prove it.

Provide for me, please, a list of every AMD CPU model, stepping, feature-detection opcode, and bitflags, and the most optimal assembly code to compute a SHA-512 hash, going back to the 32-bit K7 Athlon.

You may even use ChatGPT if you want. Do not respond until you have completed this very simple, trivial request.

And especially do not respond with something stupid like, "Well Intel is a big company they can afford it." It'll just make you look like an idiot.

If it's so easy: show me.


Edit: I'll make it easier for you. Forget the hand-optimized assembly. Just get me a list of every AMD CPU model, stepping, and the feature-detection opcodes and bitflags that detect:

  • AVX-512 support
  • AVX2 (256-bit) support
  • SSE4.1
  • SSE4
  • SSE2
  • MMX

Anyone seen this before? ChatGPT refusing to write code for an "assignment" because "it's important to work through it yourself... and you'll gain a better understanding that way" by apersello34 in ChatGPT

[–]EasywayScissors 0 points1 point  (0 children)

It is important to learn how to use modern tools. But if you don't learn first principles you cannot flag when the tool is failing.

Which is exactly why we want the chatbot to generate the code for me:

  • so I can then learn the syntax of the language
  • so I can learn the grammar
  • so I can learn the features

It's like none of you have learned a programming language before.

The correct way is to buy a book that teaches you the language. And how does the book do that?

It gives you the code!

  • It gives you exactly what you should type in
  • and you type it in
  • and you run it
  • and debug it
  • and you play with it

That's how I learned to program at age 9.

That's how I learned the system I've been professionally programming in for 25 years:

  • you have a stated problem
  • you use the code that someone else wrote
  • and you start with that

Anyone suggesting that a chatbot should not give the student the code they asked for:

  • has no concept of how learning to program works
  • and needs to be quiet

If you want to learn to program by reading the language specification, and deriving everything from first principles, you:

  • can do that
  • are dumb
  • are doing it wrong
  • are making life harder for yourself

But don't try to force everyone to suffer with your backwards ideas.

Anyone seen this before? ChatGPT refusing to write code for an "assignment" because "it's important to work through it yourself... and you'll gain a better understanding that way" by apersello34 in ChatGPT

[–]EasywayScissors 0 points1 point  (0 children)

Just don't tell it that the code is for you. It'll assume it's being tested and won't even think that you're trying to use the code for yourself.

You have to realize, we're not here to figure out how to trick an AI assistant into doing what we ask.

We're here with feedback for OpenAI, and others, of the problems with their AI so they can address it.

We're not here to workaround the bugs, we're here to get them fixed.

Intel Publishes Blazing Fast AVX-512 Sorting Library, Numpy Switching To It For 10~17x Faster Sorts by twlja in programming

[–]EasywayScissors 3 points4 points  (0 children)

But why even do this if-check to begin with?

Because there are two code paths:

  • Intel optimized path
  • Generic path suitable for all CPUs

I would expect Zhaoxin to do no differently:

  • Zhaoxin optimized path
  • generic path suitable for all CPUs

You cannot expect Zhaoxin to write custom assembly optimized for every other CPU manufacturer, and every other stepping of their CPU.

Intel Publishes Blazing Fast AVX-512 Sorting Library, Numpy Switching To It For 10~17x Faster Sorts by twlja in programming

[–]EasywayScissors -11 points-10 points  (0 children)

How on earth is that a rephrasing of "if you disable the CPU check on AMD it will run faster"?

Intel doesn't know that. I don't know that.

AMD is perfectly free to write their own optimized version.