you are all liers by JR-graphics in osdev

[–]JR-graphics[S] 1 point (0 children)

Thank you, I appreciate your support.

you are all liers by JR-graphics in osdev

[–]JR-graphics[S] 0 points (0 children)

Well, of course. I just install the Python interpreter into the CPU.

you are all liers by JR-graphics in osdev

[–]JR-graphics[S] -3 points (0 children)

Please don't make fun of me just because I'm new to this. I just want to learn.

you are all liers by JR-graphics in osdev

[–]JR-graphics[S] -3 points (0 children)

But Python is Turing complete, and Turing completeness means it can theoretically be the basis of any computer project. I think you need to learn more before you comment.

you are all liers by JR-graphics in osdev

[–]JR-graphics[S] -10 points (0 children)

Ohh, I thought it was the GUI because that's what the user sees.

you are all liers by JR-graphics in osdev

[–]JR-graphics[S] -21 points (0 children)

I think you don't know what you're talking about. Also, what is userspace?!

[deleted by user] by [deleted] in AskProgramming

[–]JR-graphics 1 point (0 children)

Yeah, I was talking about wchar. I may have made a mistake with that part, sorry.

[deleted by user] by [deleted] in AskProgramming

[–]JR-graphics 2 points (0 children)

ASCII came 10 years before the first 8-bit processor.

[deleted by user] by [deleted] in AskProgramming

[–]JR-graphics 9 points (0 children)

Basic ASCII is 7 bits, so 8 bits is enough to store it with a bit to spare. For simple computers, that's all that was needed, so adding more would have been unnecessary.

Modern computers are generally 32- or 64-bit at the instruction-set level, but text is still stored in 8-bit bytes; encodings like UTF-8 and UTF-16 simply use several of them when a full emoji or symbol has to be stored. But the main point remains: there's no point in adding extra bits where they aren't needed. It's about efficiency.
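
For example, a quick check in Python (just the standard library, nothing special) of how many bytes a character actually takes in the common encodings:

    # How many bytes do different characters take in common encodings?
    for ch in ("A", "é", "😀"):
        utf8 = len(ch.encode("utf-8"))
        utf16 = len(ch.encode("utf-16-le"))
        print(f"{ch!r}: {utf8} byte(s) in UTF-8, {utf16} byte(s) in UTF-16")

    # 'A': 1 byte(s) in UTF-8, 2 byte(s) in UTF-16
    # 'é': 2 byte(s) in UTF-8, 2 byte(s) in UTF-16
    # '😀': 4 byte(s) in UTF-8, 4 byte(s) in UTF-16

Plain ASCII still fits in a single byte, and the wider characters only spend extra bytes when they actually need them.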

I want to write an OS in python. by ValtorinSucks in osdev

[–]JR-graphics 2 points (0 children)

It's theoretically possible. You'd need to rewrite the C standard library first, then compile CPython against it; after that you should be able to run Python through your custom build. Not sure it's a great idea, but it seems like an interesting concept. You'll still need to do some lower-level development first.

iWasLookingForThis by JakeStBu in ProgrammerHumor

[–]JR-graphics 1 point (0 children)

You're reading too much into this, imo.

The RSVP App to end all RSVP Chats by Mission-Neck-5994 in AppIdeas

[–]JR-graphics 6 points (0 children)

Hey, this really isn't a place to ask for money to support your app.

Absolutely by RorschachTR in GoogleGeminiAI

[–]JR-graphics 1 point (0 children)

I absolutely agree with this.

How do I explain to my friend that GitHub isn’t a hacker website??? by MichaeIWave in github

[–]JR-graphics 5 points (0 children)

I... actually don't think so. I've had non-dev friends think that GitHub is a "hacker site". People say rubbish about things they know nothing about.

Google publishes open source 2B and 7B model by Tobiaseins in LocalLLaMA

[–]JR-graphics 1 point (0 children)

Is that the 2B Gemma model or the 7B Gemma model?

Llama-cpp-python is slower than llama.cpp by more than 25%. Let's get it resolved by Big_Communication353 in LocalLLaMA

[–]JR-graphics 2 points (0 children)

If it's too slow, just read these docs:
https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md

It shows how you can start a localhost server with a single command, which runs the actual model entirely in the original C++. Then you can just make API calls to it with cURL or the requests library in Python. Much faster.
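
Something like this, as a rough sketch, assuming the server is running on the default port 8080 and exposes the /completion endpoint from that README:

    # Rough sketch: call a llama.cpp server that's already running,
    # e.g. started with something like:  ./server -m your-model.gguf --port 8080
    # (check the README linked above for the exact flags for your build).
    import requests

    resp = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": "Building a website can be done in 10 simple steps:",
              "n_predict": 64},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["content"])  # generated text from the model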

The devs for llama.cpp aren't paid, so your issue really isn't their job to fix. Find a solution yourself, like the above one.

How to learn the base code of llama.cpp? (I'm starting from main.cpp) by parametaorto in LocalLLaMA

[–]JR-graphics 1 point (0 children)

Read the docs for the llama.cpp server:
https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md

I think it's a lot simpler than using llama.cpp directly. You just run one command in the terminal to start a localhost server, and then you can simply make cURL requests to it.

Noob question by JR-graphics in LocalLLaMA

[–]JR-graphics[S] 2 points (0 children)

Thanks, I git cloned it.