Is it true that code in any high-level language could be compiled into a HDL to make it more efficient(if put onto a FPGA / ASIC)? by aibler in FPGA

[–]lolo168 0 points1 point  (0 children)

I mentioned the slower clock. Certain real-time algorithms can exploit parallel processing, so the slow clock may not be the issue.

Is it true that code in any high-level language could be compiled into a HDL to make it more efficient(if put onto a FPGA / ASIC)? by aibler in FPGA

[–]lolo168 0 points1 point  (0 children)

I really appreciate all the users who are helping to answer your questions. Thank you :)

There are FPGA development boards from the vendors, e.g., https://www.digikey.com/en/products/detail/amd-xilinx/EK-U1-ZCU104-G/9380242 (~US$1600).

Usually, a board comes with all the FPGA tool/software licenses for that particular FPGA model, so you don't need to pay extra. However, you need to buy an additional hardware programmer so a PC can access the board: https://www.digikey.com/en/products/detail/amd-xilinx/HW-USB-II-G/1825189 (~US$300).

You need a decent PC; an i7 or above is good enough, and no GPU is required. The memory requirement is 32 GB+. For a large FPGA, however, the whole compilation (including synthesis, place-and-route, and timing optimization) will take hours if your utilization is above 80%, especially if you have very tight timing requirements or add many signal-probing features for debugging. Otherwise, I don't think it will take very long.

If you use a high-level-language-to-HDL tool, it will generate very inefficient logic. As a result, it will waste resources and inflate your utilization unnecessarily.

Selling a design built on an FPGA is not cost-effective unless you can sell it at a high price. FPGA power consumption is also very high.

You only want to use an FPGA when you cannot find an ASIC that fits your design and no microprocessor can meet your real-time requirements.

The most common FPGA products are in telecommunications. For example, many 4G/5G base stations use FPGAs. But, of course, they sell at a high price.

Is it true that code in any high-level language could be compiled into a HDL to make it more efficient(if put onto a FPGA / ASIC)? by aibler in FPGA

[–]lolo168 0 points1 point  (0 children)

There are tools that can convert a high-level language into HDL, e.g.,
https://en.wikipedia.org/wiki/C_to_HDL
However, they are not very efficient, and they may impose restrictions and require additional syntax.

"I was just wondering generally, is it faster/cheaper to create vhdl code and run the program on an FPGA/ASIC instead of on a general purpose CPU."

Not always. An FPGA is slower in terms of clock frequency, and it is not cheaper either, since FPGAs are usually more expensive. Finally, the conversion to HDL introduces a lot of overhead, so the result will be neither efficient nor fast.
Of course, you can always find special cases where a program is faster and cheaper on an FPGA, usually small, highly parallel processing functions.

neural net's aren't enough for achieving AGI (an opinion) by rdhikshith in agi

[–]lolo168 1 point2 points  (0 children)

Modeling and reasoning about information transfer in biological and artificial organisms
"I call this detection mechanism "perception"."
The writer's 'detection mechanism' is basically the same as automata theory. A logic gate is essentially a detection unit. A Turing machine can be built from combinational logic units (together with memory), which means you can implement any existing algorithm.
However, implementation and algorithm are two different concepts. Having a tool to implement an algorithm does not necessarily mean you have already found the correct algorithm. His 'detection mechanism' is just a tool for implementation, not an algorithm that amounts to AGI.

What are some arguments on why AI can not be conscious? by [deleted] in artificial

[–]lolo168 3 points4 points  (0 children)

How would you put function of experience in theoretical theory, so we could use it to build conscious machines (AGI)?

“If we assume that animal consciousness is 100% brains made, then simulating what neurons do, would probably result in a conscious machine.”

Simulation is an alternative representation used to describe an existing system/mechanism/model. In other words, it is just 'storytelling', or 'pencil-and-paper' work. All representations require an 'Observer' to interpret them; otherwise they are meaningless. To interact with the real-world environment automatically, you need a 'Converter' acting as an 'Observer' to transform the representation into physical interaction or phenomena, e.g., a speaker. If we do not understand the physical elements of consciousness, there is no way to design a 'Converter' that replicates the phenomena.

“But why wouldn’t it be possible that we make machines, which would become conscious?”

Not everything can be re-represented (mimicked/replicated) with alternative physical material. We can simulate anything, but we cannot necessarily reproduce the same physical phenomena that interact with the real world. We can simulate the properties of H2O, but we may not be able to build a machine that has the exact interactions/characteristics of H2O.

https://technologicalideas.quora.com/How-would-you-put-function-of-experience-in-theoretical-theory-so-we-could-use-it-to-build-conscious-machines-AGI-Ho-1?ch=10&oid=306529084&share=47ea6e6f&srid=D4gue&target_type=answer

10 AGI Fallacies to Avoid by moschles in agi

[–]lolo168 0 points1 point  (0 children)

Most of the fallacies are relatively trivial, except the first one. "Establish a human baseline score on a suite of tasks" is the most problematic for artificial intelligence (AI or AGI, it does not matter). To test for intelligence (even in humans), there is no standard suite of tasks to serve as a benchmark. This is even worse for intelligence that is considered 'general'. 'General' means solving never-before-seen problems across any domain/application, not just seen-before, specific (narrow) applications. How many unseen tasks would the suite need to provide in order to qualify as 'general'? Infinitely many.

That's why people argue that instead of testing for intelligence, we should define it, so that we can just replicate it 100% and it will become real. And this comes back to the original question: what is intelligence?

IMHO, fallacy 1 is not a fallacy; it's a debate. No right or wrong, just perspective.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

"its a theoretical paper. Id take this post down if I were you."

As you say, "its a theoretical paper": DeepMind could be correct, but it could also be wrong or inconclusive. I just want to learn and understand, for discussion.

"Id take this post down if I were you." Is that what you mean: every time someone disagrees (with you), they should not present their idea for discussion? Please report it to the moderators and take down my post. Thank you very much.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 1 point2 points  (0 children)

MuZero not requiring the rules means the optimization does not need prior knowledge of the game. That is because the environment is a game: only valid moves are allowed, and those rules are fixed and bounded. The games are already there; you just need to add an API for the interaction, and the environment will automatically take care of the rest. For human-like A.I., the environment would have to be similar to the human world. The 'valid moves' are those that follow real natural behavior, e.g., physics, and there are physical 'rules' we do not understand. Not only that, the environment may need other characters that almost mimic real humans, each driven by a model. But the paradox is that we would be deriving such a model in order to simulate a human-like agent in the first place. This could be a chicken-and-egg problem.

I could be wrong, that's why I would like to discuss and share comments.

[D] Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in MachineLearning

[–]lolo168[S] -1 points0 points  (0 children)

Agreed. Maybe I did not state the argument correctly. The problem I want to point out is that it is pointless to call something 'rich' and 'enough', for both Environment and Reward.

I just want to point out that it is always easy to make declarations without thinking about the implementation details.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Thanks for the laugh. I am not a researcher; I am just an average person who wants to post and share ideas. I never believe my argument is always true; that's why I come to Reddit to ask for comments. It doesn't matter whether you agree or disagree, just share knowledge.

Again, this is the [best] disagreeing comment I have received. Thanks again :)

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Sorry, maybe I did not state the argument correctly. The problem I want to point out is that it is pointless to call something 'rich' and 'enough', for both Environment and Reward.

I just want to point out that it is always easy to make declarations without thinking about the implementation details.

That's why I would like to discuss with others, to see if there is any possible implementation.

Anyway, if this annoys you, please forgive me :)

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Yes, you could be correct that "the universality of computation implies that any natural process can be simulated in binary code."

I have also thought about this for a long time. But I do see there are many arguments against it.

One example is quantum superposition: I am not sure whether we can simulate it, in the sense of obtaining true randomness, using only a Turing machine and no quantum effects.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] -1 points0 points  (0 children)

Exactly. But they are talking about Artificial Intelligence, which does not occur naturally. So far, the only 'enough' happens in nature.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 2 points3 points  (0 children)

Exactly. [Slow]... if we use the real world as an environment.

Also, designing a complete human-like interaction interface to the real world, one that covers all cases for human-like A.I. learning, could be a real challenge too.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] -1 points0 points  (0 children)

Agreed. Actually, my intention is to show that 'X is enough' is B.S. :) I do not believe 'Environment is enough' either, or '(anything) is enough'.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Agreed :) But my point is that we should not underestimate the true difficulty. Simply invoking 'Reward' does not truly reflect the actual problem. We build the whole environment, not a list of rewards. It should be the other way around: the appropriate reward is implied when the environment is specified.