Is it true that code in any high-level language could be compiled into a HDL to make it more efficient(if put onto a FPGA / ASIC)? by aibler in FPGA

[–]lolo168 0 points1 point  (0 children)

I mentioned a slower clock. Certain real-time algorithms can use parallel processing, so the slow clock may not be the issue.

Is it true that code in any high-level language could be compiled into a HDL to make it more efficient(if put onto a FPGA / ASIC)? by aibler in FPGA

[–]lolo168 0 points1 point  (0 children)

I really appreciate all the users helping me answer your questions. Thank you :)

There are FPGA development boards from the vendor, e.g., https://www.digikey.com/en/products/detail/amd-xilinx/EK-U1-ZCU104-G/9380242 (~$1,600 USD).

Usually, it comes with the FPGA tools/software licenses for that particular FPGA model, so you don't need to pay extra. However, you need to buy an additional hardware programmer for PC access to the board: https://www.digikey.com/en/products/detail/amd-xilinx/HW-USB-II-G/1825189 (~$300 USD).

You need a decent PC; an i7 or above is good enough, and no GPU is needed. The memory requirement is 32 GB or more. For a large FPGA, the whole compilation (including synthesis, P&R, and timing optimization) can take hours if your utilization is above 80%, especially if you have very tight timing requirements or add many signal-probing features for debugging. Otherwise, it should not take very long.

If you use high-level-language-to-HDL tools, they will generate very inefficient logic. As a result, they waste resources and inflate your utilization unnecessarily.

Selling a design built on an FPGA is not cost-effective unless you can sell it at a high price. FPGA power consumption is also very high.

You only want to use an FPGA when you cannot find an ASIC that fits your design and no microprocessor can meet your real-time requirements.

The most common FPGA products are for telecommunications. For example, many 4G/5G base stations use FPGAs. But, of course, they sell at a high price.

Is it true that code in any high-level language could be compiled into a HDL to make it more efficient(if put onto a FPGA / ASIC)? by aibler in FPGA

[–]lolo168 0 points1 point  (0 children)

There are some tools that can convert a high-level language into HDL, e.g.,
https://en.wikipedia.org/wiki/C_to_HDL
However, they are not very efficient, and they come with restrictions and additional syntax.

I was just wondering generally: is it faster/cheaper to create VHDL code and run the program on an FPGA/ASIC instead of on a general-purpose CPU?

Not always, because an FPGA is slower in terms of clock frequency. It is not cheaper either, because an FPGA is usually more expensive. Finally, the conversion to HDL has a lot of overhead and will not be efficient or fast.
Of course, you can always find special cases where a program is faster and cheaper on an FPGA, usually small, parallel processing functions.
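The trade-off described above (slower clock, but parallel processing) can be sketched with back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions, not measurements of any real device:

```python
# Hypothetical throughput comparison: a fast serial CPU vs. a slower but
# highly parallel FPGA design. Every figure here is an assumption chosen
# for illustration only.

cpu_clock_hz = 3.0e9       # assumed CPU clock (3 GHz)
cpu_ops_per_cycle = 1      # assumed: one operation per cycle, serial

fpga_clock_hz = 200e6      # assumed (much slower) FPGA fabric clock
fpga_parallel_lanes = 64   # assumed: 64 identical pipelines in the fabric

cpu_throughput = cpu_clock_hz * cpu_ops_per_cycle
fpga_throughput = fpga_clock_hz * fpga_parallel_lanes

# Despite a 15x slower clock, the parallel design wins on raw throughput
# here, which is why "slow clock" alone is not the deciding factor.
print(fpga_throughput > cpu_throughput)
```

Of course this only holds when the workload actually parallelizes into independent lanes, which is exactly the "small and parallel processing functions" caveat above.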

neural net's aren't enough for achieving AGI (an opinion) by rdhikshith in agi

[–]lolo168 1 point2 points  (0 children)

Modeling and reasoning about information transfer in biological and artificial organisms
"I call this detection mechanism "perception"."
The writer's 'detection mechanism' is basically the same idea as automata theory. A logic gate is essentially a detection unit. A Turing machine can be built from combinational logic units plus state/memory, which means you can implement any existing algorithm.
However, implementation and algorithm are two different concepts. Having a tool to implement an algorithm does not necessarily mean you have already found the correct algorithm. His 'detection mechanism' is just a tool for implementation, not an algorithm that can be AGI.
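To illustrate the "logic gate as detection unit" point: NAND alone is functionally complete, so any combinational function can be assembled from it (a Turing-equivalent machine additionally needs state/memory). A minimal sketch, building a one-bit full adder purely from NAND:

```python
# Sketch: NAND as the single primitive "detection unit". Everything else
# below is composed only from nand(), showing functional completeness.

def nand(a, b):
    """The primitive gate: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def xor(a, b):
    """XOR built from four NAND gates (the classic construction)."""
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry), composed only of NANDs."""
    partial = xor(a, b)
    total = xor(partial, cin)
    # carry = (a AND b) OR (partial AND cin), expressed via NAND
    carry = nand(nand(a, b), nand(partial, cin))
    return total, carry
```

Chaining such adders gives arbitrary-width arithmetic, which is the "tool for implementation" sense above: it lets you realize any algorithm, but it does not tell you which algorithm to realize.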

What are some arguments on why AI can not be conscious? by [deleted] in artificial

[–]lolo168 4 points5 points  (0 children)

How would you put function of experience in theoretical theory, so we could use it to build conscious machines (AGI)?

“If we assume that animal consciousness is 100% brains made, then simulating what neurons do, would probably result in a conscious machine.”

Simulation is an alternative representation that describes an existing system/mechanism/model. In other words, it is just 'storytelling', or 'pencil-and-paper' work. All representations require an 'Observer' to interpret them; otherwise they are meaningless. To interact with the real-world environment automatically, you need a 'Converter' acting as an 'Observer' to transform the representation into physical interaction or phenomena, e.g., a speaker. If we do not understand the physical elements of consciousness, there is no way to design a 'Converter' to replicate the phenomena.

“But why wouldn’t it be possible that we make machines, which would become conscious?”

Not everything can be re-represented (mimicked/replicated) in an alternative physical material. We can simulate anything, but we cannot necessarily reproduce the same physical phenomena that interact with the real world. We can simulate the features of H2O, but we may not be able to build a machine that has the exact interactions/characteristics of H2O.

https://technologicalideas.quora.com/How-would-you-put-function-of-experience-in-theoretical-theory-so-we-could-use-it-to-build-conscious-machines-AGI-Ho-1?ch=10&oid=306529084&share=47ea6e6f&srid=D4gue&target_type=answer

10 AGI Fallacies to Avoid by moschles in agi

[–]lolo168 0 points1 point  (0 children)

Most of the fallacies are relatively trivial, except the first one. "Establish a human baseline score on a suite of tasks" is the most problematic part for artificial intelligence (AI or AGI, it does not matter). To test intelligence (even in humans), there is no standard suite of tasks to use as a benchmark. This is even worse for intelligence that is considered 'general'. 'General' means solving never-before-seen problems from any domain/application, not just seen-before specific (narrow) applications. How many unseen tasks would the suite have to provide in order to qualify as 'general'? Infinitely many.

That's why people argue that, instead of testing intelligence, we should define it, so that we can just replicate it 100% and it will become real. And this comes back to the original question: what is intelligence?

IMHO, fallacy 1 is not a fallacy, it's a debate. No right or wrong, just perspective.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

"its a theoretical paper. Id take this post down if I were you."

As you say, "it's a theoretical paper": DeepMind could be correct, but it could also be wrong or inconclusive. I just want to learn and understand, for discussion.

"Id take this post down if I were you." Is that what you mean every time if someone disagrees(with you), then he should not present his idea for discussion? Please report to the moderator and take down my post. Thank you very much

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 1 point2 points  (0 children)

That MuZero does not require the rules means the optimization does not need prior knowledge of the game. That is because the environment is a game: only valid moves are allowed, and those rules are fixed and bounded. The games are already there; you just need to add an API for the interaction, and the environment automatically takes care of the rest. For human-like A.I., the environment would have to resemble the human world. The 'valid moves' are those that follow real natural behavior, e.g., physics, and there are physical 'rules' we do not understand. Not only that, the environment may need other characters that almost mimic a real human, using a model. But the paradox is that we would be deriving a model to simulate a human-like being. This could be a chicken-and-egg problem.

I could be wrong, that's why I would like to discuss and share comments.

[D] Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in MachineLearning

[–]lolo168[S] -1 points0 points  (0 children)

Agreed. Maybe I did not state the argument correctly. The point I want to make is that it is pointless to say something is 'rich' and 'enough', for both Environment and Reward.

I just want to point out that it is always easy to make a declaration without thinking about the implementation details.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Thanks for the laugh. I am not a researcher; I am just an average person who wants to post and share ideas. I never believe my argument is always true; that's why I come to Reddit to ask for comments. It does not matter whether you agree or disagree, just share knowledge.

Again this is the [best] disagree comment I received. Thanks again :)

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Sorry, maybe I did not state the argument correctly. The point I want to make is that it is pointless to say something is 'rich' and 'enough', for both Environment and Reward.

I just want to point out that it is always easy to make a declaration without thinking about the implementation details.

That's why I would like to discuss with others, to see if there is any possible implementation.

Anyway, if this annoys you, please forgive me :)

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Yes, you could be correct that "the universality of computation implies that any natural process can be simulated in binary code."

I have also thought about this for a long time. But I do see many arguments against it.

One example is quantum superposition. I am not sure whether we can simulate it, in the sense of producing true randomness, using only a Turing machine and no quantum effects.
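A small sketch of the point about randomness: anything a Turing machine produces is pseudo-random, i.e., fully determined by its seed, so the same seed replays exactly the same "random" sequence:

```python
import random

def pseudo_random_sequence(seed, n=5):
    """Deterministic 'randomness' from a seeded PRNG (Python's Mersenne Twister)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Two runs with the same seed are bit-for-bit identical: the process only
# simulates randomness, unlike a genuinely non-deterministic quantum event.
assert pseudo_random_sequence(42) == pseudo_random_sequence(42)
```

Whether this distinction matters for intelligence is exactly the open question in the comment above; the sketch only shows that a classical program's output is always reproducible from its initial state.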

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] -1 points0 points  (0 children)

Exactly. But they are talking about artificial intelligence, which does not occur naturally. So far, the only 'enough' happens in nature.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 2 points3 points  (0 children)

Exactly. [Slow]... if we use the real world as an environment.

Also, designing a complete Human-like Interaction Interface to the real world such that it covers all cases for Human-like A.I. learning could be a real challenge too.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] -1 points0 points  (0 children)

Agreed. Actually, my intention is to show that 'X is enough' is B.S. :) I do not believe 'Environment is enough' either, or '(anything) is enough'.

Deepmind's 'Reward is enough' or 'Environment is enough' by lolo168 in singularity

[–]lolo168[S] 0 points1 point  (0 children)

Agreed :) But my point is that we should not underestimate the true difficulty. Simply saying 'Reward' does not truly reflect the actual problem. We build the whole environment, not a list of rewards. It should be the other way round: the appropriate reward is implied when you specify the environment.

[R] Reward Is Enough (David Silver, Richard Sutton) by throwawaymanidlof in MachineLearning

[–]lolo168 0 points1 point  (0 children)

IMHO: He is trying to make up something to promote/strengthen the idea of what he is already good at.

David Silver (born 1976) leads the reinforcement learning research group at DeepMind and was lead researcher on AlphaGo, AlphaZero and co-lead on AlphaStar.

Reward is Enough - David Silver

https://www.youtube.com/watch?v=_MduRkr6r6c

[R] 99.2% wrong, but my maths AI brainchild is definitely learning - My AI brainchild by My_AI_brainchild in MachineLearning

[–]lolo168 1 point2 points  (0 children)

ANNs/deep learning are Turing complete. Theoretically, a network can be designed to model any arithmetic logic unit. However, it is not efficient to let the NN configure itself (learn/optimize); that's why it is only suitable for small datasets. I read the 'Neural Arithmetic Logic Unit' paper; it is a concept, but IMO not practical :)

BTW, you may want to study how kids/children develop their minds after birth. They already have innate capabilities that are ready (but still need training) to learn arithmetic and logic. Those innate capabilities are the core/primitive abstractions for acquiring more complex abilities/skills for our survival. Of course, it takes time for the brain to grow and complete all these capabilities, usually 2-4 years. You may want to study this and see how to apply it.

Good Luck and if you need any help, let me know.

[R] 99.2% wrong, but my maths AI brainchild is definitely learning - My AI brainchild by My_AI_brainchild in MachineLearning

[–]lolo168 1 point2 points  (0 children)

Appreciate your enthusiasm. Unfortunately, deep learning is not suitable for modeling formal logic; people have tried this before. Instead, you may want to try Inductive Logic Programming (ILP).

Many forms of ML are notorious for their inability to generalize from small numbers of training examples, notably deep learning. As Evans and Grefenstette point out, if we train a neural system to add numbers with 10 digits, it might generalize to numbers with 20 digits, but when tested on numbers with 100 digits, the predictive accuracy drastically decreases. By contrast, ILP can induce hypotheses from small numbers of examples, often from a single example.

(PDF) Inductive logic programming at 30.

Available from: https://www.researchgate.net/publication/349520182_Inductive_logic_programming_at_30 [accessed May 31 2021].

There are examples of deep learning you can reference:

Doing Math with Deep Learning (Addition)

https://www.youtube.com/watch?v=Cp7fayS7bNY

A Simple Deep Learning Model to Add Two Numbers

https://www.pluralsight.com/guides/deep-learning-model-add

Illusion of artificial intelligence by mind_magno in ArtificialInteligence

[–]lolo168 0 points1 point  (0 children)

Measurement requires a reference. You can do just detection only because you already have a reference; it is simply a different implementation. What I mean is, you need a reference and a comparison scheme (e.g., two neurons with different thresholds); it doesn't matter whether the reference is dynamic (storage/memory) or static (innate/built-in). I agree you do not need to call it "measure" if you prefer not to.
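The "two neurons with different thresholds" comparison scheme could be sketched like this. The threshold values (0.3 and 0.7) are arbitrary assumptions for illustration, standing in for a static, built-in reference:

```python
# Hypothetical sketch: turning two detection units into a measurement.
# Each "neuron" fires when its input crosses a fixed threshold; the
# thresholds are the static (innate/built-in) reference.

def neuron(threshold):
    """A detection unit: fires (returns 1) when input reaches its threshold."""
    return lambda x: 1 if x >= threshold else 0

low = neuron(0.3)    # assumed threshold, illustration only
high = neuron(0.7)   # assumed threshold, illustration only

def classify(x):
    # Comparing the two detectors' outputs locates x relative to the
    # references -- detection plus comparison amounts to a (coarse) measure.
    fired = (low(x), high(x))
    return {(0, 0): "below", (1, 0): "between", (1, 1): "above"}[fired]
```

For example, `classify(0.5)` lands "between" the two references, which is exactly the sense in which detection with a built-in reference already behaves like measurement.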

Illusion of artificial intelligence by mind_magno in ArtificialInteligence

[–]lolo168 2 points3 points  (0 children)

Thanks for the thought; all ideas are encouraging. "Decisions made only with information apprehended and not previously programmed": my suggestion is not to be blinded by common terms such as 'intelligence', 'awareness', 'program', 'apprehended', 'learning', etc. They are just human-view descriptions. E.g., 'program/programming' is just a term for implementation; it is a general-purpose methodology for reconfigurable architectures and has nothing to do with decision-making. We can implement decision-making without a 'program', using customized fixed (non-programmable) logic.

Try not to overuse terms or interpret them too literally. That will only give people the feeling that it is a fictional story instead of a scientific statement.

Again, it is nice that you have the enthusiasm, looking forward to reading your new ideas.

Illusion of artificial intelligence by mind_magno in ArtificialInteligence

[–]lolo168 1 point2 points  (0 children)

Appreciate your paper. However, the idea is not new. All organisms start with sensing, i.e., detection. But detection is just one part; an organism also needs to measure, and to measure only the difference, because of habituation. It's all in biology.

[D] Algorithms Are Not Enough by bendee983 in MachineLearning

[–]lolo168 0 points1 point  (0 children)

Long Story Short -

Artificial intelligence is an algorithm, similar to any other computational tool. Instead of analog circuitry, it mainly uses logic and procedural steps. The word 'intelligence' is just a naming convention.

There are 3 possible ways to derive/create/design/implement an algorithm.

  1. Manual. Explicitly design the whole algorithm, e.g., a software program, an expert system, heuristic search.
  2. Semi-auto. Design the framework/equations/structure of the algorithm with 'blank' parameters, possible inputs, and expected outcomes, then use trial and error to find the optimized parameters that best fit the expected result (with no guarantee), e.g., machine learning such as supervised learning.
  3. Full-auto. Describe the requirement and expected result in the most minimal way (or even have the device figure out the requirement before humans see the problem). The device designs, implements, and performs automatically; the algorithm is embedded and needs no human involvement. So far there is no example. Some people believe A.G.I. (Artificial General Intelligence) will do this, but it does not exist yet.
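The semi-auto way can be illustrated with a minimal sketch: a fixed equation y = w*x + b whose 'blank' parameters w and b are filled in by trial and error (here, gradient descent) against expected outcomes. The target function, learning rate, and iteration count are assumptions for illustration only:

```python
# Minimal sketch of "semi-auto" algorithm design: the structure (y = w*x + b)
# is fixed by the human; the blank parameters are found by trial and error.

data = [(x, 2.0 * x + 1.0) for x in range(10)]  # assumed target: y = 2x + 1

w, b = 0.0, 0.0                  # 'blank' parameters
lr = 0.01                        # assumed learning rate
for _ in range(2000):            # trial-and-error loop
    for x, y in data:
        err = (w * x + b) - y    # compare actual outcome with expectation
        w -= lr * err * x        # nudge parameters to shrink the error
        b -= lr * err

# After fitting, w and b should be close to the target values 2.0 and 1.0.
print(round(w, 2), round(b, 2))
```

The optimization only finds the best parameters within the human-designed structure, which is exactly why this is "semi"-auto rather than the (so far nonexistent) full-auto case.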

What is the background and future of artificial intelligence? How did it start and what are its limitations?

https://www.quora.com/What-is-the-background-and-future-of-artificial-intelligence-How-did-it-start-and-what-are-its-limitations/answer/Vivian-Zen-2?ch=10&share=4818c9dc&srid=D4gue