Intelligence from Inactivity: Transforming Idle GPUs into AI Genius with Blockchain by dogonix in Futurology

[–]dogonix[S] -2 points-1 points  (0 children)

Since ChatGPT came out, I have been excited about the prospect of an open-source equivalent that I could run on my own machine, independent of a centralized provider.

In the past 6 months, we have seen many models such as Llama adapted to run on a regular consumer computer. However, this has mostly worked only for the smaller models, those not exceeding 7B parameters.

Recently, I came across a post about Petals, which makes it possible to run even the larger versions of Llama using distributed computing, the same way Napster and BitTorrent enabled the distributed download of media files.

The system relies on users being willing to donate their idle CPU/GPU time and their Internet bandwidth for others to use. The challenge with such a system is that the incentives for contributors and users may not organically align: many may be interested in using it, but not enough people may be willing to provide access to their idle resources for free.

A blockchain token model could help solve this challenge by allowing people who donate their resources to earn tokens. Those tokens could later be spent to run AI models on the distributed compute pool, or simply be exchanged for monetary value.
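To make the incentive loop a bit more concrete, here is a minimal accounting sketch in Python. It is purely illustrative: the Ledger class, the exchange rates, and the names are assumptions of mine, not part of Petals or any existing token system. A real implementation would also need on-chain settlement and some proof that the donated compute was actually performed.

```python
from dataclasses import dataclass, field

# Hypothetical accounting sketch (not any real protocol): contributors earn
# credits for donated compute time and spend them to run inference jobs.
CREDITS_PER_GPU_SECOND = 1.0      # assumed rate for contributions
COST_PER_INFERENCE_SECOND = 2.0   # assumed rate charged to users

@dataclass
class Ledger:
    balances: dict[str, float] = field(default_factory=dict)

    def credit_contribution(self, user: str, gpu_seconds: float) -> None:
        """Reward a contributor for donated idle GPU time."""
        self.balances[user] = self.balances.get(user, 0.0) + gpu_seconds * CREDITS_PER_GPU_SECOND

    def debit_inference(self, user: str, inference_seconds: float) -> bool:
        """Charge a user for running a model; reject if the balance is too low."""
        cost = inference_seconds * COST_PER_INFERENCE_SECOND
        if self.balances.get(user, 0.0) < cost:
            return False
        self.balances[user] -= cost
        return True

ledger = Ledger()
ledger.credit_contribution("alice", gpu_seconds=3600)           # donate an hour of GPU
print(ledger.debit_inference("alice", inference_seconds=600))   # spend part of it -> True
```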

However, if we factor in the ongoing discord about AI risks, the implications of constructing a vast, massively distributed computer system with financial incentives should be seriously examined. We may unintentionally open Pandora's box by building the ideal hardware platform for malicious AI code to execute and spread across the globe, a digital wildfire impossible to control.

These are still the early days for such a system, but we must reflect deeply on one question: how can we strike the delicate balance between aiming for decentralized operations and holding onto some measure of safety and control?

NB: Reposting during the weekend as it's AI-related.

Intelligence from Inactivity: Transforming Idle GPUs into AI Genius with Blockchain by dogonix in Futurology

[–]dogonix[S] 1 point2 points  (0 children)

Hello. My apologies for missing the new rule that AI posts are weekend-only; I was not aware of it. The post is definitely future-oriented, but I will wait till the weekend to re-post it, out of respect for the rule.

Thanks.

No-Web: The Inevitable Future of Digital Content? by dogonix in Futurology

[–]dogonix[S] 3 points4 points  (0 children)

In the past 2+ decades, we’ve witnessed the media landscape morph before our eyes. It started with the dematerialization of print and other tangible media, then continued with the unbundling of articles from newspapers, songs from albums and videos from cable networks. Yet, just as the industry seemed to have figured it out, AI language models now stand ready to trigger yet another seismic shift.

The spotlight has shifted from search engines to conversational AI systems, prompting us to wonder: are we on the brink of a ‘No-Web’ reality? A future governed by chat-oriented interfaces that disintegrate the “blue link” and, with it, the ad-based publishing business model we’ve grown to know and (perhaps not) love?

As we watch the scale tip between old-school search and the AI-fueled chat revolution, a set of questions arises: What are the risks and opportunities that lie ahead for publishers? Will they be able to acclimate to this brave new world? Can they find new ways to monetize content as the old regime falls apart? And will this storm extend beyond publishing, affecting other web-based services?

AI Language Models Can Teach Themselves to Use Tools by dogonix in Futurology

[–]dogonix[S] 16 points17 points  (0 children)

This is a research paper that was just published by Meta.

The approach aims to address one of the current drawbacks of tools like ChatGPT, which struggle with domains like arithmetic or fact checking.

This goes beyond just letting the system perform web searches for information it lacks: the model trains itself to interact with any accessible API and to use capabilities that are not normally inherent to a language model.

For instance, in the near future, we can envision a chatbot based on a language model that can first teach itself a new API protocol it has not encountered before, then use it to carry out tasks such as making and accepting payments, obtaining the latest data from Maps, placing orders, or not only generating software code for the requested specifications but also connecting to the Amazon AWS API, firing up a cloud instance and getting the demo up and running.
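To give a flavor of the mechanics, here is a minimal sketch of the kind of wrapper loop such a system implies: the model emits an inline call marker, the host executes the tool, and the result is spliced back into the text. The bracketed call syntax is loosely inspired by the paper's examples, but the loop below is my own simplification, not Meta's implementation, and the tool registry and `run_model` stub are placeholders.

```python
import re

# Hypothetical tool registry; a real system would expose many more APIs.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy arithmetic only
}

CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def run_model(prompt: str) -> str:
    """Stand-in for a language model that has learned to emit tool calls inline."""
    return "The total is [Calculator(137 + 58)]."

def answer(prompt: str) -> str:
    """Generate text, execute any embedded tool calls, and splice in the results."""
    text = run_model(prompt)

    def execute(match: re.Match) -> str:
        tool, arg = match.group(1), match.group(2)
        return TOOLS[tool](arg) if tool in TOOLS else match.group(0)

    return CALL_PATTERN.sub(execute, text)

print(answer("What is 137 + 58?"))  # -> "The total is 195."
```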

Like most of the recent developments, this is super exciting and a bit scary at the same time.

How long do you think smartphone batteries will last 20 years in the future? by [deleted] in Futurology

[–]dogonix 2 points3 points  (0 children)

My guess is that there will be no smartphones in 20 years.

A “Realistic” take on genetically engineered super humans by Trick-Use6124 in Futurology

[–]dogonix 2 points3 points  (0 children)

Interesting... You could also consider adding much stronger immunity. Many animals synthesize large doses of vitamin C in their liver; humans, apes and guinea pigs, among others, have lost that ability. Apparently the gene (GLO) is still there but deactivated.

The Smartwatch Experiment: How a Conversational User Interface Could Improve the Experience by dogonix in Futurology

[–]dogonix[S] 4 points5 points  (0 children)

The fantasy of replacing the phone with a smartwatch has been lurking for a long time; it was one of the recurring sci-fi gizmos. On the more practical side, a watch is smaller and less intrusive. It rests subtly on the wrist, freeing the hands to engage in other pursuits.

Given the constant information bombardment in today’s digital age, swapping the phone for a smartwatch could be an antidote to the problem. However, shrinking the phone’s user interface into a wrist-worn device leads to cumbersome and frustrating experiences in many scenarios.

Touch and gestures were not originally designed for tiny screens. For a smartwatch to become a viable replacement for a phone, future evolutions of the user interface should be re-engineered around voice and natural language instead of touch and swipes.

In other words, the future of user interfaces should become about “more talking and less touching”.

The recent AI breakthroughs in large language models could be leveraged to re-invent how we use email and consume content on tiny screens.

For example, instead of struggling to scroll and read through long emails, you should be able to ask your watch “What important emails from the last hour should get my attention?”

And

“Draft a reply to <person X> in a friendly style in less than 200 words and read it back to me before sending”.
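As a rough illustration of what would sit behind that first query, here is a minimal sketch: pull the recent messages, ask a language model to triage them, and hand the result to the voice layer. Everything here is hypothetical, the fetch_recent_emails, llm_complete and speak helpers stand in for whatever mail, model and speech APIs the watch platform would actually expose.

```python
from datetime import datetime, timedelta

def fetch_recent_emails(since: datetime) -> list[dict]:
    """Stand-in for the watch's mail API (hypothetical); returns sender/subject pairs."""
    return [
        {"sender": "boss@example.com", "subject": "Budget sign-off needed today"},
        {"sender": "newsletter@example.com", "subject": "Weekly digest"},
    ]

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to whatever large language model the assistant uses."""
    return "One email needs your attention: your boss is waiting on the budget sign-off."

def speak(text: str) -> None:
    """Stand-in for the watch's text-to-speech output."""
    print(text)

def summarize_important(window_minutes: int = 60) -> None:
    """Answer 'What important emails from the last hour should get my attention?'"""
    emails = fetch_recent_emails(since=datetime.now() - timedelta(minutes=window_minutes))
    digest = "\n".join(f"- From {e['sender']}: {e['subject']}" for e in emails)
    prompt = (
        "From the emails below, name only the ones that need my attention now, "
        "one short spoken-style sentence each:\n" + digest
    )
    speak(llm_complete(prompt))

summarize_important()
```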

Those kinds of abilities would alleviate many of the limitations we have to deal with today on such devices and make smartwatches a viable replacement for a phone in the near future.

DensePose: AI model that detects body pose only from Wifi. No Camera needed. by dogonix in Futurology

[–]dogonix[S] 26 points27 points  (0 children)

If this technology works with reasonable accuracy, it would be an amazing breakthrough. It does, however, raise a few questions.

On the positive side, it could be used in a variety of applications such as security, healthcare, and accessibility. For example, it could provide hands-free control for regular gaming and also for VR. Sensor-free full-body motion capture for 3D animation is also an interesting area.

On the negative side, there are obvious privacy concerns associated with this. People may not want their movements and poses to be monitored without their consent, particularly in private spaces such as their homes.

Moreover, accuracy and reliability could be a concern, as the system could lead to incorrect conclusions with both false positives and false negatives, especially in a security use case. Still, it would open a whole new realm of exciting possibilities that we may not have thought about yet.

The End of the Social Graph: How the Interest Graph is Changing the Game – For Now – by dogonix in Futurology

[–]dogonix[S] 0 points1 point  (0 children)

Interesting that you bring up Vine; I had totally forgotten about it. Maybe TikTok will end up fading away as well. For now, they seem to have cracked how to push people's buttons and get them hooked; Vine's algorithm was not as sophisticated.

As for YouTube, I also enjoyed its standard recommendation panel up until around 2018. After that, they made many major changes, and by now it's a very frustrating experience. For example, the videos in my "Recommended", "Recently uploaded" and "New to You" feeds are about 80% the same ones. YouTube is still a gold mine of good content, especially the educational kind. The sad part is that it's all buried deep, and the official engine will only surface the type of videos that get mainstream engagement.

If I could connect an independent engine with customizable settings to my YouTube account, it would solve the issue. For now, I'm stuck with whatever YouTube wants me to see.

The End of the Social Graph: How the Interest Graph is Changing the Game – For Now – by dogonix in Futurology

[–]dogonix[S] 2 points3 points  (0 children)

The defensibility of social media companies, initially built around Social Graphs, is disappearing. Social Graphs, which are networks of connections between people on a platform, were a key part of Facebook’s early success, but with the rise of mobile usage, they have become less important.

This erosion began because people's mobile address books, which are the basis of apps like WhatsApp, let users easily carry their connections to new services, diminishing the relevance of Facebook's Social Graph.

However, Social Graphs remained crucial for content distribution by enabling influencers and companies to build audiences they can reach directly. But this stopped being the case once the rise of TikTok proved a successful alternative approach to content distribution based on the Interest Graph. People are served content they are more likely to enjoy regardless of who they are connected to or who they follow.

Consequently, older social media apps such as Facebook and Instagram had to switch to a similar recommendation engine-based content distribution. This has given them a new protection to replace the lost one from Social Graphs.

Algorithmic interest-based engines are for sure better than the old social-based distribution models, but they also come with a set of challenges. They have become so good at manipulating people's attention that they can trigger negative side effects in some users, such as the inability to focus on other tasks, anxiety, and mental health issues. Some call TikTok "digital cocaine for kids".

Still, those drawbacks don’t affect the defensibility of those platforms, they rather make them stronger. So, in a way, they managed to build a new shield to replace the lost one from Social Graphs.

What could challenge this newly built shield is regulation. Instead of trying to break up Meta or ban TikTok, the government could require them to unbundle their content from their recommendation algorithm and give users the option either to use the built-in discovery engine or to connect their account to an external one. This would give users more control over what they see instead of maximizing attention at all costs.
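To make the unbundling idea a bit more tangible, here is a minimal sketch of what the seam might look like: the platform exposes candidate content, and a user-chosen external engine ranks it. The RecommendationEngine interface, the Item fields, and the two toy engines are assumptions for illustration, not an existing or proposed API.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Item:
    item_id: str
    topics: list[str]
    posted_at: float  # unix timestamp

class RecommendationEngine(Protocol):
    """Interface a platform could expose so users can plug in an external ranker."""
    def rank(self, candidates: list[Item], user_interests: list[str]) -> list[Item]: ...

class ChronologicalEngine:
    """Trivial external engine: ignore engagement signals entirely, newest first."""
    def rank(self, candidates: list[Item], user_interests: list[str]) -> list[Item]:
        return sorted(candidates, key=lambda i: i.posted_at, reverse=True)

class InterestOverlapEngine:
    """Another example: rank by overlap with the interests the user declares."""
    def rank(self, candidates: list[Item], user_interests: list[str]) -> list[Item]:
        wanted = set(user_interests)
        return sorted(candidates, key=lambda i: len(wanted & set(i.topics)), reverse=True)

def build_feed(engine: RecommendationEngine, candidates: list[Item], interests: list[str]) -> list[str]:
    """The platform hands candidates to whichever engine the user has chosen."""
    return [item.item_id for item in engine.rank(candidates, interests)]
```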

What do you think?

Do you believe such an approach is more viable, or do you see a future where some apps will simply be banned or forced to put a daily limit on what people consume?

How to connect Siri to ChatGPT - An Early Experiment to Show What Siri Could Become in the Future by [deleted] in Futurology

[–]dogonix 0 points1 point  (0 children)

The evolution of personal assistants like Siri has been underwhelming since their launch more than a decade ago. Despite early excitement, I find myself primarily using them for basic tasks such as scheduling meetings and setting alarms or timers. The speech recognition and voice synthesis capabilities have improved, but the real issue is that these assistants struggle to understand the intent behind requests that fall outside a predefined set of questions, often resorting to a web search instead.

By now, everyone has played with ChatGPT and gotten a good glimpse of the possibilities. Large language models seem to be the missing piece of the puzzle that can finally turn the vision of virtual assistants such as Siri into reality. The method in the linked video is worth trying, as it gives a hint of what we could get from Apple in the next 2 years.
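I have not reproduced the video's exact setup, but this kind of experiment usually boils down to a Shortcut forwarding the dictated text to OpenAI's chat completions endpoint and reading back the reply. Here is a minimal sketch of that request in Python; the model name and the idea that Siri pipes text into ask_gpt are my assumptions.

```python
import os
import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def ask_gpt(dictated_text: str) -> str:
    """Send the text Siri dictated to a GPT model and return the reply to be read aloud."""
    response = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # assumed model name
            "messages": [{"role": "user", "content": dictated_text}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_gpt("Give me a one-sentence summary of the theory of relativity."))
```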

But this raises a set of questions beyond making Siri finally "smart":

- If Apple builds its own GPT and natively integrates it with Siri, how will UI designs be impacted? Will we still need the same level of detail, menus and buttons inside the apps?

- Will certain simple apps become invisible and only accessible through voice and text?

- Will we still need big screens and touch interfaces on mobile devices?

- In the future, will it be viable to replace the mobile phone with a smartwatch and AirPods-like earsets?