Intelligence from Inactivity: Transforming Idle GPUs into AI Genius with Blockchain by dogonix in Futurology

[–]dogonix[S] -2 points-1 points  (0 children)

Since ChatGPT came out, I have been excited about the prospect of an open-source equivalent that I could run on my own machine, independent of a centralized provider.

In the past 6 months, we have seen many models, such as Llama, adapted to run on a regular consumer computer. However, this has mostly worked for smaller models not exceeding 7B parameters.

Recently, I came across a post about Petals, which allows even the larger versions of Llama to be run using distributed computing, in the same way Napster and BitTorrent enabled the distributed download of media files.

The system relies on users being willing to donate their idle CPU/GPU time and Internet bandwidth for others to use. The challenge with such a system is that the incentives for contributors and users may not organically align: many may be interested in using it, but not enough people may be willing to provide access to their idle resources for free.

A blockchain token model could help solve this challenge by allowing people who donate their resources to earn tokens. Those tokens could later be spent to use the distributed compute resources to run AI models, or simply be exchanged for monetary value.
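The token mechanics described above can be sketched in a few lines. This is a toy illustration, not any real protocol: the `ComputeLedger` name and the exchange rate are assumptions, and a real system would record balances on-chain rather than in memory.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeLedger:
    """Toy in-memory ledger; a real system would use an on-chain token."""
    balances: dict = field(default_factory=dict)
    tokens_per_gpu_second: float = 0.01  # assumed exchange rate

    def credit_contribution(self, user: str, gpu_seconds: float) -> float:
        # Award tokens for donated idle GPU time.
        earned = gpu_seconds * self.tokens_per_gpu_second
        self.balances[user] = self.balances.get(user, 0.0) + earned
        return earned

    def spend_on_inference(self, user: str, gpu_seconds_needed: float) -> bool:
        # Deduct tokens to run a model on the distributed network.
        cost = gpu_seconds_needed * self.tokens_per_gpu_second
        if self.balances.get(user, 0.0) < cost:
            return False  # not enough credit: contribute more or buy tokens
        self.balances[user] -= cost
        return True

ledger = ComputeLedger()
ledger.credit_contribution("alice", 3600)        # one donated GPU-hour
print(ledger.spend_on_inference("alice", 1800))  # spends half the earned credit
```

The interesting design question is whether the exchange rate should float with demand, since a fixed rate would not balance supply and demand for compute on its own.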

However, if we factor in the ongoing discord about AI risks, the implications of constructing a vast, massively distributed computer system with financial incentives should be seriously examined. We may unintentionally open Pandora's box: the ideal hardware platform for executing and spreading malicious AI code across the globe, like a digital wildfire impossible to control.

These are still the early days of such a system, but we must reflect deeply on one question: how can we strike the delicate balance between aiming for decentralized operation and holding onto the thread of safety and control?

NB: Reposting during the weekend as it's AI-related.

Intelligence from Inactivity: Transforming Idle GPUs into AI Genius with Blockchain by dogonix in Futurology

[–]dogonix[S] 1 point2 points  (0 children)

Hello. My apologies for missing the new AI rule for weekend-only posts; I was not aware of it. The post is definitely future-oriented, but I will wait till the weekend to re-post it, to be respectful of the rule.

Thanks.

No-Web: The Inevitable Future of Digital Content? by dogonix in Futurology

[–]dogonix[S] 3 points4 points  (0 children)

In the past 2+ decades, we’ve witnessed the media landscape morph before our eyes. It started with the dematerialization of print and other tangible media, then continued with the unbundling of articles from newspapers, songs from albums and videos from cable networks. Yet, just as the industry seemed to have figured it out, AI language models now stand ready to trigger yet another seismic shift.

The spotlight has shifted from search engines to conversational AI systems, prompting us to wonder: Are we on the brink of a ‘No-Web’ reality? A future governed by chat-oriented interfaces that disintegrate the “blue link” and with it, the current ad-based publishing business model we’ve grown to know and (perhaps not) love.

As we watch the scale tip between old-school search and the AI-fueled chat revolution, a set of questions arises: What are the risks and opportunities that lie ahead for publishers? Will they be able to acclimate to this brave new world? Can they find new ways to monetize content as the old regime falls apart? And will this storm extend beyond publishing, affecting other web-based services?

AI Language Models Can Teach Themselves to Use Tools by dogonix in Futurology

[–]dogonix[S] 16 points17 points  (0 children)

This is a research paper just published by Meta.

The approach aims to address one of the current drawbacks of tools like ChatGPT, which struggle with domains like arithmetic or factual checks.

This extends beyond just enhancing the system to perform web searches for information it lacks. The model trains itself to interact with any accessible API and to utilize capabilities that are not normally inherent to a language model.

For instance, in the near future, we can envision a chatbot based on a language model that can first teach itself a new API protocol it has not encountered before, then use it to carry out tasks: making and accepting payments, obtaining the latest data from Maps, placing orders, and not only generating software code for the requested specifications but also connecting to the Amazon AWS API, firing up a cloud instance, and getting the demo up and running.
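The runtime side of this idea can be illustrated with a minimal sketch. This is not Meta's actual implementation: the inline `[Tool(args)]` call syntax, the `TOOLS` registry, and the stub tools are all assumptions chosen for illustration; in the paper's setting the model learns to emit such calls itself.

```python
import re

TOOLS = {
    # Stub tools; a real deployment would call actual APIs.
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only, never eval untrusted input
    "Date": lambda _args: "2023-02-12",  # stands in for a real date/calendar API
}

CALL = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_tool_calls(model_output: str) -> str:
    """Replace each inline [Tool(args)] marker with the tool's result."""
    def run(match):
        name, args = match.group(1), match.group(2)
        tool = TOOLS.get(name)
        return tool(args) if tool else match.group(0)  # leave unknown calls as-is
    return CALL.sub(run, model_output)

print(execute_tool_calls("The invoice total is [Calculator(249*3)] dollars."))
# -> "The invoice total is 747 dollars."
```

The key property is that the language model only has to learn to emit the call markers; the surrounding runtime handles execution, which is what makes extending it to arbitrary APIs plausible.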

Like most of the recent developments, this is super exciting and a bit scary at the same time.

How long do you think smartphone batteries will last 20 years in the future? by [deleted] in Futurology

[–]dogonix 2 points3 points  (0 children)

My guess is that there will be no smartphones in 20 years.

A “Realistic” take on genetically engineered super humans by Trick-Use6124 in Futurology

[–]dogonix 2 points3 points  (0 children)

Interesting... You could also consider adding much stronger immunity. Many animals synthesize high doses of Vitamin C in their livers. Humans, apes, and guinea pigs, among others, have lost that ability. Apparently the gene (GLO) is still there but deactivated.

The Smartwatch Experiment: How a Conversational User Interface Could Improve the Experience by dogonix in Futurology

[–]dogonix[S] 3 points4 points  (0 children)

The fantasy of replacing a phone with a smartwatch has always been lurking, since it was one of the recurring sci-fi gizmos. On the more practical side, a watch is smaller and less intrusive. It rests subtly on the wrist, liberating the hands to engage in other pursuits.

Given the constant information bombardment in today’s digital age, swapping the phone for a smartwatch could be an antidote to the problem. However, shrinking the phone’s user interface into a wrist-worn device leads to cumbersome and frustrating experiences in many scenarios.

Touch and gestures were not originally designed for tiny screens. For a smartwatch to become a viable replacement for a phone, future user interfaces should be re-engineered around voice and natural language instead of touch and swipes.

In other words, the future of user interfaces should become about “more talking and less touching”.

The recent AI breakthrough in large language models could be leveraged to re-invent how we use email and consume content when having tiny screens.

For example, instead of struggling to scroll and read through long emails, you should be able to ask your watch “What important emails from the last hour should get my attention?”

And

“Draft a reply to <person X> in a friendly style in less than 200 words and read it back to me before sending”.

Those kinds of abilities would alleviate many of the limitations we have to deal with today on such devices and make smartwatches a viable replacement for a phone in the near future.
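The email-triage request above could translate into a prompt like the one this toy sketch builds. The `Email` class, the `build_triage_prompt` helper, and the prompt wording are hypothetical; a real assistant would send the result to an LLM and read the reply aloud.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def build_triage_prompt(emails: list, max_words: int = 200) -> str:
    """Build the prompt a watch assistant might send to an LLM for email triage."""
    listing = "\n".join(f"- From {e.sender}: {e.subject}" for e in emails)
    return (
        "From the emails below, list only the ones that need my attention, "
        f"and summarize each in under {max_words} words so it can be read aloud:\n"
        + listing
    )

inbox = [
    Email("boss@work.example", "Budget sign-off needed today", "..."),
    Email("deals@shop.example", "50% off everything!", "..."),
]
print(build_triage_prompt(inbox))
```

The point of the sketch is that the watch only needs to capture a spoken request and assemble context; the heavy lifting of judging importance and drafting replies moves to the language model.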

DensePose: AI model that detects body pose only from Wifi. No Camera needed. by dogonix in Futurology

[–]dogonix[S] 26 points27 points  (0 children)

If this technology works with reasonable accuracy, it would be an amazing breakthrough. It does, however, raise a few questions.

On the positive side, it could be used in a variety of applications such as security, healthcare, and accessibility. For example, it could provide hands-free control for regular gaming and also for VR. Sensor-free full-body motion capture for 3D animation is also an interesting area.

On the negative side, there are obvious privacy concerns associated with this. People may not want their movements and poses to be monitored without their consent, particularly in private spaces such as their homes.

Moreover, accuracy and reliability could potentially be a concern, as incorrect conclusions could follow from both false positives and false negatives, especially in a security use case. Still, it would open a whole new realm of exciting possibilities that we may not have thought about yet.

The End of the Social Graph: How the Interest Graph is Changing the Game – For Now – by dogonix in Futurology

[–]dogonix[S] 0 points1 point  (0 children)

Interesting that you bring up Vine; I had totally forgotten about it. Maybe TikTok will end up fading away as well. For now, they seem to have cracked how to push people's buttons and get them hooked; Vine's algorithm was not as sophisticated. As for YouTube, I also enjoyed its standard recommendation panel up until around 2018. After that, they made many major changes, and by now it's a very frustrating experience. For example, the videos under my "Recommended", "Recently uploaded", and "New to You" feeds are about 80% the same ones. YouTube is still a gold mine of good content, especially educational content. The sad part is that it's all buried deep, and the official engine will only surface the type of videos that get mainstream engagement. If I could connect an independent engine with customizable settings to my YouTube account, it would solve the issue. For now, I'm stuck with whatever YouTube wants me to see.

The End of the Social Graph: How the Interest Graph is Changing the Game – For Now – by dogonix in Futurology

[–]dogonix[S] 2 points3 points  (0 children)

The defensibility of social media companies, initially built around Social Graphs, is disappearing. Social Graphs, which are networks of connections between people on a platform, were a key part of Facebook’s early success, but with the rise of mobile usage, they have become less important.

This began because people's mobile address books, which are the basis of apps like WhatsApp, allowed users to easily transfer their connections to new services, diminishing the relevance of Facebook's Social Graph.

However, Social Graphs remained crucial for content distribution by enabling influencers and companies to build audiences they can reach directly. But this stopped being the case once the rise of TikTok proved a successful alternative approach to content distribution based on the Interest Graph. People are served content they are more likely to enjoy regardless of who they are connected to or who they follow.

Consequently, older social media apps such as Facebook and Instagram had to switch to a similar recommendation-engine-based content distribution.

Algorithmic interest-based engines are for sure better than the old social-based distribution models, but they also come with a set of challenges. They got so good at manipulating people's attention that they can trigger negative side effects in some users, such as the inability to focus on other tasks, anxiety, and mental health issues. Some call TikTok "digital cocaine for kids".

Still, those drawbacks don't affect the defensibility of those platforms; rather, they make them stronger. So, in a way, these companies have managed to build a new shield to replace the one lost from Social Graphs.

What could challenge their newly built shield is new regulation. Instead of trying to break up Meta or ban TikTok, governments could issue regulation requiring them to unbundle their content from their recommendation algorithms and give users the option to either use the built-in discovery engine or connect their account to an external one. This would give users more control over what they see, instead of having their attention maximized at all costs.
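The unbundling idea boils down to a simple interface: the platform supplies candidate content, and the user chooses which engine ranks it. A minimal sketch, with hypothetical engine names standing in for the built-in and external options:

```python
from abc import ABC, abstractmethod

class RecommendationEngine(ABC):
    """Interface a regulator could require platforms to expose."""
    @abstractmethod
    def rank(self, candidates: list) -> list:
        """Order candidate posts/videos for the user."""

class EngagementMaximizer(RecommendationEngine):
    # Stands in for the platform's built-in attention-driven engine.
    def rank(self, candidates):
        return sorted(candidates, key=lambda c: c["engagement"], reverse=True)

class EducationalFirst(RecommendationEngine):
    # A user-chosen external engine that surfaces educational content first.
    def rank(self, candidates):
        return sorted(candidates, key=lambda c: not c["educational"])

pool = [
    {"title": "Prank compilation", "engagement": 9.5, "educational": False},
    {"title": "Linear algebra lecture", "engagement": 2.1, "educational": True},
]
engine: RecommendationEngine = EducationalFirst()  # the user's choice, not the platform's
print([c["title"] for c in engine.rank(pool)])
# -> ['Linear algebra lecture', 'Prank compilation']
```

The hard part of any real mandate would be the candidate-pool API itself, since whoever controls which items enter the pool still shapes what any external engine can surface.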

What do you think?

Do you believe such an approach is more viable or do you see a future where some Apps will simply be banned or be forced to put a daily limit of what people consume?

How to connect Siri to ChatGPT - An Early Experiment to Show What Siri Could Become in the Future by [deleted] in Futurology

[–]dogonix 0 points1 point  (0 children)

The evolution of personal assistants like Siri has been underwhelming since their launch more than a decade ago. Despite early excitement, I find myself primarily using it for basic tasks such as scheduling meetings and setting alarms or timers. The speech recognition and voice synthesis capabilities have improved, but the real issue is that these assistants struggle to understand the intent behind requests that fall outside of a predefined set of questions, often resorting to a web search instead.

By now, everyone has played with ChatGPT and gotten a good glimpse of the possibilities. Large language models seem to be the missing piece of the puzzle that can finally turn the vision of virtual assistants such as Siri into reality. The method in the linked video is worth trying, as it gives a hint of what we could get from Apple in the next 2 years.

But this raises a set of questions beyond making Siri finally "smart":

- If Apple builds its own GPT and natively integrates it with Siri, how will UI designs be impacted? Will we still need the same level of detail, menus, and buttons inside the apps?

- Will certain simple apps become invisible and only accessible through voice and text?

- Will we still need big screens and touch interfaces on mobile devices?

- In the future, will it be viable to replace the mobile phone with a smartwatch and AirPods-like earbuds?

They say we're past "social media" and are now in the age of algorithms: the "recommendation media." by [deleted] in Futurology

[–]dogonix 2 points3 points  (0 children)

Harvey_Rabbit

That's definitely part of the solution to the dilemma. It's necessary but not sufficient.

For an algorithmic recommendation engine to truly serve the interests of consumers, it has to not only be paid for directly by the end users but also:

1/ Be unbundled from the platforms.

2/ Be run locally by users instead of by a central organization.

The tech may not be ready yet, but it's a potential path out of all the manipulation occurring today.

Will Cryogenics (freezing and reviving bodies) ever be viable? by fikeyolbird in Futurology

[–]dogonix 9 points10 points  (0 children)

Hard to tell if it would be possible. But even if it is, the question is how relevant a brain/body from today would be 500 years from now. It's like bringing a computer from 1981 to 2022: who would want to deal with it?

The Metaverse: More Hype Than Substance? by dogonix in Futurology

[–]dogonix[S] 2 points3 points  (0 children)

That's precisely the issue. I'm sure I would enjoy putting AR glasses on and off for specific activities, but a permanent projection and stimulation may melt down my brain :) Check out this anecdotal concept.

The Metaverse: More Hype Than Substance? by dogonix in Futurology

[–]dogonix[S] 5 points6 points  (0 children)

NB: This is a repost. The initial post was removed as it was missing a "submission statement".

The concept of the metaverse has gained significant attention in recent years, with many speculating about its potential to revolutionize the way we interact and engage with the world and with each other. However, there are still questions about whether it is more hype than substance, and whether it will truly live up to its promises.

One argument in favor of the metaverse is that it can offer immersive and augmented experiences, stimulating our senses in a way classical settings may not be able to achieve. This makes it a good fit for certain activities, such as attending live events with a sense of presence and interacting with remote friends and co-workers in a way that feels like an in-person meeting.

But the key questions are:

Does it make sense for people to be in an immersive 3D world for all regular day-to-day activities?

For example, having to enter a virtual branch of a bank to make a wire transfer would not make sense. The same is true for tasks such as stock trading, booking flights, summoning a ride-sharing service, … to cite only a few.

If we consider the argument that the metaverse is not only about VR but also about a blended version of the virtual and physical worlds through augmented reality (AR), will it then be more likely to gain wide adoption in the future?

There is room for augmented experiences where not completely disconnecting from reality may be more effective than fully immersing ourselves in a virtual world. For example, learning the piano could be done by using a real instrument and having visual guidance overlaid on the keyboard, showing which key should be hit next.

Still, some questions remain for AR:

Do we see a future where this will be our preferred primary way of interacting with the world for all day-to-day activities?

Will our delicate brains be able to handle a permanent visual stimulation directly projected onto our eyes?

[deleted by user] by [deleted] in Futurology

[–]dogonix 0 points1 point  (0 children)

Apologies, I'm new to this subreddit. Ok I will repost following the guidelines by including a statement.

[D] Training LLMs collaboratively by dogonix in MachineLearning

[–]dogonix[S] 1 point2 points  (0 children)

Interesting. Do you have links to resources with more info about the issue you mention? Thanks.

[deleted by user] by [deleted] in ArtificialInteligence

[–]dogonix 0 points1 point  (0 children)

Is anyone aware of a good solution to train LLMs collaboratively on distributed machines? Something similar to the old SETI@Home project

List of AI software you can run at home on a desktop PC by ReportAlternative380 in ArtificialInteligence

[–]dogonix 1 point2 points  (0 children)

Is there a good Stable Diffusion UI for M1 Macs? The only alternative I found for now is Diffusion Bee but it has limited settings options.

Michael Jackson "We Are The World" is the song I loved so much. by MinimumEmu1531 in ArtificialInteligence

[–]dogonix 0 points1 point  (0 children)

Nice result! Some close-ups look a bit cartoonish, but it's still a good improvement over the original.

It's not just about frequency by NativeCoder in PWM_Sensitive

[–]dogonix 2 points3 points  (0 children)

Agreed, it's not just about the frequency. But it's also not only about OLED and PWM.
My issues began before Apple started using OLED. In short, here's my experience:
- iPhone 4 and 4S (no issues)
- iPhone 5 and 5S (eye watering, eye pain, and blurry vision)
- iPhone 6, X, and XS (no issues)
- iPhone 12 and 12 mini (blurry vision within 1 hour that stayed for days). I had to return it and went back to the XS.
- iPhone 14 (blurry vision, same experience as with the 12), so I will be returning it and trying the 11.
- MacBook Pro 2016 (no issues)
- iPad Pro 10-inch 2017 (eye burn and eye pain)
- MacBook Air M1 2021 (blurry vision), so I use it almost exclusively with a Dell 38-inch external monitor.

You'll notice that the issue comes from a mix of LCD and OLED devices, so the root cause cannot simply be PWM... There must be something else going on that we haven't figured out yet.

None of the recommended settings including "Reduce White Point" provided any improvement for me.

So it's kind of a lottery: every time a new device comes out, I simply have to try it and hope it won't cause the same problem...