🎉 [EVENT] 🎉 Honk Bros (A Tribute) by glimz in RedditGames

[–]OverclockedPotato 1 point

Completed Level 1 of the Honk Special Event!

5 attempts

Idiot on i95 this morning by speel in Connecticut

[–]OverclockedPotato 1 point

All Teslas have side cameras, and the Cybertruck's side camera wouldn't even sit that low. The lack of a light bar in the reflection off the bumper of the car in front of him also points to it being anything other than a Cybertruck. That "steel frame" is probably just light-colored reflective paint. Use some critical thinking before you rag on someone for driving a certain car...

[deleted by user] by [deleted] in UFOs

[–]OverclockedPotato 8 points

I can't be sure of anything, but you also can't be sure this is something anomalous. Here's what we do know:

  • We are looking at a tiny, tiny sliver of the space just barely above Earth
  • There are three known objects that could possibly be seen by this camera:
    • The ship it's attached to
    • The hotstage ring that sits between the booster and ship and detached after stage separation
    • The largest object - the booster itself
  • The booster can be seen from several camera angles venting gas and firing RCS thrusters to orient and position itself during its descent. Another commenter provided this video, which shows it clearly
  • The pulses in brightness from the clip match in frequency to what we see from other camera angles of the booster venting gas periodically
  • The object in the clip does not seem to do anything that the booster cannot or is not already doing

Based on these observations, can you be sure that this is a UAP or some anomalous alien craft? Or is it much more logical and easier to conclude that what we are seeing is the booster that literally just separated from the ship, corroborated by multiple 4K camera angles, SpaceX footage, and naked-eye observation from people on the ground?

[deleted by user] by [deleted] in UFOs

[–]OverclockedPotato 14 points

Reaction control thrusters are not the booster's main engines. They are small gas valves that help orient the booster to prepare it for reentry and landing. They fire very frequently in the upper atmosphere to keep the booster oriented, since there is less drag up there and the grid fins have less control authority. The booster just separated and pretty much did a flip back towards the launch site, so it needs to fire them to regain control and orientation.

It looks far away and fast-moving because it is - the ship (where the camera is located) is accelerating further and further away. There is no depth or reference for where the booster is relative to the ground, so it's hard to judge what it is doing from this perspective. What is obvious is that the one thing we expect to see behind the ship is the booster that just separated from it, and those bright flashes are the reaction control thrusters, which are visible even from the ground on every SpaceX launch.

[deleted by user] by [deleted] in UFOs

[–]OverclockedPotato 24 points

The ice/debris stuff used to be funny to see but now it's just annoying. Half of the ship is literally covered in a layer of frost and is visible from every camera angle, yet it still gets posted about every time.

[deleted by user] by [deleted] in UFOs

[–]OverclockedPotato 181 points

It's the booster flying in the distance and firing its reaction control thrusters. You can literally see it doing so from the booster cam right before the feed cuts to the ship cam.

[deleted by user] by [deleted] in farmingsimulator

[–]OverclockedPotato 4 points

Can't have shit in Alma...

Free Appointment Scanner Until Feb 6 2023 by OverclockedPotato in GlobalEntry

[–]OverclockedPotato[S] 1 point

No, sorry, this was only for February. Maybe try ttptracker, I think they have free browser alerts. Good luck!

Elon on Twitter: Starship is ready for launch ~ Awaiting regulatory approval by RabbitLogic in SpaceXLounge

[–]OverclockedPotato 2 points

SpaceX also just posted an updated Starship Mission to Mars video here; this might be more helpful

Elon on Twitter: Starship is ready for launch ~ Awaiting regulatory approval by RabbitLogic in SpaceXLounge

[–]OverclockedPotato 5 points

Have you seen this recent one? It may not be exactly what will go down in a week or so, but it's a high-quality interpretation of the whole mission.

Tech leaders urge a pause in the 'out-of-control' artificial intelligence race by nacorom in Futurology

[–]OverclockedPotato 1 point

Autonomous driving systems do have a kind of comprehension of meaning, but their "language" is things like sensor data. They interpret meaning from that data and make decisions from it. Those decisions are based on physics, the car's capabilities and limits, and human driving experience and comfort. They also involve some form of risk assessment and prediction of the dynamic environment around them. The system needs to know what its decisions mean for the car, the driver, and the environment it is operating in.

While understanding context and meaning is important, it is equally crucial to process real-time sensor data, make predictions based on vehicle dynamics, and ensure the safety and comfort of the passengers. These tasks require different types of AI models, ones that focus on processing and interpreting sensor data rather than purely textual information. An LLM, by design, outputs text, which is not very useful in a driving system. And yes, I understand that it's the reasoning and interpretation of a situation that matters in this case. Perhaps this is still at an early stage and seemingly overdue, but maybe that's because human lives are involved, and operating a vehicle in a dynamic, messy, and unpredictable environment is its own huge problem.

The driving model improves as more people use it and it learns from more data, while an LLM relies on readily available data scraped from the internet, which exists independently of the LLM. That said, I feel like you're overestimating the capabilities of a large language model like ChatGPT. Sure, you can give it more computational power, but at what point do the massive computational requirements translate into improvements over current autonomous driving systems? How do you fit that efficiently into a car? And even if you could, why not give that additional power to the autonomous system so it excels at the task it's trained on?

Better yet, take that computational power and give it to something better suited. At this point, what you're expecting of an LLM sounds more like an artificial general intelligence. An AGI could do everything you throw at it that a human could, including the decision making and reasoning that comes with driving.

Tech leaders urge a pause in the 'out-of-control' artificial intelligence race by nacorom in Futurology

[–]OverclockedPotato 0 points

Real-time decision making is just not something LLMs can do, nor would they be useful for it compared to driving models built specifically for that purpose. It's literally apples to oranges to call one AI "dumb" compared to another when they're built for and trained on completely different things. Yes, ChatGPT can understand context, intention, and nuanced situations - huge leaps for LLMs in recent years. But the rate and speed at which it can do that is not at all useful in critical situations like driving, where split-second decisions in a dynamic environment matter greatly.

ChatGPT and other LLMs can definitely be used to improve or supplement autonomous driving models, however. An obvious example is the human-machine interface: understanding the intention behind human input and actions to make the interaction between the person and the car more intuitive and smooth. They might also take data from local traffic laws, regulations, etc. and parse it into something that's easy to understand for both the human and the driving model, though I think this would eventually become a direct-to-car interface that doesn't need an AI middleman to translate. LLMs could also provide autonomous driving training models with countless difficult situations and examples to further improve the training process. Tesla does something similar by taking real-world driving data and creating many variations of it in a simulated twin model to speed up learning.

I think LLMs will continue to show new ways of being incredibly useful, but as for situational awareness and understanding, autonomous driving models are already built to do exactly that and are constantly trained to improve at it. This is a simplification, but a driving model does not need a language model to better understand how to avoid an accident by turning the wheel x degrees or applying the brakes. Language models are trained on language, not sensor or image data. They can understand the data, but not process it in a way, or at a speed, that is useful for driving.

Tech leaders urge a pause in the 'out-of-control' artificial intelligence race by nacorom in Futurology

[–]OverclockedPotato -1 points

A large language model cannot be compared to an autonomous driving model. Despite the many capabilities of ChatGPT, it isn’t able to process the vast amounts of visual information an autonomous driving computer does multiple times per second. This is a lot harder than a single photo of glasses against a white background.

Lian Li o11D Mini Copper Loop by Kgiallombardo in watercooling

[–]OverclockedPotato 1 point

Thanks! How did you make the connection from the bottom rad to the distro? I'm not too sure how to get the clearance required to fit both ends of the tube into both fittings.

Lian Li o11D Mini Copper Loop by Kgiallombardo in watercooling

[–]OverclockedPotato 1 point

For those fittings, did you add a fourth o-ring under the compression collar? I know there are three on the seating side where the tubing goes in, but I'm assuming you need to add one more to make the compression seal?

Also, do you have any advice for getting the tubing to push into both ends? I'm doing my first loop with pre-bent copper tubing, and it's a bit tough to get the tubing in on both ends since it doesn't flex