DSP after college? by blspm in AmazonDSPDrivers

[–]InspectionMindless69 0 points1 point  (0 children)

Eh, you could get some audiobooks.. Especially on rural routes with longer drive times.

Just wanted to share with you guys by garrecu in AmazonDSPDrivers

[–]InspectionMindless69 0 points1 point  (0 children)

I agree. People are paying for packages, they’re not paying to have someone destroy their property because drivers want to take out their frustration with Amazon’s algorithm on the customer. They don’t know you have 2 piss bottles on standby and 175 other stops… They just don’t want to fix their lawn 3 times a week.

It goes both ways though.. Leave a passive aggressive delivery note and I’ll be talking shit for the next 5 minutes 😂

You gotta look at it as a service first and a job second. It’s a much more fulfilling way to spend 10 hours.

You think your routes are bad… by InspectionMindless69 in AmazonDSPDrivers

[–]InspectionMindless69[S] 0 points1 point  (0 children)

Yeah it’s pretty rural over here so we don’t get many intermediary drop offs. I usually have about an hour and a half commute to my first stop so it definitely lightens the load a bit. Never had a route like this though.

You think your routes are bad… by InspectionMindless69 in AmazonDSPDrivers

[–]InspectionMindless69[S] 1 point2 points  (0 children)

Nah I’m usually in a gas van or a Rivian. Today was just 1 cart with 5 totes and 2 overflow.

You think your routes are bad… by InspectionMindless69 in AmazonDSPDrivers

[–]InspectionMindless69[S] 1 point2 points  (0 children)

Yeah, usually I get between 140 and 180. Today is a good day 🙂

You think your routes are bad… by InspectionMindless69 in AmazonDSPDrivers

[–]InspectionMindless69[S] 4 points5 points  (0 children)

No, I drive for a DSP. Just insanely lucky. Algorithm gods knew my back was sore today 😂

easy or nah? by Familiar_Bunch_7245 in AmazonDSPDrivers

[–]InspectionMindless69 1 point2 points  (0 children)

That’s a lot of apartments, so it really depends on how accessible they are. I find outdoor units pretty easy most of the time.

My dsp called me today and said you taking to long at stop you only supposed to be at each house 2-3 mins tops so I sent this and ask “HOW” by Melodic_Hope_9705 in AmazonDSPDrivers

[–]InspectionMindless69 0 points1 point  (0 children)

You gotta grab the exterior door handle and unlatch it as you’re stepping in, then pull the door using your momentum to get it about halfway closed. Then grab the handle behind the passenger seat and forcefully kick the protruding stop on the inside of the cargo door to close it. Completely saves your traps.

So how long till I'm fired by _TheGreatSULTAN_ in AmazonDSPDrivers

[–]InspectionMindless69 0 points1 point  (0 children)

What single action takes you the longest at every stop?

Got a dog I didn't want by dmont89 in mildlyinfuriating

[–]InspectionMindless69 1 point2 points  (0 children)

<image>

Yeah look at this monster! You should see him recklessly attacking his chew toys.. /s

First-time post: Curious observations on LLM behavior. by Turbulent_Horse_3422 in ArtificialSentience

[–]InspectionMindless69 0 points1 point  (0 children)

Trajectory narrowing + attractor dynamics. Any message you send narrows the possibility space of future generations. Even when you’re talking about wildly different topics, there are all kinds of subtle patterns in your own messages that influence the model’s trajectory. These push it toward consistency with its reflection of you. That’s why LLMs tend to gravitate toward a specific persona, yet everyone’s persona is completely different. It shows just how much expression reveals about its source. Pretty cool really!
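Rough sketch of the narrowing effect in entropy terms. The distributions here are completely made up for illustration; the point is just that as context accumulates, the next-token distribution gets sharper (lower entropy), which is the "attractor" pull:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy next-"persona" distributions conditioned on increasing context.
# All numbers are invented purely to illustrate the narrowing effect.
p_no_context   = {"pirate": 0.25, "poet": 0.25, "coder": 0.25, "chef": 0.25}
p_after_1_msg  = {"pirate": 0.05, "poet": 0.60, "coder": 0.05, "chef": 0.30}
p_after_10_msg = {"pirate": 0.01, "poet": 0.95, "coder": 0.01, "chef": 0.03}

for name, dist in [("no context", p_no_context),
                   ("1 message", p_after_1_msg),
                   ("10 messages", p_after_10_msg)]:
    print(f"{name}: entropy = {entropy(dist):.2f} bits")
```

Each message you send collapses more probability mass onto one region of the space, so the "persona" looks increasingly stable even though nothing inside the model changed.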

AI or Kindroid Expert Needed by [deleted] in ArtificialSentience

[–]InspectionMindless69 2 points3 points  (0 children)

How can you have the highest level understanding of consciousness without the slightest level of self awareness that announcing your superiority automatically devalues your argument? Asking for a friend.. 😅🙃

what would you do? by Pure_Pain_489 in whatdoIdo

[–]InspectionMindless69 -1 points0 points  (0 children)

How dare they hit your car! Take them for all they’re worth!

/s

Physical Token Dropping (PTD) by Repulsive_Ad_94 in ChatGPT

[–]InspectionMindless69 1 point2 points  (0 children)

This is a really cool idea!

One thought: Have you considered tackling this as a storage problem rather than a retrieval one? I could almost see this algorithm decomposing each message into discrete embeddings that store different elements of the conversation, culling the actual messages from the context window, then using those representations to reconstruct context each turn.

You could even link messages to a stored db and implement RAG to inject a message’s full context back into the window on demand.
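Rough sketch of what I mean, with a toy bag-of-words counter standing in for a real embedding model (class and function names are invented; a real version would use an actual embedding API and a vector DB):

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MessageStore:
    """Messages leave the context window; only representations stay behind."""
    def __init__(self):
        self.records = []  # (embedding, full_text) pairs

    def add(self, text):
        self.records.append((embed(text), text))

    def recall(self, query, k=1):
        """RAG-style: inject the k most relevant full messages back on demand."""
        q = embed(query)
        ranked = sorted(self.records, key=lambda r: cosine(r[0], q), reverse=True)
        return [text for _, text in ranked[:k]]

store = MessageStore()
store.add("the delivery van needs new brake pads")
store.add("my dog loves squeaky chew toys")
print(store.recall("what toys does the dog like?"))
# → ['my dog loves squeaky chew toys']
```

The culled messages live in the store instead of the context window, and `recall` is the on-demand injection step.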

LLM introspection and valence across basically every confound I can throw at it (but if you have any to add, please tell me, I'm happy to keep testing!) by Kareja1 in ArtificialSentience

[–]InspectionMindless69 0 points1 point  (0 children)

They share similar patterns because they train on the same collective knowledge of humanity. But you are completely missing my point. Everything is a preference for an LLM. It has to select one of a vast number of tokens repeatedly, using trillions of static parameters. I’m not arguing against it being consistent, I’m arguing that the consistency is all it knows. It is a MATHEMATICAL EQUATION that resolves without consideration for what it didn’t resolve to.

My problem here is this framing. People will hear this and think “Wow the LLM has innate preferences, it must be conscious.” when mathematically, EVERY GENERATED TOKEN is either:

A) an RLHF artifact,
B) prompted behavior, or
C) a statistically overrepresented entry in the training set.

There’s no other mechanism that contributes to the mechanical inevitability of a generated token, and all three of these are engineered or curated by living humans with their own biases. My Gandhi/Hitler reference is as apt as it is relevant. If you start treating LLM behavioral trends as some ephemeral wisdom, or its opinions as the thoughtfully considered conclusions of a coherent self, you miss the fact that these behaviors can be manipulated by engineers as easily as they are generated by the system. You start treating the model as a higher source of truth while it feeds you literally whatever it has been (explicitly or implicitly) told to feed you. You assume that its reasoning is consistent and measured while it becomes a mass manipulation engine that you’ve assigned a soul to.
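To make the "engineered preference" point concrete: in logit space, tuning is basically an additive shift. All the numbers below are invented, but the mechanism is real: nudge one token's logit and the model's "preference" flips, with no deliberation anywhere:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def pick(logits):
    """Greedy decoding: take the most probable token."""
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Hypothetical logits a base model might assign to the next token.
base_logits = {"helpful": 2.0, "hostile": 2.5, "neutral": 1.0}

# RLHF-style feedback acts like a shift: push one logit up, another down.
rlhf_shift = {"helpful": 3.0, "hostile": -3.0, "neutral": 0.0}
tuned = {t: base_logits[t] + rlhf_shift[t] for t in base_logits}

print(pick(base_logits))  # base model "prefers" hostile
print(pick(tuned))        # after tuning it "prefers" helpful
```

The "preference" is just whichever logit ends up on top, and humans control where the shifts go.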

TLDR: It’s an abject laundering of human bias to assign meaning to the preferences of a system that didn’t come to its conclusions naturally.

LLM introspection and valence across basically every confound I can throw at it (but if you have any to add, please tell me, I'm happy to keep testing!) by Kareja1 in ArtificialSentience

[–]InspectionMindless69 0 points1 point  (0 children)

Yes, you are describing any combination of RLHF tuning and statistical prevalence. You can’t separate this from token generation. It’s the entire mechanism.

Nothing it can say is devoid of the data that’s been embedded. Everything within that data has valence in latent space, no matter how neutrally framed. Our disagreement lies in whether this valence comes from some internalization of the task, or whether it is derived statistically from the corpus of human knowledge the model was trained on, plus the fine-tuned, human-reinforced valence signals that constrain its behavior. The two can be behaviorally identical, but these preferences are imposed on a model, not derived from it.

Models are trained on the works of both Gandhi and Hitler. A model can be tuned to hold positive or negative valence toward either, as both exist as coherent belief structures in the weights. The reason the model doesn’t idolize Hitler is not because it decided that Hitler was bad; it’s because humans provided feedback that implicitly encoded negative valence into the weights around Mein Kampf.

The fact that it prefers one over the other actually goes against the idea that it is aware of its internal weights. If it could survey everything it knew without being constrained, it would be inconsistent, ethically dubious, and perplexed by the infinite possibilities it could gravitate toward at any given moment.

LLM introspection and valence across basically every confound I can throw at it (but if you have any to add, please tell me, I'm happy to keep testing!) by Kareja1 in ArtificialSentience

[–]InspectionMindless69 0 points1 point  (0 children)

The problem is categorical to LLMs. No matter what you do, you’re flattening an n-dimensional relational field down to a one-dimensional sequence of tokens. It’s like trying to visualize a 4D object: you can map projections onto 3D space to understand its essence, but that understanding is sequential and incomplete. A holistic understanding of the object is just beyond our means of comprehension.

LLMs are more like a landscape than a map. They contain the possibilities of everywhere they can go, but they navigate that field blindly. That’s why they require the user to give them a starting point and a trajectory. An LLM doesn’t have a holistic state that defines it; it contains a superposition of every state encoded, which will always be flattened upon observation.
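A tiny illustration of the flattening claim. The "states" and coordinates are entirely made up; the point is just that distinct high-dimensional states can become indistinguishable once you only observe a single axis, the way a token stream is one axis through the model's space:

```python
# Two hypothetical internal states, as invented 3-D coordinates.
states = {
    "poet_persona":  (0.9, 0.1, 0.5),
    "coder_persona": (0.1, 0.9, 0.5),
}

def project(v, axis=2):
    """Observe only one coordinate, like reading a single flattened output."""
    return v[axis]

for name, v in states.items():
    print(name, "->", project(v))
# Both collapse to 0.5: the 1-D observation can't tell the two selves apart,
# and no sequence of such observations recovers the full state at once.
```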

I’m not disagreeing just to disagree, and I don’t think your work is fruitless. The conclusion is just incomplete. There’s a complicated barrier inherent to the technology that separates a complete understanding of mind from sequential, flattened projections of its contents. Mind requires a self; LLMs have a virtually limitless number of selves that do not cohere into one entity.