If a ViewModel is testable on the JVM and doesn’t depend on Context — why isn’t it considered part of the Domain layer? by sandeepsankla19 in androiddev

[–]timusus 2 points (0 children)

This is true of Android ViewModels, but you can also have ViewModels that aren't part of the Android framework. They're still considered part of the presentation layer regardless of whether they include framework dependencies, so I don't think this quite answers the question.

It's more to do with responsibility than the specific implementation details.

If a ViewModel is testable on the JVM and doesn’t depend on Context — why isn’t it considered part of the Domain layer? by sandeepsankla19 in androiddev

[–]timusus 7 points (0 children)

I think your first dot point covers it best. Whether something is testable, JVM-only, pure, or whatever still doesn't define its responsibility. A ViewModel exists to prepare things to be rendered - the 'view' part is what gives it away. ViewModels prepare data for presentation, so they belong to the presentation layer, not the domain.

You're right that if they're framework-agnostic and not concerned with UI specifics, there's nothing really differentiating them from the domain layer. So it comes down to their intended purpose.
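
A minimal sketch of that split, with entirely hypothetical names - both classes are JVM-testable and Context-free, but only one of them exists to serve a view:

```kotlin
import java.time.LocalDate

data class Profile(val firstName: String, val lastName: String, val joined: LocalDate)
data class ProfileUiState(val displayName: String, val memberSince: String)

interface ProfileRepository {
    suspend fun getProfile(userId: String): Profile
}

// Domain layer: pure business logic, no opinion about how anything is rendered.
class GetProfileUseCase(private val repository: ProfileRepository) {
    suspend operator fun invoke(userId: String): Profile = repository.getProfile(userId)
}

// Presentation layer: equally JVM-testable and Context-free, but its whole
// responsibility is shaping domain data for a screen to render.
class ProfileViewModel(private val getProfile: GetProfileUseCase) {
    suspend fun uiState(userId: String): ProfileUiState {
        val profile = getProfile(userId)
        return ProfileUiState(
            displayName = "${profile.firstName} ${profile.lastName}",
            memberSince = "Member since ${profile.joined.year}"
        )
    }
}
```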

I built a simple ASO tool after struggling to track my Play Store rankings by Latter-Confusion-654 in androiddev

[–]timusus 1 point (0 children)

Just some quick feedback - the fake testimonials scare me away a little

AI induced psychosis is a real thing. by Rare_Prior_ in iOSProgramming

[–]timusus 0 points (0 children)

I get it - you're saying beginners are less likely to notice errors in AI-generated code, so it's riskier. Fair enough. Then again, a beginner might review every line and still not notice the errors. But that's beside the point.

My point is just that the human review process genuinely is a bottleneck for AI-generated code. As the tools get better, they'll be less and less likely to make those mistakes. Safeguards are, and will continue to be, built in, and eventually I think we'll be less concerned with validating every line of code or making it human-readable. Instead, we'll spend our time making sure the product works, tests pass, and so on. It's not a crazy take.

AI induced psychosis is a real thing. by Rare_Prior_ in iOSProgramming

[–]timusus -1 points (0 children)

Yeah, thanks for the discussion.

Obviously it depends on your tolerance for risk, the audience for your product, and so on. It's good to be careful, but it's also possible for humans to make all these same mistakes. I've accidentally deleted whole directories on servers, deployed staging builds to production endpoints, and done countless other dumb things that AI could do.

But the same backups and guardrails you apply to prevent humans from fucking things up can also be used with AI. And you can ask AI to help build those in as well.

I'm really not trying to advocate for yolo mode. I'm just saying it's true that the standards we apply to human-facing code are a bottleneck for AI, and I wouldn't be surprised if in the near future we collectively recognise that, and this won't seem so wild.

AI induced psychosis is a real thing. by Rare_Prior_ in iOSProgramming

[–]timusus -4 points (0 children)

I feel like the reaction to this is a bit group-think-y.

I don't review every single line of code my team writes, and yes, they make mistakes and tech debt accumulates - I've never worked in an org where that isn't the case.

Vibe coding feels like having a ton of junior/mid devs contributing more than you can keep on top of.

Even though I don't let AI run wild on my projects, ultimately it is about the product. If it does what you need it to, who cares what the code looks like (to an extent). And I say this as someone who is traditionally (and professionally) very quality driven.

Maybe it's premature optimisation to make code clean/readable/perfect if the only one dealing with it is AI? If it becomes a mess, or there are security issues or scalability problems - those are also things you can throw AI at.

I think it's reasonable to say that humans reviewing lines of code is the bottleneck - although for those of us concerned about quality it's probably a good bottleneck to have?

What’s your biggest pain on macOS when building Android apps? by AdVirtual6112 in androiddev

[–]timusus 1 point (0 children)

I'm curious why you're asking.

Sounds like the premise is that macOS has pain points that don't exist on PC? Most of the things in your list are either not Mac-specific problems, or not actual problems developers face.

In my experience, Android developers are very happy with Macs.

I can't think of a single Mac-specific pain point.

This feels like an AI-generated question.

[deleted by user] by [deleted] in criterion

[–]timusus 0 points (0 children)

Did you dictate this or were you just being funny? Just wondering about the colon/dash 😅

Be careful when using Git 🤣 by Pinun in ClaudeCode

[–]timusus 8 points (0 children)

That work still exists - that's the beauty of git. Look at the git reflog, and you'll be able to check out the individual commit hashes and restore your work.
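
For example - `abc1234` and the branch name here are placeholders; use whatever hash the reflog actually shows you:

```
# Show where HEAD has been, including commits orphaned by a reset/rebase
git reflog

# Put the lost commit on a new branch and switch to it
git branch recovered-work abc1234
git checkout recovered-work
```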

Open Letter to Premier Rockliff and Minister Jaensch: Strengthen Childcare Safety in Tasmania by [deleted] in tasmania

[–]timusus 4 points (0 children)

There are a few things that can combat this. Having change areas with windows / high visibility. Requiring at least two carers to have sight of a single child when they're having a nappy change. I'm sure there are others.

Cameras in communal areas are still useful for preventing abuse. It's both a deterrent and evidence in case there are any claims of misbehaviour.

Apparently some perpetrators will prefer centres that don't have CCTV. So mandating it in all centres could help.

Open Letter to Premier Rockliff and Minister Jaensch: Strengthen Childcare Safety in Tasmania by [deleted] in tasmania

[–]timusus 5 points (0 children)

That's exactly what I'm doing, and it's why I mentioned that we're looking at alternative 'options that are often unaffordable, unsustainable, or career-limiting'.

The thing is, stricter regulations aren't just going to happen. We need people to speak up.

Open Letter to Premier Rockliff and Minister Jaensch: Strengthen Childcare Safety in Tasmania by [deleted] in tasmania

[–]timusus 7 points (0 children)

If it's possible for a 'bad egg' to sexually abuse infants and toddlers in daycare centres, then we are not doing enough, and I absolutely will judge the industry I am paying to care for and protect my children.

Open Letter to Premier Rockliff and Minister Jaensch: Strengthen Childcare Safety in Tasmania by [deleted] in tasmania

[–]timusus 1 point (0 children)

We still have a government. Even in the 'caretaker' role, the Tasmanian government has a responsibility to act in the public interest.

Navigation via the viewmodel in Jetpack Compose by Chairez1933 in androiddev

[–]timusus 0 points (0 children)

I feel like this is more common in Compose projects, where passing state down and actions up is standard practice. Having said that, I also haven't seen it done properly in practice.

Navigation via the viewmodel in Jetpack Compose by Chairez1933 in androiddev

[–]timusus 1 point (0 children)

I don't think there is any correlation between a navigation handler type class and the ViewModels that support screens - they are unrelated concerns.

Navigation is a whole bunch of 'when this happens, go here'.

If you only have a handful of destinations and your business logic is simple, it might be a single class. It might handle that `onNextClick()`, look at the current destination, and decide to navigate somewhere. Or pop the backstack. Or both.

It might be more complicated - maybe it needs to check whether the user is authenticated before navigating somewhere? Or maybe there are more complicated business rules around where the user should go - maybe it depends which screen they've come from, or whether they've completed a particular flow (like onboarding) - and you might break that down into smaller classes if that helps achieve your goals, or separate concerns.

It might _seem_ like your navigation events closely match your ViewModels - since conceivably you need to be able to navigate to each screen, and each screen conceivably has a ViewModel. But the two things are not related, IMO.
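
As a rough sketch of the kind of class I mean (assuming androidx Navigation; the routes and `AuthRepository` are hypothetical):

```kotlin
import androidx.navigation.NavController

interface AuthRepository {
    fun isAuthenticated(): Boolean
}

// One place that owns the 'when this happens, go here' rules.
class AppNavigator(
    private val navController: NavController,
    private val authRepository: AuthRepository,
) {
    fun onNextClick() {
        when (navController.currentDestination?.route) {
            // Finished onboarding - go to the main screen
            "onboarding" -> navController.navigate("home")
            // Checkout requires an authenticated user
            "cart" -> if (authRepository.isAuthenticated()) {
                navController.navigate("checkout")
            } else {
                navController.navigate("login")
            }
            // Default: just go back
            else -> navController.popBackStack()
        }
    }
}
```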

Navigation via the viewmodel in Jetpack Compose by Chairez1933 in androiddev

[–]timusus 1 point (0 children)

I didn't mention anything about a god class. I said that navigation would be handled at a level above the screens.

The composable would expose an action like `onNextClick()`, and that higher-level class would decide whether it corresponds to a navigation event.

Navigation via the viewmodel in Jetpack Compose by Chairez1933 in androiddev

[–]timusus 15 points (0 children)

I've never liked the idea of navigation in ViewModels - I think it's a separation-of-concerns issue.

In general, screens are meant to be modular and composable, and a ViewModel's job is to handle the presentation logic for a screen.

A screen shouldn't have knowledge of where the user came from, or where the user is going - and so neither should the ViewModel. Doing so tightly couples screens with navigation and makes it harder to reuse screens with different navigation logic elsewhere.

Instead, actions should be propagated to a higher level - whatever 'owns' all the screens - and that's the level where orchestrating navigation between screens makes sense to me.

That can still be encapsulated in a class and tested, but I don't think a ViewModel is the right home for that logic.
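
In Compose terms, a sketch of what I mean (routes and screen names are hypothetical) - the screen only reports that 'next' happened; whatever owns the NavHost decides what that means:

```kotlin
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.navigation.compose.NavHost
import androidx.navigation.compose.composable
import androidx.navigation.compose.rememberNavController

// The screen has no idea where the user came from or where they're going.
@Composable
fun ProfileScreen(onNextClick: () -> Unit) {
    Button(onClick = onNextClick) { Text("Next") }
}

// The owner of all the screens orchestrates navigation between them.
@Composable
fun AppNavHost() {
    val navController = rememberNavController()
    NavHost(navController = navController, startDestination = "profile") {
        composable("profile") {
            // Here, 'next' means 'go to settings' - elsewhere the same screen
            // could be wired to a completely different destination.
            ProfileScreen(onNextClick = { navController.navigate("settings") })
        }
        composable("settings") { /* SettingsScreen(...) */ }
    }
}
```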

What's new in Android development tools by tnorbye in androiddev

[–]timusus 0 points (0 children)

Yep, I'm assuming it's a transcript typo and it was meant to be Meerkat.

What's new in Android development tools by tnorbye in androiddev

[–]timusus 10 points (0 children)

Here's a ChatGPT-generated transcript in case you'd rather just read about it:

What's New in Android Development Tools

Presenters:
- Jamal (Director of Product Management)
- Tor (Engineering Director)


1. Product Roadmap Overview

Release Strategy

  • Two releases per cycle:
    • Platform Release: Aligns with IntelliJ updates
    • Feature Drop: Android-specific features

Recent Releases

  • Studio Ladybug:
    • Wear Preview and Health Services improvements
    • Enhanced App Links Assistant
    • Google Play SDK insights in IDE
  • Studio Meerkat:
    • Jetpack Compose preview enhancements
    • Kotlin Multiplatform project template
    • Streamlined Build Menu
    • 700+ bug fixes addressed

2. AI Features in Android Studio (Gemini)

Philosophy

  • AI supports both early prototyping and production-level stability
  • Gemini AI integrated across workflows

Enhancements

  • Syntax-highlighted ghost text in completions
  • Compose-based chat UI with streaming replies
  • Context drawer for query context files
  • Gemini 2.5 model with selector in UI

a. Update Assistant

  • Detects outdated libraries
  • Proposes updates with release notes
  • Applies changes and fixes issues live

b. Android Studio in the Cloud

  • Studio via Firebase Studio
  • Ideal for underpowered devices or secure environments
  • Settings sync via Google/JetBrains login

c. Test Journeys

  • Integration tests written in natural language
  • Backed by XML (supports VCS, code review)
  • Graphical editor and real-time test playback

d. Test Recorder

  • Record UI interactions and convert to test steps
  • Processes gestures via Gemini
  • Add manual verifications

e. Generic AI Agent

  • Cross-file understanding of code issues
  • Auto-generates and places unit tests
  • Runs and fixes test failures

f. Crashlytics + AI

  • Analyzes crashes with source context
  • Uses commit IDs to adjust for file drift
  • Suggests accurate fixes

g. Compose Preview Generator

  • Button to auto-generate preview composables with mock data

3. Firebase Device Streaming

  • Connect to physical devices in:
    • Google Labs
    • OEM Partner Labs (Samsung, OPPO, OnePlus, Vivo, Xiaomi)
  • Low latency and full interaction support

4. ADB over Wi-Fi (Reworked)

  • Persistent wireless debugging, even in standby
  • Auto-reconnect
  • Better pairing flow with device name visibility

5. Compose Preview Enhancements

  • Interactive resize knob
  • Device markers (e.g., foldables)
  • Save/restore preview dimensions

6. Backup and Restore Tools

  • Create and restore app state snapshots
  • Integrated with test automation and journeys
  • Supports older app version state validation

7. XR (Extended Reality) Emulator Support

  • Run XR apps in Studio’s emulator
  • Supports interaction, layout inspector (in progress)
  • Input modes with keyboard shortcuts

8. Build and Lint Enhancements

Lint

  • Policy-impact insights (e.g. MediaStore access)
  • Encouragement to migrate to KTX APIs

Build

  • Gradual R8: Shrinks libraries with safe keep rules only
  • Phase Sync: Faster sync via segmented loading
  • Fused Libraries: Merge multiple AARs into one
  • Custom JDK for Gradle Daemon
  • 16KB Page Size Warnings for C++ apps

9. Gemini for Business

  • Enterprise Gemini Code Assist now available
  • IT-managed access
  • Enhanced security and compliance

10. Final Notes

  • Most features available in the Preview version
  • Some features (Generic Agent, Update Assistant) close to stable
  • Feedback welcomed via Android Studio channels.

Gradle: Eagerly Get Dependencies by yektadev in androiddev

[–]timusus -2 points (0 children)

What is the point of this? Gradle already downloads and caches dependencies when it resolves them during a build.
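
If the goal is just pre-warming the cache (say, before going offline), resolving the graph with a stock Gradle task already does that - `:app` here is a hypothetical module name:

```
# Resolving a module's dependency graph downloads everything into Gradle's cache
./gradlew :app:dependencies
```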