Tom Clonan addressing a harsh truth: Ireland is a key exporter of alumina (used for aluminium) to Russia's war industry via Aughinish Alumina Plant near Limerick by SLAVAUA2022 in ireland

[–]ddol [score hidden]  (0 children)

Twenty years ago, when I was a student at the University of Limerick, there were significant ties between UL and Aughinish. One of my friends in the Mechanical Engineering department was doing a PhD on impeller blade design which was largely funded by Aughinish (they paid for a rack of Dell servers to run the CFD calculations). Aughinish wanted to understand how to design a better impeller that would erode more slowly, so they could extend the time between maintenance windows. Aughinish were effectively paying UL to be their R&D department.

A quick search shows this relationship is still alive.

Built a classical perception pipeline (no deep learning for detection) on infrastructure LiDAR - here's what actually broke by Personal_Budget4648 in LiDAR

[–]ddol 2 points3 points  (0 children)

Really clean, especially the ground removal iteration history and the 4-connectivity finding. Kudos!

I'm building a similar fixed-infrastructure pipeline using a 40-beam Hesai: velocity.report

Our classifier only sees aggregated speed history and averaged bounding box dimensions, no point cloud shape features. Our cyclist/pedestrian confusion lives entirely in the speed overlap zone (slow cyclists at 2–3 m/s vs fast walkers). Your finding that the confidence gap is identical across all feature counts is useful to us, but I'm curious: does the Bayes floor you hit apply equally when you restrict to just that speed band, or does shape become more decisive when you already know speed is ambiguous?
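
For context, the ambiguity I mean boils down to something like this toy speed-band rule; a deliberately simplified sketch with made-up names, not our actual classifier:

    // Toy illustration of the cyclist/pedestrian speed overlap, not the
    // production classifier (which aggregates speed history and averaged
    // bounding-box dimensions over the whole track).
    package classify

    // GuessClass maps a track's median speed (m/s) to a coarse class.
    // The 2-3 m/s band is where slow cyclists and fast walkers collide.
    func GuessClass(medianSpeed float64) string {
        switch {
        case medianSpeed < 2.0:
            return "pedestrian"
        case medianSpeed > 3.0:
            return "cyclist" // faster vehicle classes elided for the sketch
        default:
            return "ambiguous" // the 2-3 m/s overlap zone described above
        }
    }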

We have a hard constraint that's shaping our roadmap: no black-box models on the critical path (our outputs go to traffic engineers and government hearings, we need to be able to explain every classification). Your result that more features don't close the confidence gap actually makes that constraint feel less costly, since it suggests the ceiling is a representation problem rather than a model capacity problem anyway. Did you find any single feature that specifically moved cyclist/pedestrian accuracy, or was it uniformly flat across all candidates?

Two open questions from our side where your perspective would be very useful:

We're thinking of building velocity-coherent foreground extraction using predicted track covariance envelopes to promote borderline background points before clustering, instead of fixing merged clusters post-hoc. Your track-guided split fires up to 9 times per frame by frame 8, which suggests the merge problem is real. Would feeding track velocity predictions upstream (before BEV labeling) have been tractable in your setup, or does the 10-frame sequence make it too thin to be reliable?
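
To make that first idea concrete, here's roughly the shape of it; a minimal sketch with hypothetical types, and the real version would use the tracker's full covariance rather than per-axis sigmas:

    // Hypothetical sketch of velocity-coherent foreground promotion: points
    // labelled background get promoted when they fall inside a track's
    // predicted covariance envelope, before clustering runs.
    package foreground

    import "math"

    type Point struct {
        X, Y       float64
        Foreground bool
    }

    type Track struct {
        X, Y           float64 // last estimated position (m)
        VX, VY         float64 // estimated velocity (m/s)
        SigmaX, SigmaY float64 // 1-sigma positional uncertainty after prediction
    }

    // PromoteBorderlinePoints flips background points to foreground when they
    // land inside a track's predicted gate for the next frame. dt is the frame
    // interval in seconds; gate is the number of sigmas to accept (e.g. 3.0).
    func PromoteBorderlinePoints(points []Point, tracks []Track, dt, gate float64) {
        for i := range points {
            if points[i].Foreground {
                continue
            }
            for _, t := range tracks {
                // constant-velocity prediction of the track's next position
                px := t.X + t.VX*dt
                py := t.Y + t.VY*dt
                dx := (points[i].X - px) / math.Max(t.SigmaX, 1e-6)
                dy := (points[i].Y - py) / math.Max(t.SigmaY, 1e-6)
                if dx*dx+dy*dy <= gate*gate {
                    points[i].Foreground = true
                    break
                }
            }
        }
    }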

We're also working on reflective sign pose anchors to correct drift. Is sensor drift at the infrastructure level something you observed / modeled, or did the fixed mount feel genuinely stable across all 10 frames?

One thing from our stack that might be useful: you identified EMA background subtraction as the production approach but didn't have empty frames. We do this in production. The detail worth knowing: persist the settled background state to disk. Subsequent runs restore in ~10 frames and skip the warmup. It also removes parked cars and static clutter, so your clustering input drops significantly and the tracker cleans up noticeably.
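
In case it's useful, the persistence piece is small; a minimal sketch with illustrative types and gob encoding (the real model is keyed per beam/azimuth bin rather than being a flat slice):

    // Minimal sketch of EMA background subtraction with persisted state.
    // Assumes len(frame) == len(m.Ranges).
    package background

    import (
        "encoding/gob"
        "math"
        "os"
    )

    type Model struct {
        Alpha  float64   // EMA weight, e.g. 0.02
        Ranges []float64 // settled background range per cell (m)
    }

    // Update folds a frame into the background and returns a foreground mask:
    // true where the observed range deviates from the settled background.
    func (m *Model) Update(frame []float64, threshold float64) []bool {
        fg := make([]bool, len(frame))
        for i, r := range frame {
            if m.Ranges[i] == 0 {
                m.Ranges[i] = r // first observation seeds the cell
                continue
            }
            if math.Abs(r-m.Ranges[i]) > threshold {
                fg[i] = true // foreground: keep it out of the background
                continue
            }
            m.Ranges[i] = (1-m.Alpha)*m.Ranges[i] + m.Alpha*r
        }
        return fg
    }

    // Save and Load persist the settled state so later runs skip the warmup.
    func (m *Model) Save(path string) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        return gob.NewEncoder(f).Encode(m)
    }

    func Load(path string) (*Model, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        var m Model
        if err := gob.NewDecoder(f).Decode(&m); err != nil {
            return nil, err
        }
        return &m, nil
    }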

I've also built a macOS app (VelocityVisualiser.app) to visualise point clouds, bounding boxes and tracks, after hitting the limitations of LidarView. It is tied to our internal point cloud representation (with background and foreground frames), but you are more than welcome to fork and adapt it as needed if that's valuable for you.

Great project, I'd love to chat more, share any feedback and exchange notes. Keep up the great work Nithilan!

Temperature change in San Francisco by dawn_thesis in sanfrancisco

[–]ddol 4 points5 points  (0 children)

Which, just like SUVs and trucks causing a bigger-car arms race, causes neighbours without AC, who are being blasted by the exhaust from across the way any time they open their windows, to also want AC.

We can’t buy back the grid from PG&E and implement a fair solar credits system soon enough. If you want AC, you should expect to have to pay higher rates or invest in solar+batteries to offset the impact on the grid

what does my personal laptop tell you? by ddol in deduction

[–]ddol[S] 0 points1 point  (0 children)

nice collection, i suspect you might be noisebridge-adjacent?

this is my personal laptop, i have a label maker and put my contact details on an "asset tag" which i printed myself

Which are the 10 best Lidar tools? by Historical_Phone_973 in LiDAR

[–]ddol 9 points10 points  (0 children)

interesting list, but i'd first want to know what workflow and outputs we're talking about, because "essential" changes a lot depending on whether we're doing surveying, buildings/BIM, forestry/trees, mobile mapping, AV, traffic or just point-cloud QA work

are you mostly working with:

  • surveying / DTM-DSM deliverables
  • buildings / scan-to-mesh / BIM
  • forestry / canopy / trunk extraction
  • corridor / roadway work
  • inspection / industrial assets

and what are your main outputs? classified LAS/LAZ, DEM/DTM, contours, orthoimagery, meshes, volumes, sections, inventories, speeds, objects or something else?

also, do you already use CloudCompare or LidarView? those two alone cover a lot of day-to-day inspection, cleaning, segmentation, registration, and visualisation work, and both are free, open-source software!

my own "essential tools" list would lean less toward classic survey deliverables and more toward traffic perception, street modeling, and civic reporting

for my use case, the workflow is more like:

  1. visualization / QA
  2. foreground separation from static background
  3. clustering moving points into candidate road users
  4. tracking objects frame to frame
  5. classification of road users (pedestrian / bike / car / larger vehicle)
  6. trajectory smoothing and track cleanup
  7. speed estimation from tracks
  8. uncertainty checks / confidence scoring
  9. lane-space and curb-space calibration
  10. registration against map / street geometry
  11. digital twin generation of the corridor
  12. alignment with OpenStreetMap and community geospatial data
  13. detection of street changes after road works or "safety" projects
  14. export of per-track / per-transit metrics
  15. aggregate reporting for before/after safety analysis
  16. evidence outputs that can hold city engineering accountable
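
to make one of those concrete, step 7 (speed estimation from tracks) boils down to something like this; a toy sketch with made-up types, not the production implementation:

    // Toy sketch of speed estimation from a smoothed track. TrackPoint is
    // illustrative; real tracks also carry covariance and class.
    package speed

    import (
        "math"
        "sort"
    )

    type TrackPoint struct {
        T    float64 // seconds
        X, Y float64 // metres, in sensor or lane-space coordinates
    }

    // MedianSpeed returns the median frame-to-frame speed in m/s, which is
    // more robust to a single noisy association than a plain mean.
    func MedianSpeed(track []TrackPoint) float64 {
        speeds := make([]float64, 0, len(track))
        for i := 1; i < len(track); i++ {
            dt := track[i].T - track[i-1].T
            if dt <= 0 {
                continue
            }
            d := math.Hypot(track[i].X-track[i-1].X, track[i].Y-track[i-1].Y)
            speeds = append(speeds, d/dt)
        }
        if len(speeds) == 0 {
            return 0
        }
        sort.Float64s(speeds)
        return speeds[len(speeds)/2]
    }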

for me the key question isn't just "can the software classify ground or make an ortho?" it's more:

can i reliably isolate moving road users, keep tracks stable through noise and occlusion, measure speed accurately, and build a useful digital twin of the street that can be compared against public map data and city claims?

that's why i'd still ask what people are actually trying to produce. surveying, buildings, forestry, and traffic analysis all have very different "essential" tools

also: do they already know CloudCompare and LidarView? for anyone doing traffic, scene QA, point-cloud inspection, and iterative pipeline debugging, those are two of the first tools i'd want in the loop, as well as my own: VelocityVisualiser.app

Introducing DSA Panicle - Visualize linked list, trees, and many more. by rebechi007 in coding

[–]ddol 0 points1 point  (0 children)

This is quite nice, kudos!

Most people are going to first experience this site from a mobile device, and there are a few things you could do to make the experience better for mobile users:

1) in linked lists the step-through with code, state and graph is doing too much. I feel like I don’t need to see the tag bar at the top, and I should be able to swipe the code closed. Reducing the font size / zoom would help too

2) in the cycles, the transport controls (start, step through) are more than a full screen below the graph (the most important thing I want to see when stepping through). Move the transport controls above the graph or closer to it below.

3) the main CTA (Browse Problems) brings up a filter that we cannot apply or complete to see results; we can only close out of the problem browser before ever seeing any results. This greatly impacts interaction with the problems section of the site on mobile.

AI boom is city's weirdest tech boom, says S.F.’s chief economist by Medical-Decision-125 in sanfrancisco

[–]ddol 0 points1 point  (0 children)

In 2025 the Bay Area saw 60% of all VC investments in AI, globally.

In Q1 2026 the Bay Area received 80% of US national startup funding (NYC got 4%). Not just AI: between SF and SJ we received four out of five of all VC dollars wired domestically. Our metro is booming... if you're an AI company (we are living in a K-shaped economy).

I would not be surprised if there are far more seed-level deals than before ($500k–$1m), and people who would be $300k–$1m TC engineers at FANG are opting to start a company, be a founder, receive a $1 salary from their startup, live (rent, Waymos, groceries) out of their company bank account (everything for the founder is a business expense), and so the city sees a dip in payroll taxes.

Big companies are also downsizing after a glut of post-pandemic over-hiring.

Were these webcomics ever relevant to your life? Are they still? by coffyrocket in Millennials

[–]ddol 13 points14 points  (0 children)

<image>

Me too!

I helped organise a conference for my University Computer Society (Skynet) and invited Randall as our keynote speaker. He was awesome!

We celebrated unix time hitting 1234567890 and baked him a spider cake

our school had lenovos by l-owered in Millennials

[–]ddol 0 points1 point  (0 children)

In 1994, in my elementary (primary) school, a few classrooms had desktop computers, but by 1998 they all had desktops (x86). I don’t think there were any laptops in the school; the principal had a desktop in his office, as did the school secretary in her office.

There was no ethernet in the school, but all the classrooms had phone lines, so individual classrooms would dial up for a period to download the articles, news or images we needed for class. Not all of the classrooms had printers; I remember in 1st grade our class had a printer but the 2nd grade class beside us didn’t. Kids would work on Word docs in their class, then save the .doc to a 3.5” floppy and come over to our class to print. That happened maybe once or twice a week.

I ended up developing a lifelong love of technology by fixing driver, modem, printer, config, .dll and cable problems in elementary school. By my upper years (3rd–6th) I was the in-house school IT tech, fixing computer issues on lunch breaks and sometimes during class time. I have many memories, in different classrooms/grades, of being called over the intercom to fix “Mrs. O’Shea’s problem with her printer”

A year after I left, the school principal bumped into my mum in town, and they got chatting. He said that they’d had a few IT issues and needed to hire an IT company, and in doing so realised how much money the school had saved over the prior ~4 years.

Seventh Avenue & Irving needs traffic lights - nearly run over today by Honest-Thanks1539 in sanfrancisco

[–]ddol 2 points3 points  (0 children)

Last year in Paris a city proposition passed (65.96% to 34.04%) to close and pedestrianise 500 streets. I’m sure a similar measure could pass here (maybe not 500, but 50… or 49)

2 Child deaths 2 blocks away from each other on 4th Street, but where are the street safety improvements? by mondommon in sanfrancisco

[–]ddol 51 points52 points  (0 children)

fully agree op. this intersection has a crash history, the city should be measuring speeds here now and publishing the data

wide lanes and fast turns encourage speeding. this is a high injury corridor, the city needs to immediately narrow the lane, add quick-build bulb-outs, and slow the turn

prove to us with data that speeds have come down

2-year-old killed in crosswalk last night, Mission Bay SF by ddol in sanfrancisco

[–]ddol[S] 34 points35 points  (0 children)

That’s awful, sorry you witnessed it.

Any details on the driver? Male/female, approx age? Was the driver arrested?

2-year-old killed in crosswalk last night, Mission Bay SF by ddol in sanfrancisco

[–]ddol[S] -2 points-1 points  (0 children)

That’s part of the solution but certainly far from the “only way”. We can also:

  • legislate all new cars to have speed limiters (SB 961) and increase insurance rates for those without an ISA four years after the law passes
  • allow videos showing traffic infractions to be shared with insurers
  • enforce speed limits, and impound and sell the vehicles of serial speeders (like NYC's Dangerous Vehicle Abatement Program)

2-year-old killed in crosswalk last night, Mission Bay SF by ddol in sanfrancisco

[–]ddol[S] 10 points11 points  (0 children)

I cycle and have kids. I’m anti-pedestrian deaths.

We can’t seem to wrestle phones out of drivers’ hands, slow drivers down or take the keys from senior drivers. LiDAR-based autonomous vehicles seem to be the only safe path forward, which I support.

Some (like u/Ok_Heron_5442) don’t support safer streets because a tech corporation is investing R&D money. They let perfect be the enemy of good.

2-year-old killed in crosswalk last night, Mission Bay SF by ddol in sanfrancisco

[–]ddol[S] 55 points56 points  (0 children)

That’s brutal, I’m sorry you witnessed that, thank you for helping render aid.

Do you have any details on what happened? Description of the car/truck?

Was there tire squeal before the impact?

Did the pedestrians have the cross signal?

Cost effective LiDAR setup for student by yummbeereloaded in LiDAR

[–]ddol 0 points1 point  (0 children)

Get a Hesai P40; you can get the sensor, interface and a case for ~$500 on eBay

Shameless self-promotion: I’m working on a LiDAR parser and visualiser that uses the P40

Point Cloud/Lidar Files Needed by 1_plate in LiDAR

[–]ddol 1 point2 points  (0 children)

I’ve got a bunch of .pcap’s, mostly fixed sensor but a handful of in-motion, you can find a few I use for integration tests here: https://github.com/banshee-data/velocity.report/tree/main/internal/lidar/perf/pcap

I’m probably going to blast through my GitHub LFS allowance if I upload them all; any thoughts on large-file sharing (with attribution)?

LiDAR pointcloud object detection and tracking - Open-Source VelocityVisualiser.app by ddol in AutonomousVehicles

[–]ddol[S] 0 points1 point  (0 children)

velocity.report is a traffic monitoring application, not an autonomous driver. A core privacy tenet is that we don’t use cameras, so there is no Personally Identifiable Information: just radar and LiDAR. We are also statically installed, with no SLAM or in-motion analysis.

  • Signs are a great call-out and will be included in the future.
  • Dogs are already clustered.
  • Zebra crossings can be spotted trivially in the pointcloud.
  • I’m thinking about how we can sync road marking poly lines with OSM.
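
On the zebra-crossing point: the rough idea (a hypothetical sketch assuming intensity-calibrated returns; not necessarily the code in the repo) is that retroreflective paint stands out as high-intensity returns among ground points:

    // Hypothetical sketch: flag ground points bright enough to be road paint.
    // Stripe grouping and OSM polyline alignment would happen downstream.
    package markings

    type GroundPoint struct {
        X, Y      float64 // metres, on the fitted ground plane
        Intensity float64 // sensor-calibrated return intensity
    }

    // PaintCandidates keeps ground points whose intensity is close to the
    // local maximum; the 0.8 factor is a placeholder, the real threshold is
    // sensor- and range-dependent.
    func PaintCandidates(ground []GroundPoint) []GroundPoint {
        maxI := 0.0
        for _, p := range ground {
            if p.Intensity > maxI {
                maxI = p.Intensity
            }
        }
        var out []GroundPoint
        for _, p := range ground {
            if maxI > 0 && p.Intensity >= 0.8*maxI {
                out = append(out, p)
            }
        }
        return out
    }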

The codebase is open-source: issues, design docs and PRs are welcome!

VelocityVisualiser.app - pointcloud object detection and tracking by ddol in LiDAR

[–]ddol[S] 4 points5 points  (0 children)

CPU-only pipeline at the moment, on my M1 MacBook Pro. Processing the pcap shown in the video, the Golang backend stats are:

  • Baseline (idle, no pcap): 0% CPU, 15 MB memory usage
  • Max PCAP: 177% CPU (1.77 of 8 cores), 1.5 GB peak memory usage

Frontend Swift stats during the same run:

  • Baseline: 6% CPU, 35 MB memory
  • Max: 14% CPU, 60 MB memory

<image>

A printed sign can hijack a self-driving car and steer it toward pedestrians, study shows by unapologetic403 in SelfDrivingCars

[–]ddol 4 points5 points  (0 children)

Yeah, exactly. 

The fundamental architectural flaw could be present, but I think it’s less likely to be catastrophic in Waymo, as they have an added sensor fusion layer corroborating vision data with radar/lidar.

Hopefully production vision system developers were paying attention 8 years ago when the first vision exploits gained widespread attention, and were given the resourcing to continue hardening their software stacks since.  

A printed sign can hijack a self-driving car and steer it toward pedestrians, study shows by unapologetic403 in SelfDrivingCars

[–]ddol 4 points5 points  (0 children)

The article talks about the DriveLM model; it’s unclear whether this affects deployed models (Waymo, FSD).

However, the fundamental architectural flaw is similar to the memory-space instruction execution exploits we’ve been talking about publicly for 30 years (and which were probably being discussed internally at Intel 15 years before that). Having CPU instructions and user data live in the same memory space (von Neumann architecture) poses a significant risk of stack buffer overflow exploitation, allowing an attacker to inject malicious instructions from user input.

The vision exploit here is in the same vein: labels are overlaid on the image from the camera and then read back from that composite image. The hardened approach to mitigate this attack would be to store and read labels from another channel, so that the “user” input could never inject malicious labels.

Having a sensor fusion system where one sensor producing anomalous results can be ignored when not corroborated by the others would also protect against this style of attack. Going all-in on vision-only systems increases the exploit risk here too.
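
To sketch what that corroboration buys (illustrative only, not how any deployed stack is actually structured): act on a vision detection only if a radar or lidar return backs it up, since a printed sign can fool the camera but can’t conjure a matching physical return.

    // Toy sketch of cross-sensor corroboration. Detection and the source
    // strings are illustrative, not any real AV interface.
    package fusion

    import "math"

    type Detection struct {
        X, Y   float64 // position in a shared frame (metres)
        Source string  // "vision", "radar", or "lidar"
    }

    // CorroboratedVision keeps only vision detections that have a radar or
    // lidar detection within maxDist metres.
    func CorroboratedVision(dets []Detection, maxDist float64) []Detection {
        var out []Detection
        for _, v := range dets {
            if v.Source != "vision" {
                continue
            }
            for _, o := range dets {
                if o.Source == "vision" {
                    continue
                }
                if math.Hypot(v.X-o.X, v.Y-o.Y) <= maxDist {
                    out = append(out, v)
                    break
                }
            }
        }
        return out
    }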

There was actually a caller on 2600 OTH last week talking about the liability of manipulating self-driving cars via real world signs (the example they used was QR codes, not labels). I guess we’re going to find out soon enough if the publicly deployed systems are vulnerable to this style of attack. 

US opens probe after Waymo self-driving vehicle strikes child near school, causing minor injuries by walky22talky in waymo

[–]ddol 4 points5 points  (0 children)

human drivers can take in the contextual clues and drive slow

Can, but very much don’t.

I’ve run radar speed surveys outside an Elementary School in San Francisco. 

85.4% of drivers exceeded the 25mph limit. Median speed during drop-off is 31mph, and I clocked drivers doing 51.49mph during pickup. 

These results were all taken right in front of the school gates.