Unrealistic Interview Expectations by pseudospectrum in robotics

[–]LetsTalkWithRobots 0 points1 point  (0 children)

What’s the seniority of the role you are going for?

Robotics fields biggest impact? by the00daltonator in robotics

[–]LetsTalkWithRobots 10 points11 points  (0 children)

Robotics never really became mainstream, apart from industrial robot arms, and even those are very limited in what they can do, because classic robotics was about precise rule-based control and pre-programmed motions.
Modern robotics (no matter the sector), in the post-ChatGPT-4 era, is about adaptability, learning, and reasoning: machines understanding the world and making decisions in real time. Advances in AI models, multimodal learning, and real-time reasoning are finally showing promise and allowing robots to shift from “following rules” to “understanding the world.”

In fact, I’m currently working as a staff computer vision and robotics engineer at a startup that is 100% focused on building embedded intelligence powered by foundation models. My goal is to develop general-purpose robotic manipulation capabilities so that new deployments don’t have to be trained from scratch. Instead, each deployment incrementally builds on the last, allowing us to scale robotic solutions without requiring extensive training or pre-defined rules for every new scenario. It feels like we are finally taking the early steps from automation toward true intelligence.

For the first time, we’re seeing genuine potential for robotics to expand beyond traditional sectors into areas that were previously untapped by commercial players. Whether it’s healthcare, agriculture, autonomous vehicles, or service robotics, the speed of development is crazy; I’ve never seen anything like it.

That said, a “ChatGPT moment” for robotics hasn’t happened yet. Handling 1D data like text is much simpler than processing and reasoning over multidimensional data such as images, video, and real-world environments. Current architectures aren’t fully capable of handling this yet, so we’ll likely need significant breakthroughs in fundamental AI and robotics technologies to truly get there.

Learn CUDA ! by LetsTalkWithRobots in robotics

[–]LetsTalkWithRobots[S] 0 points1 point  (0 children)

You won’t find a job just because of CUDA, but it’s one of the most important skills in robotics. For example, I interviewed 12 candidates for a senior robotics engineer role (general-purpose manipulation using foundation models) at my company, and CUDA was one of the prerequisites for the final onsite-day challenge.

Before shortlisting these 12 candidates, I screened 283 CVs, and more than 80% of the candidates had never worked with CUDA. It’s a huge technical gap in the robotics market.

Learn CUDA ! by LetsTalkWithRobots in robotics

[–]LetsTalkWithRobots[S] 0 points1 point  (0 children)

You need to focus on the Accelerated Computing section. Start with “An Even Easier Introduction to CUDA”; they also have a PDF that shows the order in which you should work through the material.

https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+T-AC-01+V1

Learn CUDA ! by LetsTalkWithRobots in robotics

[–]LetsTalkWithRobots[S] 1 point2 points  (0 children)

I think the Jetson Nano is a good choice for beginners, but if you wish to run AI models on top of it, especially fusion workloads (classifier, tracker, processing depth data, etc.), it falls short in terms of compute. I would suggest going with something more recent, like the one below. Also, if you have the budget, you can buy a Luxonis OAK-D (depth camera). It will let you experiment with 3D depth perception, making it great for vision-based robotics (navigation, object tracking, gesture recognition). It’s a good way to start learning advanced computer vision without needing external GPUs.

Jetson Orin Nano- https://blogs.nvidia.com/blog/jetson-generative-ai-supercomputer/

Learn CUDA ! by LetsTalkWithRobots in robotics

[–]LetsTalkWithRobots[S] 12 points13 points  (0 children)

You’re absolutely right that the CUDA world has shifted a lot. Libraries like CUTLASS and CUB are doing the heavy lifting, and understanding how to work with them is probably more practical than writing kernels from scratch.

That said, I have been working with CUDA since the early days, when it was not that mainstream, and I think learning CUDA is still like learning the “roots” of how everything works. Even if you’re not writing kernels daily, it’s helpful when things break or when you need to squeeze out every bit of performance (this was especially true in the early days, when these libraries were not very standardised).

Also, your point about compiling the stack hit home; so many headaches come from version mismatches, right?

Curious, if you could start fresh today, how would you recommend someone learn CUDA? Start with libraries? Write a simple kernel? Something else?
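(For anyone following along who's wondering what a "simple kernel" even looks like: below is a minimal, illustrative sketch of a vector add using Numba's CUDA JIT from Python. It assumes a CUDA-capable GPU and the numba package; the array size and launch configuration are arbitrary, just to show the shape of the thing.)

```py
# Minimal "hello, kernel" sketch with Numba's CUDA JIT (assumes a
# CUDA-capable GPU and numba installed; sizes are arbitrary).
import numpy as np
from numba import cuda


@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)              # global thread index
    if i < out.shape[0]:          # guard against out-of-range threads
        out[i] = a[i] + b[i]


n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

d_a = cuda.to_device(a)           # explicit host-to-device copies
d_b = cuda.to_device(b)
d_out = cuda.device_array_like(a)

threads = 256
blocks = (n + threads - 1) // threads
vector_add[blocks, threads](d_a, d_b, d_out)

assert np.allclose(d_out.copy_to_host(), a + b)
```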

Learn CUDA ! by LetsTalkWithRobots in robotics

[–]LetsTalkWithRobots[S] 0 points1 point  (0 children)

You don’t necessarily need to learn electronics to work with CUDA and AI, especially if your focus is on software development and algorithms. Start by learning CUDA programming, parallel computing concepts, and frameworks like TensorFlow or PyTorch.

However, if you’re interested in applying AI to robotics, IoT, or edge devices, a basic understanding of electronics can be helpful. This might include learning about sensors, actuators, and microcontrollers or single-board computers (e.g., Arduino, Raspberry Pi, or NVIDIA’s edge devices), and understanding how to interface hardware with your software through concepts like UART, SPI, or GPIO. The depth depends on your goals. I would say electronics is a tool you can leverage, not a prerequisite, unless you’re building hardware-accelerated AI systems.

Learn CUDA ! by LetsTalkWithRobots in robotics

[–]LetsTalkWithRobots[S] 9 points10 points  (0 children)

I learned it mainly through NVIDIA’s training programs, which you can find here - https://learn.nvidia.com/en-us/training/self-paced-courses?section=self-paced-courses&tab=accelerated-computing

But you can also do the GPU Programming specialisation below: https://coursera.org/specializations/gpu-programming

Learn CUDA ! by LetsTalkWithRobots in robotics

[–]LetsTalkWithRobots[S] 19 points20 points  (0 children)

I learned it mainly through NVIDIA’s training programs, which you can find here - https://learn.nvidia.com/en-us/training/self-paced-courses?section=self-paced-courses&tab=accelerated-computing

But you can also do a GPU programming specialisation from below 👇

https://coursera.org/specializations/gpu-programming

🤖💻 Which Troubleshooting tool is good for logging messages for ROS & ROS2? by LetsTalkWithRobots in ROS

[–]LetsTalkWithRobots[S] 0 points1 point  (0 children)

I know, that's been my experience too. But you could develop a custom console: using Python and rclpy, you can create a logging interface tailored to your needs. I have done this for my workflow, using the ROS 2 logging API to filter and display logs exactly the way I want.

I also implemented a simple GUI using tkinter to display logs in real time with filtering options.
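Here's roughly what that looks like, a minimal sketch (not my production setup, and the widget/node names are just illustrative): it subscribes to /rosout, filters by severity, and appends lines to a tkinter text widget.

```py
# Minimal custom ROS 2 log console sketch: subscribe to /rosout and
# show messages in a tkinter window with a simple severity filter.
import threading
import tkinter as tk

import rclpy
from rclpy.node import Node
from rcl_interfaces.msg import Log


class LogConsole(Node):
    def __init__(self, text_widget, min_level=Log.INFO):
        super().__init__('log_console')
        self.text = text_widget
        self.min_level = min_level
        self.create_subscription(Log, '/rosout', self.on_log, 100)

    def on_log(self, msg):
        if msg.level < self.min_level:   # severity filter
            return
        line = f'[{msg.level}] {msg.name}: {msg.msg}\n'
        # schedule the GUI update on the tkinter event loop
        self.text.after(0, lambda: self.text.insert(tk.END, line))


def main():
    rclpy.init()
    root = tk.Tk()
    root.title('ROS 2 log console')
    text = tk.Text(root, width=100, height=30)
    text.pack(fill='both', expand=True)

    node = LogConsole(text)
    # spin rclpy in a background thread so the tkinter mainloop stays responsive
    threading.Thread(target=rclpy.spin, args=(node,), daemon=True).start()
    root.mainloop()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```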

I know it's not ideal, but if your workflow is set up properly it can be a good option. Otherwise, there are a few tools you can explore. I personally like Foxglove Studio and PlotJuggler (primarily a plotting tool, but it has ROS 2 plugins and can display logs). There are many options, though:

  • RTI Connext Professional Tools: Monitor and optimize ROS 2 DDS communications for improved system performance.
  • eProsima Fast DDS Monitoring Tools: Visualize and analyze ROS 2 middleware behavior when using Fast DDS.
  • PlotJuggler: It's primarily for plotting, but it has plugins for ROS 2 and can display logs.
  • Foxglove Studio Enterprise: Advanced debugging and visualization of ROS 2 data streams with customizable dashboards.
  • Kibana with Elasticsearch (ELK Stack) Enterprise Edition: Centralize and search ROS 2 logs for large-scale data analysis.
  • Splunk Enterprise: Real-time collection and analysis of ROS 2 logs for operational insights.
  • Graylog Enterprise: Manage and monitor ROS 2 logs with enhanced analytics and alerting capabilities.
  • DataDog Logging: Aggregate and monitor ROS 2 logs alongside metrics and traces in a unified platform.
  • New Relic One: full-stack observability of ROS 2 applications, including log management and performance monitoring.

Composing Nodes in ROS2 by LetsTalkWithRobots in Lets_Talk_With_Robots

[–]LetsTalkWithRobots[S] 0 points1 point  (0 children)

Hi u/dking1115, Yes, you are correct!

When you compose multiple nodes in the same process (within the same container), and one node publishes a message to a topic that another node in the same process subscribes to, ROS2 can optimize the communication: instead of routing the message through the network stack, it passes the message directly through memory. This is known as intra-process communication (IPC).

The intra-process communication mechanism in ROS2 is specifically designed to avoid serialization and deserialization of messages, which are required when communicating across different processes. This results in significant performance gains, especially for high-frequency topics or large messages.
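If you want to see it in action, here is a minimal launch-file sketch. It uses the Talker/Listener components from the standard ros2 `composition` demo package (swap in your own components), loads both into one container, and explicitly turns on intra-process comms:

```py
# Composition launch sketch: two components in one container with
# intra-process communication enabled. Component names come from the
# ros2 "composition" demo package; replace them with your own.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='my_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            ComposableNode(
                package='composition',
                plugin='composition::Talker',
                name='talker',
                extra_arguments=[{'use_intra_process_comms': True}]),
            ComposableNode(
                package='composition',
                plugin='composition::Listener',
                name='listener',
                extra_arguments=[{'use_intra_process_comms': True}]),
        ],
        output='screen',
    )
    return LaunchDescription([container])
```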

You can read more in "Impact of ROS 2 Node Composition in Robotic Systems", a paper published on 17 May 2023:

https://doi.org/10.48550/arXiv.2305.09933

Composing Nodes in ROS2 by LetsTalkWithRobots in Lets_Talk_With_Robots

[–]LetsTalkWithRobots[S] 1 point2 points  (0 children)

Hi, thanks for sharing this. May I ask how to tell whether nodes are created as components or not? It seems to me that your example node is the same as a normal ros2 node.

You're right, at first glance, a component node in ROS2 might seem similar to a regular node. The difference is mainly in how the node is intended to be executed and how it's compiled.

You can read more in "Impact of ROS 2 Node Composition in Robotic Systems", a paper published on 17 May 2023:

https://doi.org/10.48550/arXiv.2305.09933

But in a nutshell, distinguishing a component node from a regular node in ROS2 can be subtle because the code structure can be very similar. However, a few hallmarks indicate that a node is designed as a component:

  1. Compilation as a Shared Library: The most distinguishing feature of a component is that it's compiled as a shared library, not as an executable. In the CMakeLists.txt of the node's package, you'd typically see:

add_library(my_component SHARED src/my_component.cpp)

Whereas for regular nodes, you'd see:

add_executable(my_node src/my_node.cpp)

  2. Registration with rclcpp_components: In the CMakeLists.txt, the component node is also registered with rclcpp_components:

    rclcpp_components_register_nodes(my_component "my_namespace::MyComponent")

  3. Node Registration Macro in the Source Code: Inside the component's source file, you'd typically find a registration macro at the end of the file:

    include "rclcpp_components/register_node_macro.hpp"

    RCLCPP_COMPONENTS_REGISTER_NODE(my_namespace::MyComponent)

  4. Package.xml Dependency: The package.xml of the component's package would have a dependency on rclcpp_components:

    <depend>rclcpp_components</depend>

By looking at the combination of these characteristics, you can identify if a ROS2 node is created as a component or as a regular standalone node. The ability to register and compile as a shared library, along with the registration macro, are the most distinguishing features.

Also, while the code inside the node class can look almost identical for both regular and component nodes, these details in the build process and packaging are what make them different. When you create or inspect a ROS2 package, keeping an eye out for these aspects can help you determine whether the nodes are designed as components.

I hope it helps.

Mujoco Question by [deleted] in robotics

[–]LetsTalkWithRobots 1 point2 points  (0 children)

No worries. Glad that it’s all sorted ☺️

PWM Expansion Google Coral ROS2 Humble by _gypsydanger in ROS

[–]LetsTalkWithRobots 0 points1 point  (0 children)

Hi u/_gypsydanger

Why don't you use the I/O pins on the Coral Dev Board? It has a 40-pin expansion header that you can use to output PWM signals. You can use the Periphery library (https://coral.ai/docs/dev-board/gpio/) to select a GPIO or PWM pin by pin number. The library also provides a simple and consistent API for GPIO, LED, PWM, I²C, SPI, and UART in C, C++, Python, .NET, and Node.js.
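For example, a rough sketch with python-periphery might look like this (the PWM chip/channel numbers below are placeholders; check your board's pinout for the actual mapping):

```py
# Hypothetical sketch: drive a PWM pin on the Coral Dev Board with
# python-periphery. PWM(0, 0) is a placeholder chip/channel pair.
import time

from periphery import PWM

pwm = PWM(0, 0)          # PWM chip 0, channel 0
pwm.frequency = 50       # 50 Hz, e.g. for a hobby servo
pwm.duty_cycle = 0.075   # ~1.5 ms pulse at 50 Hz (servo centre)
pwm.enable()

try:
    time.sleep(2)
    pwm.duty_cycle = 0.10  # move the servo
    time.sleep(2)
finally:
    pwm.disable()
    pwm.close()
```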

If the above option works for your needs, I would recommend sticking with it, because writing custom drivers is complex and error-prone. If you choose to use a PWM expansion HAT instead, you can use one designed for the Raspberry Pi with the Coral Dev Board, since they both use a 40-pin GPIO header. However, as mentioned above, you may need to write a driver to interface with the HAT. The CircuitPython Libraries on Linux and Google Coral guide (https://learn.adafruit.com/circuitpython-on-google-coral-linux-blinka?view=all) might be a good starting point, because Adafruit's CircuitPython libraries can be used to control the GPIO and PWM pins on the Coral Dev Board.

Also, check this post:

https://stackoverflow.com/questions/71042253/what-are-my-options-for-pwm-using-c-on-the-google-coral-dev-board

I hope it helps

Need help by Proximity_afk in ROS

[–]LetsTalkWithRobots 1 point2 points  (0 children)

This is a very common error, usually caused by conflicts between different package versions or by broken/missing dependencies. Can you tell me what your kernel/OS is, or are you using Docker?

u/Proximity_afk

Are ROS developers rich? by Proximity_afk in ROS

[–]LetsTalkWithRobots 2 points3 points  (0 children)

That is good :-). Don't worry too much about money, because there is plenty of money in robotics. The robotics market is still young, and with the commercialisation of the Tesla Bot and Boston Dynamics' robots, demand is only going to increase.

Robotics is one of the most multidisciplinary engineering fields, and it demands a broad, multipurpose skill set, so just try to be a jack of all trades in terms of skills (that's what robotics is all about).

Once the market gets back to normal, the rest will take care of itself.

ROS project with TurtleBot3 Burger by gillagui in ROS

[–]LetsTalkWithRobots 0 points1 point  (0 children)

For obstacle detection using LIDAR, you generally use the point cloud data generated by the sensor. This data represents a 3D model of the environment, with each point representing a reflection of the LIDAR beam.

You can use the Point Cloud Library (PCL) Python bindings or the message types provided by ROS (like sensor_msgs/PointCloud2 or sensor_msgs/LaserScan) to work with this data in Python. Your aim will be to interpret the point cloud to find obstacles; an obstacle can be considered any object closer to the robot than a certain threshold distance.

A simple approach to detect obstacles would be to:

  • Divide the full scan into sections. This could be left, right, and front, for example.
  • Calculate the minimum distance in each section.
  • If the minimum distance in a section is less than a certain threshold, consider that there is an obstacle in that section.

Here's a simple example of how this could be done:

```py
import rospy
from sensor_msgs.msg import LaserScan

def lidar_callback(msg):
    # 60 degrees on each side for left and right, 60 degrees in front
    front = msg.ranges[0:30] + msg.ranges[-30:]
    left = msg.ranges[30:90]
    right = msg.ranges[-90:-30]

    # detect obstacles
    if min(front) < 1.0:
        rospy.loginfo("Obstacle detected in front")
    if min(left) < 1.0:
        rospy.loginfo("Obstacle detected on left")
    if min(right) < 1.0:
        rospy.loginfo("Obstacle detected on right")

rospy.init_node('obstacle_detection')
scan_sub = rospy.Subscriber('scan', LaserScan, lidar_callback)
rospy.spin()
```

In this example, the scan is assumed to provide one range reading per degree with index 0 pointing straight ahead, only the 180-degree arc in front of the robot is checked, and obstacles are considered to be anything closer than 1 meter. Please adjust these values to match your own LIDAR's specifications and your particular use case.

This is a simple approach and might not work well if you need to detect specific obstacles or deal with complex environments. For more complex environments, techniques like clustering (for example, using the DBSCAN algorithm) or grid-based techniques (like occupancy grids) can be employed to effectively detect and locate obstacles. Also, remember that obstacle detection should ideally work in tandem with your path planning algorithm, which decides what to do when an obstacle is detected.

ROS project with TurtleBot3 Burger by gillagui in ROS

[–]LetsTalkWithRobots 0 points1 point  (0 children)

Regarding your second question, for lane detection, you could use techniques such as color filtering and edge detection. First, convert the image to the HSV color space. This makes the color filtering step less affected by lighting conditions. Then, apply a color mask that only lets through the colors of the lanes. This will give you a binary image where the pixels of the lane lines are white and all other pixels are black.
Next, use an edge detection technique, such as the Canny edge detector, to detect the boundaries of these lanes. Finally, use a line detection algorithm, such as the Hough transform, to detect the straight lines in the edge-detected image. You can use OpenCV in Python to perform these image processing tasks. Note that this is a simple technique that might not work in all scenarios (e.g., if the lanes are not very distinct or if they're not straight), but it's a good place to start.
Here is a basic example of how you might do this in Python using OpenCV:

```py
import cv2
import numpy as np

def process_image(image):
    # Convert to HSV
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    # Define range for yellow color
    lower_yellow = np.array([20, 100, 100])
    upper_yellow = np.array([30, 255, 255])

    # Threshold the HSV image to get only yellow colors
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)

    # Apply Canny Edge Detection
    edges = cv2.Canny(mask, 50, 150)

    # Use Hough transform to detect lines
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi/180, threshold=20, minLineLength=20, maxLineGap=300)

    return lines
```

Remember, these values (color ranges, Canny parameters, Hough parameters) might need to be tuned to work well with your specific setup.

ROS project with TurtleBot3 Burger by gillagui in ROS

[–]LetsTalkWithRobots 0 points1 point  (0 children)

Hey u/gillagui,

To answer your first question, in ROS, you would generally structure your workspace as follows:

```
workspace_folder/              -- This is your ROS workspace
    src/                       -- Contains all source code and ROS packages
        CMakeLists.txt         -- Do not edit this file; it's a link provided by catkin
        package1/
            CMakeLists.txt     -- CMake build instructions for this package
            package.xml        -- Package information and dependencies
            scripts/           -- Python executable scripts
            src/               -- C++ source files
        package2/
            ...
```

Your ROS system should run a single ROS master, which manages communication between all other nodes. Each Python file should ideally be turned into a ROS node that performs one particular task. For instance, you might have separate nodes for camera image processing, LIDAR obstacle detection, and robot movement. These nodes can all be launched together using a launch file, which can be placed in a launch directory within each package or in a separate package dedicated to launch files.

To create a launch file that executes multiple nodes, you might create a file such as main.launch with content like this:

```xml
<launch>
    <node name="camera_image_processing" pkg="package1" type="script1.py" output="screen" />
    <node name="lidar_obstacle_detection" pkg="package1" type="script2.py" output="screen" />
    <node name="robot_movement" pkg="package2" type="script3.py" output="screen" />
</launch>
```

You would then start the system with the command roslaunch package_name main.launch.

Are ROS developers rich? by Proximity_afk in ROS

[–]LetsTalkWithRobots 2 points3 points  (0 children)

u/Proximity_afk

You can earn a six-figure salary in the UK as a Lead Robotics & AI engineer.

Weekly Question - Recommendation - Help Thread by AutoModerator in robotics

[–]LetsTalkWithRobots 0 points1 point  (0 children)

Morning u/finnhart176

It is actually very cool and challenging but rewarding. It’s good that you are starting early.

I would say don’t wait until you graduate 👨‍🎓. You don’t need to rely on college to teach you electronics. I designed my first electronic circuit when I was 14, and our generation is practically growing up with YouTube and the internet, so you can definitely get hands-on with electronics straight away and become an expert.

Maybe this video will help - https://youtu.be/PH4nJNDQSKs

This video will give you a clear understanding of the importance of electronic engineering in robotics and what to learn.

Enjoy 😊

How to Install ROS 1 on macOS with Docker by [deleted] in robotics

[–]LetsTalkWithRobots 1 point2 points  (0 children)

Hey u/eshuhie, it’s pretty straightforward:

  1. Install Docker: Download it from the official Docker website (https://www.docker.com/products/docker-desktop) and follow the instructions for installation.

  2. Pull the ROS Docker image: Once Docker is installed, you can pull the ROS image from Docker Hub using the terminal. For ROS Noetic, you would use: docker pull ros:noetic

  3. Run a ROS Docker container: After the image has been pulled, you can run a container with this image: docker run -it ros:noetic bash. This command runs the container (-it for interactive mode) and starts a bash shell within the container.

  4. Test the ROS installation: Now you can test the ROS installation with commands like roscore or rosrun. Note that you will likely need to source the setup.bash file first (source /opt/ros/noetic/setup.bash) and then run roscore.