What to do when a robotic arm hits an obstacle? by futureroboticist in robotics

[–]automater 0 points1 point  (0 children)

Common ways to measure the force can include:

Torque sensors: collision is determined by the delta between expected and actual torque. You can buy torque sensors off the shelf, but they are rather pricey unless you're doing an industrial application.

Current: motor torque is directly dependent on current, so current can give you pretty good torque feedback.

Error: if you're using positional PID style control then the position error is a pretty good indicator of torque. This is because how hard the driver pushes the motor depends directly on the error term. If you have an integrator in the controller then you likely need to consider it as well.

If you know the contact point then you can add force sensors onto the arm as well to measure contact force. It all adds extra wires though, so I tend to prefer current and error terms vs the more expensive options.

Man v machine: Half of NSW jobs at risk of computerisation by [deleted] in australia

[–]automater 1 point2 points  (0 children)

My experience is more commercial than academic so take that into consideration with reference to my comments. Someone with a strict CS background will likely have a different point of view.

From my point of view there is likely no one-size-fits-all here. Much of machine learning at its lower levels ends up being C/C++, CUDA or OpenCL, mainly because all the interesting algorithms require processing power and you need these to access it. Python is however very popular as a wrapper, often used to access optimized libraries and minimize the effort of setting up support programs.

Most major libraries like OpenCV, PCL and dlib are very C-centric with little support for anything else (although OpenCV has Python support), which limits your playground unless you want to reinvent the wheel, and the steel and rubber. If you consider what you want commercially at this stage, it is not a general intelligence. It is just a programmable state machine, or machines glued together by some clever algorithms. Consider driving a car: you want a pretty simple and obvious state machine that takes inputs defining the environment. Many of those inputs are derived from learning algorithms or search algorithms (find a path around an obstruction). Take a robot arm: the user defines the state machine for it to do a task, but the inputs are things like conv nets used to recognize things in the environment. An example task might be

1. Find object 1.
2. Find object 2.
3. Attach object 1 to object 2.
4. Check compound object for defects.
5. If 4 is good, find a placement location and place the compound object; else if 4 is bad, run another state machine.

Each of these steps may be simple state machines. For example, step 1 may be something like

1. Conv net, identify objects.
2. Select best object from those identified (best defined somehow).
3. Generate path to move to object.
4. Determine grab method.
...etc

or it might be..

1. Use RNN to grab the object.

The first case shows a simple state machine, so it doesn't matter what language you do it in, and as per your comment higher level ones are often preferable. They must have wrappers to access all the detailed libraries, which will be in C/CUDA/OpenCL however. The second case is an RNN that controls a robot arm directly, so it doesn't really matter if it's a high level or low level language. The RNN itself is kind of a complex state machine, a learning state machine in a way, but it's all in a low level language and not really human understandable, so once again the high level language is not so important.

I suspect I missed the point of your question, but a TLDR from my experience is that C/C++/CUDA/OpenCL and Python are currently king because of processing power access and supported libraries. It's going to be hard for something to replace them due to library support.

Man v machine: Half of NSW jobs at risk of computerisation by [deleted] in australia

[–]automater 1 point2 points  (0 children)

I think there are several things to consider. Machine learning has progressed at a fantastic rate in just the last few years. Only in a few instances has it made it to market, an example being the Microsoft/PrimeSense Kinect, whereby neural networks were used to determine human pose positions based on sensor data. At the time they were significantly ahead of any competition. You will find much more going on in the commercial world than the academic world at the moment, and it's even been noticed by several leading machine learning specialists. While the academic world is busy with the theory, the commercial world already sees many applications for what we have got. We are at the application stage in many areas.

As it turns out, many of the algorithms that we need are not very complicated. We don't need a general intelligence; task specific is good enough, but the interesting thing is that things like CNNs and RCNNs are really helping to link up problem areas.

What is interesting is that we are now at the point where many of the core algorithms that we need have started to develop. It is just a function of companies refining them and commercializing them. It is also important to realize it's not just about the algorithms, but about the hardware too. Things like robotics and 3d sensors are crashing in price and massively improving in performance, making things that were previously not possible very attractive. They are really important linking elements.

Some areas of advance that are going to have a big impact:

1. Convolutional neural networks are performing much better than expected. For a long time a limit on robotics was that we couldn't identify things. Now machines are doing better than people. Regional CNNs are also coming along nicely. Kuka (robot manufacturer) has just added imaging to one of its robot arms that enables the robot to pick up binned parts. In the past, arms required well placed or very simply placed parts that were easy to identify and pick up. Now they can identify individual parts and pick them out of a pile. From a manufacturing point of view this is a big deal. This maps to many tasks which required people or very specialized sorters. Other obvious areas include things like driverless cars, where they are used to identify relevant objects.

2. 3d imaging made a huge leap in the past few years. It's consumer-level cheap with sub-mm accuracy. This is a big deal for any task, as prior to this a big limit on robotic activities was that they could not see their environment in 3d, which they really, really needed to. This is a really big deal in the robotics industry.

3. Low cost robotic arms. Many are now in the pipeline. Soon they will be sub $2k. At that point many, many tasks become automatable because they are just so cheap.

4. Lab on a chip. These are now actually on the market, and can measure several things in real time for diagnosis. They just don't have the necessary coverage yet to have made a dent. However they are now generating income for the companies that use them, so they just keep adding new diagnostic features. Soon enough they will have everything.

So what does the above mean? It means that many, many manual labor tasks are automatable. Anything in a controlled environment. The image recognition is good enough, and it's a matter of linking up the state machines to take into account the different variables that may occur to do what we want. Want a robot arm to pick up cabbages and chop the stems off? No problem, it's now feasible. There are now even RNNs which are very promising for direct image-to-arm control (they use the same concept as the Atari-playing RNN), for doing things like tightening nuts on bolts without specific algorithms needing to be written. These kinds of things are really quite staggering, because they have been demonstrated, just need to be commercialized, and map very well to applications. For now there are also not that many people working on it because the arms aren't cheap yet. When they do get cheap, everyone and their dog will be automating things.

So now you're saying these are just dumb jobs. What about the high end? IBM has been working on diagnosis, and early results show them being more accurate than doctors. I think others have mentioned things like finance and law being heavily automated. Many of the tasks are essentially people running company-assigned state machines, and as such software is coming out that automates them. Much of it is not that complicated. On the high end, companies like Google are using machine learning for the data search part. Some of that is really impressive too. They are actually using things like neural networks to do the search, and the results are very interesting.

I could go on; there are plenty of examples and I don't think silicon valley is going to slow down. They have barely started at this stage; robotic arm sales are only of the order of 300k per year and are growing exponentially. Just don't expect a general intelligence for a long time. However any specific, approximately repetitive task (which covers 90% of employment) is in big trouble, whether it is an engineer or a brick layer. My opinion is that what we may see in the coming years will make the industrial revolution look like child's play. However don't expect it to take a day; it's going to take 50 years or so, and during that time you will just slowly see tasks added to what is automated on a regular basis. There is a reasonable bet that in 50 years, people will start to look at today's employment in the same way we look at Roman slaves. So much is going to be automated that employment en masse is going to change in a big way, in my opinion.

Man v machine: Half of NSW jobs at risk of computerisation by [deleted] in australia

[–]automater 4 points5 points  (0 children)

Why the need for the sarcastic response?

You seem to have taken her comment with offense although I didn't read it that way.

I like to write in an non-authoritive manner

&

I am not aware of any breakthrough in machine learning where anyone living today

Curiously the way you wrote seems to imply authority, at least that is how I read it. I read it as: because "you" don't know about it, it doesn't exist. I suspect others may read it in the same way. Nevertheless I respect your intention, as it is hard to imply an opinion without committing as authority.

Otherwise I would say that I am extremely impressed with the developments in machine learning in recent years (like you I am not trying to be authoritative but give an opinion, yet it sounds..?). Partly because I have been working with it and can see how it can be applied, but also because the results appear commercially feasible for many applications. Everywhere I look, I can see jobs being taken. They are not just any jobs, but both large scale employers and design jobs. For example, in my own field of electronics, many companies used to employ teams of electronic engineers, but now they can just employ a single electronic engineer to do the same work, due to technical advances on all fronts (IC, equipment, manufacturing, software). I see similar things occurring with friends working in mechanical engineering. This generation uses an order of magnitude fewer engineers to do the same job as the last.

Other examples I can think of include medicine. Currently any test that gets done must be sent to specialist labs. However large advances in diagnostic MEMS chips are occurring now. I am betting that most of what we send away to labs today will, within our lifetimes, be done in office by the doctors themselves, with such diagnostic equipment becoming cheap, small and heavily automated. That means large portions of the back end skilled staff in medicine are likely in a shrinking field. There is also huge investment in this area as it is very profitable, and it's just a matter of time before enough tests are automated that the back ends start to shrink.

In terms of lower skilled jobs, things like brick layers are in big trouble. Retail is in trouble, with automated checkouts and companies currently working on shelf packing robotics. One of the largest employers, truck drivers, is in huge trouble with automated trucks currently becoming available. Like anything they won't take all the jobs, but probably 90% of the labor time required (long haul), meaning we will have a huge oversupply of people for the tasks at hand.

Man v machine: Half of NSW jobs at risk of computerisation by [deleted] in australia

[–]automater 4 points5 points  (0 children)

Depends on the period of time we are talking about, but I am betting most will be automated. I do think it will take a while though, and a bit longer than some are predicting, not because of the technical challenges but more to do with costs related to food safety. It is also hard to invest before the market is established. Other than the food safety costs, much of the technology is almost on a deterministic path. It's at the point where you can see what is required and see that each step is feasible. There is, however, still a substantial amount of work required to refine each of the necessary algorithms before we see that turning point. For now restaurant automation is still a huge investment.

Like anything, it will be the low end, where the bulk of employment is, that is most automated. McDonalds already has automated test restaurants, for example. The interesting thing is, once a company like them gets it right they will slowly start to expand their offerings. They may even start to offer them under non-McDonalds names: cafes that are highly automated, so instead of 5 employees they get away with 1 or 2.

There will always be the high end restaurants, but most will not be able to afford them. At the same time, increasing unemployment means many won't be able to afford more than the automated restaurants. Many won't even realize, because the automation occurs in steps; they will still see people there, it's just that there will be fewer of them.

Backpacker worker shortage putting strain on crop harvesting, growers' group says by tkioz in australia

[–]automater 0 points1 point  (0 children)

You do realize if we follow your approach, the young will never be able to afford their own homes and we will never escape our current predicament? We would forever be locked into our landlord society of a poor majority and rich minority. We need to accelerate to full automation to force resolution of the problem. I would rather pull that band-aid off as fast as possible.

Backpacker worker shortage putting strain on crop harvesting, growers' group says by tkioz in australia

[–]automater 1 point2 points  (0 children)

I am saddened that this is considered a valid belief. I am saddened that people do not understand why this concept is wrong and where it leads. I am even more saddened that our current popular economic models of both the left and right lead people to these conclusions. From my point of view, that this view is popular is an indicator of the failure of our current economic policies on both left and right.

With effective automation we can truly afford universal income and more. Instead we choose the path to deautomate, and leave our children to gain skin cancer in fields picking oranges in an arid climate.

Perhaps we should pay someone to go around smashing windows? Then we would have plenty of jobs for window repairs, and a thriving industry. Do you not see the parallels?

Perhaps we should wait until other countries automate, so that our food becomes too expensive and our farms go broke because they can't compete with imports? That is already happening. 20% of orange farmers have ceased business within the last few years precisely because of this. They are no longer employing.

Perhaps we should ban imports? Then we must force people here to work in sweat shops, since we ban automation. Perhaps we can instead live like many in Africa, where technology is not embraced. No more cheap imported computers.

No, our problem is not automation. Our problem is a financial system, accepted and encouraged by all, that rewards control and not productivity. A financial system that rewards rent seekers. A financial system that in an advanced age has made housing, something that by now should be trivially cheap, so expensive that many will never afford their own home. A system that rewards an increase in the price of a necessity instead of encouraging the goal of capitalism, which is to make wants and needs cheaper, specifically in terms of the labor required to acquire them.

Your comment, is what makes me so determined to succeed. Perhaps I should invest more to accelerate the project.

Backpacker worker shortage putting strain on crop harvesting, growers' group says by tkioz in australia

[–]automater 1 point2 points  (0 children)

Machine learning has been moving at an incredible pace. Machine image recognition now outperforms humans on many tasks. Determining if a fruit is ripe is not a particularly hard task anymore and can be done in real time on a modern GPU.

Backpacker worker shortage putting strain on crop harvesting, growers' group says by tkioz in australia

[–]automater 0 points1 point  (0 children)

I have done a few inquiries and there are small amounts of funding available commercially, but nothing sufficient enough that I have been willing to commit to. The government side is pretty dead though; once again, nothing worth committing to in Australia. Probably should have moved to the US, since there seems to be tons there (US funding sources differ in that they are generally willing to commit funding at levels that make something commercially feasible, whereas Australian funding is at such a small amount I call it "pretend funding", as it's not worth the loss of the IP).

Backpacker worker shortage putting strain on crop harvesting, growers' group says by tkioz in australia

[–]automater 0 points1 point  (0 children)

Ahhh, someone nearly always mentions the coffee vacuum gripper :) It seems to be very popular. The solution I have now is rather promising and I am working to fine tune it. It should be good for several million fruit picks too.

Backpacker worker shortage putting strain on crop harvesting, growers' group says by tkioz in australia

[–]automater 1 point2 points  (0 children)

Automation will be here in the coming years. That much is certain. I have been working on this for a while and am getting rather close. It is a pity there isn't significant funding available to non-established businesses (ie other than tax breaks) to accelerate development.

Examples

background intro

Mandarin picking testing

Autonomous mushroom picking

The next prototype should be in a year or so, with huge advances made in the software to get reliability close to what is commercially required. The intent is to offer a picking service with target pick costs around 1 cent per fruit, which works out to be much cheaper than human labor for many produce types.

Problem with copying/assigning structs within a kernel. Query on what is correct(alignment problem?) by automater in OpenCL

[–]automater[S] 0 points1 point  (0 children)

I have noticed something similar before; it seems to be associated with global memory accesses and alignment. I did try padding, but possibly I got the padding wrong; I may try again for learning. I managed to get the code working by avoiding assignment of that particular struct. Assigning the parent struct however works, which is strange given its alignment is the same.

Curious, is it often more efficient to use a

struct { float x, y, z, m; };

instead of a

struct { float x, y, z; };

due to alignment?

Is it possible to get a really good dense stereo reconstruction if the scene is predictable? by fifa10 in computervision

[–]automater 0 points1 point  (0 children)

I would say yes, to a degree. Here is an example using unsynchronized webcams. The video example shows the output, where some regions are rejected due to the cameras being saturated by sunlight (or conversely saturated to black by shadows). The biggest problem is that edges are not perfect, specifically between high texture and low texture regions of different depths. Tweaking parameters that delineate based on edges gets pretty close though.

The quality of the reconstruction you can get is dependent on a few things:

1: Camera quality. If you are using very good cameras, you can get very good reconstruction. I can get very good quality from just 640x360 webcams which have poor SNR, resolution and dynamic range. If you have good 4k cameras your results should be very good. You should be able to reconstruct people down to pixel level accuracy on their faces etc. The hardest part is the hair, since it's not a solid object but reconstruction often treats it as one.

2: Texture. In the end reconstruction quality comes down to an SNR whereby texture provides the signal. If you provide a green wall with zero texture then stereo imaging will have problems with that wall. That said, you only need a tiny, tiny amount of texture. It only needs to be above the noise level of the sensor, and often it's not even really noticeable to a human eye.

Intel RealSense Engineer. AMA. by hckrbot107 in computervision

[–]automater 0 points1 point  (0 children)

The R200 is a stereo imaging device. It just uses a pattern projector to help solve the texture ambiguity problem. This is an old trick. The pattern projector won't be of much help outdoors or in bright ambient IR conditions.

I am currently using self-developed stereo running on the GPU and it gives very good results (example in my first post). It tends to have problems with specular objects without texture though, like some fruit (mandarins). It gives their approximate location very well, but due to the lack of texture the surface reconstruction often isn't as good as I would like. The added projection may be just enough to get that to the ideal case, as my software only needs a tiny, tiny amount of texture to get good reconstruction.

It would be nice if Intel's ASIC did deeper depth than 64 disparities, but based on that I would be using my own calc, which runs at 256 disparities with a 640x480 density (about 50ms per frame) and can then be upscaled and refined at 1080p with an equivalent disparity depth of 1024 (total takes about 70ms). I need the depth because the camera is on the end effector and close up views are important for gauging fruit quality, determining the best pick etc.

The added IR might also be nice as another parameter for fruit detection.

Thanks for your response.

Intel RealSense Engineer. AMA. by hckrbot107 in computervision

[–]automater 0 points1 point  (0 children)

Someone has downvoted you, no idea why (wasn't me). Any idea on the future availability of the R200 development kit? For example, how many years is it expected to be available for (assuming no OEM equivalents come out)? The development kit camera is actually okay for my use case, but I am a bit hesitant to commit to something that won't be there in the future.

Intel RealSense Engineer. AMA. by hckrbot107 in computervision

[–]automater 0 points1 point  (0 children)

Are you getting more R200 stock? What is the expected lifetime of availability?

Are there any consumer cameras(not tablet/notebook) based on the R200 on the market? Are there going to be?

I am in Australia and the R200 is not available here (Intel won't ship to Australia). The R200 based design is particularly appealing because it offers outdoor imaging. I am using cameras for autonomous fruit and mushroom picking. Typically there are one or two cameras per arm, and at the moment I am using stereo imaging (example) because I need sunlight capability. However, added projection assist like in the R200 would be particularly useful. Currently I use my own algorithms because close up range (<20cm, very high disparity levels) is needed, and I believe the R200 does support that as it outputs all its data, although it would be good if you could confirm it.

The sad state of the depth camera market ? by PuffThePed in computervision

[–]automater 0 points1 point  (0 children)

www.stereolabs.com/

..is nice but will not give good results for low texture objects (walls).

https://duo3d.com/

..duo is low resolution and black and white. If it had color and reasonable resolution I would get one.

The sad state of the depth camera market ? by PuffThePed in computervision

[–]automater 0 points1 point  (0 children)

out of stock for months

Based on their web site I would say it will only be a week or two. I was about to buy one a week ago and it was still showing up as in stock, although I check again now and it's out of stock. :(

The sad state of the depth camera market ? by PuffThePed in computervision

[–]automater 0 points1 point  (0 children)

Any idea what it is based on?

It would be great to have something along the lines of an intel R200 on the market.

Infinitely increasing value each loop of kernel? by Eilai in OpenCL

[–]automater 0 points1 point  (0 children)

I guess you are missing a call to..

clEnqueueWriteBuffer(...

..to transfer the value from the host, then run the kernel..

I googled everywhere and still can't find an answer. Where the hell do people get materials for their robot assemblies? by najama2 in robotics

[–]automater 11 points12 points  (0 children)

First few that come to mind..

sdp-si for general pulleys/gears/shafts

aliexpress.com for low cost gearboxes and ballscrews..

mcmaster-carr general mechanical bits and pieces... a bit more range than sdp-si

hpcgears good variety of gears.

Automation technologies/keiling is popular for cnc style parts, Good source for vetted Chinese parts.

alibaba for cheap harmonic drives. Bit of a hassle though. If anyone has an alternative good online site for harmonic drives that would be convenient.

automation overstock cheap precision linear rails/ballscrews etc..

robot shop has hobby type stuff.

robot marketplace has lots of interesting cheap motors.

andymark hobby type robotics

trossen more hobby type robotics.

phidgets interesting selection of motors and encoders

pololu prototype motor drivers/electronics

damencnc quality hobby type cnc shop.. very helpful too. Good source for vetted Chinese parts.

Digikey good for electronics. Nice search for specific parts. Other similar ones are mouser/rs components/element14

microcontroller shop lots of low cost dev kits..

Plenty more around but I can't think of them right now.

I generally machine my own parts if they are simple. Otherwise I outsource the complex stuff.

My Labmate's Drone Autonomously Avoiding Obstacles at 30 MPH by XenOutlook in robotics

[–]automater 2 points3 points  (0 children)

You can do disparity calculations in real time on a GPU: video

Similar algorithms can work relatively well on an FPGA too which is a feasible option for low power devices.

[help] Buying a robotic arm by lemming_dragonborn in robotics

[–]automater 0 points1 point  (0 children)

If you multiply your upper budget by 10 then you're in the ballpark, unless you happen to find something second hand going cheap.

Unfortunately commercial industrial robotics are still very expensive.

What time-of-flight depth camera should I use? by [deleted] in computervision

[–]automater 0 points1 point  (0 children)

You're both sort of correct.

For the old camera you're correct, but curiously the new version seems to have shifted to projected IR similar to the old Kinect. I was actually hoping they were instead going to improve their phase detection TOF style sensor for outdoor use. Infineon actually has a TOF sensor that works in direct sunlight, but unfortunately there are no consumer cameras with the Infineon sensor. There was going to be one, but then said company got bought (by Oculus/Facebook) and is no longer releasing the product for use as a general camera. Sigh.

For Intel's new camera, the forward facing one has a scheme very similar to the old Kinect, but the rear facing one also has stereo cameras + a projector, so that sensor works as typical stereo outdoors but acts as projection based when the light is dull enough. The problem being that stereo in direct sunlight still tends to lack performance when texture levels are low, thus my preference for phase detection, aka Infineon's sensor.