Manufacturing automation has had its day. Welcome to the age of manufacturing autonomy.


Written by

Robert Graham

Automation changed the face of industrial manufacturing.

It gave us robots that accurately and dependably perform tasks as long as we program them correctly. Production lines became cost-effective and operations more efficient…provided there were no deviations.

Car manufacturers typically use automation in a static environment in which robots perform the same activity 24/7 for months or years. But, as soon as there is a problem, automation fails, and a person needs to recalibrate or reprogram the robot. 

This changes with autonomy. It’s a state in which a robot or piece of equipment operates independently, without explicit instructions from a human.

Autonomy has incredible potential for manufacturers in high-mix, low-volume environments, or those that produce made-to-order products in a batch size of one. 

But to achieve this degree of flexibility on the fly, a robot must be capable of acting in unfamiliar situations without explicit instructions. In effect, the robots must be able to program—and reprogram—themselves.

Automated welding robot in high-mix metal fabrication

The limitations of static programs

The traditional method of programming an industrial robot is to manually generate a set of positions for the robot to move through using a teach pendant. The robot then executes this static program. This method is time-consuming and results in downtime whenever the robot needs to be reprogrammed.

Alternatively, many robot manufacturers and third-party software providers offer offline programming tools. Although these tools can reduce robot downtime and overall programming time, the resulting program is still static.

For a static program to control an industrial robot successfully, the environment in which the robot operates must be engineered to be as structured as possible. This approach works well for large batch automation in large factories, but it is expensive and any changes to the engineered environment require reprogramming.

So while this kind of automation can be useful in well-controlled environments, managing a highly variable manufacturing process requires something different.

The path toward autonomous robots

Advances in modern robotics and artificial intelligence (AI) have the potential to address the challenges that come with automation and to enable robots to work autonomously. Specifically, the following approaches are key to moving towards autonomy in manufacturing:

Using sensor data smartly

When the robot can sense its environment, it can handle less structured and less predictable situations. The sensors most commonly used to guide robot motions are tactile (touch) and vision sensors.

Tactile sensors give the robot the ability to “feel” the environment and are often used for robotic welding. These sensors work by making electrical contact between the welding nozzle or electrode and a pre-defined point on the part. The robot then stores this information to maintain the highest degree of accuracy during welds. The sensors allow the robot to follow a surface without relying on a highly accurate model of the workpiece.
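The touch-sensing routine described above can be sketched in a few lines. This is a minimal, simulated illustration, not production code: the electrode steps toward the part until "contact" is detected, and the measured offset from the nominal model is then applied to the weld path. All values and names here are illustrative.

```python
# Minimal sketch of tactile "touch sensing": the robot steps the welding
# electrode toward the part until electrical contact is detected, then
# records the contact position. The part surface location is simulated.

def touch_search(start, step, surface, max_steps=1000):
    """Step from `start` toward the part; return position of first contact."""
    for i in range(max_steps):
        pos = start + i * step
        if pos >= surface:          # contact closes the sensing circuit
            return pos
    raise RuntimeError("no contact within travel limit")

# Probe a reference point to estimate where the part actually sits,
# compensating for clamping error relative to the nominal model.
nominal_surface = 20.0
actual_surface = 20.7               # part is clamped 0.7 mm off nominal
contact = touch_search(start=0.0, step=0.1, surface=actual_surface)
offset = contact - nominal_surface
print(f"measured offset: {offset:.1f} mm")   # shift applied to the weld path
```

In a real cell the same search would be repeated at two or three reference points to recover both position and orientation of the seam.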

Vision sensors, on the other hand, allow the robot to “see” the environment. Most commonly, a 2D or 3D position is extracted from a camera image to control a pick-and-place motion.
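A toy version of that pick-and-place pipeline: segment the object in a binary camera image, take its centroid, and map the pixel coordinate into the robot frame. The calibration values (`mm_per_px`, `origin_mm`) are hypothetical placeholders for a real hand-eye calibration.

```python
# Minimal sketch of vision-guided picking: find the centroid of an object in
# a binary camera image, then map the pixel coordinate to a robot coordinate
# using a known pixel-to-millimetre scale. All values here are illustrative.
import numpy as np

def object_centroid(mask):
    """Return (row, col) centroid of nonzero pixels in a binary image."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def pixel_to_world(row, col, mm_per_px=0.5, origin_mm=(100.0, 50.0)):
    """Map image coordinates to the robot frame (hypothetical calibration)."""
    return origin_mm[0] + row * mm_per_px, origin_mm[1] + col * mm_per_px

# A fake 8x8 camera image with a 2x2 object at rows 2-3, cols 4-5.
img = np.zeros((8, 8))
img[2:4, 4:6] = 1
r, c = object_centroid(img)               # (2.5, 4.5)
x, y = pixel_to_world(r, c)
print(f"pick target: x={x:.2f} mm, y={y:.2f} mm")
```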

Sensor data offers opportunities for robots to handle more complex processes. Recent developments in machine learning, notably convolutional neural nets, enable the extraction of more advanced features from sensor data to guide less structured tasks. In Finland, for example, AI-powered robots are using neural nets to identify different types of garbage and sort them accordingly.

Programming through digital geometry

Some offline programming tools allow users to generate robot toolpaths directly from digital CAD data. This is a great idea, but most such tools still require significant human input, making them too expensive in man-hours for small-batch automation.

For example, when programming a robot to polish a complex geometric surface, offline simulation is an invaluable tool, but the engineering work required makes this approach unfeasible for small batches that are typical in 3D printing.
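The core idea of generating a toolpath directly from digital geometry can be sketched as follows. Here a toy height function stands in for the real CAD model; an actual system would sample the surface mesh and add tool orientation, approach moves, and collision checks.

```python
# Hedged sketch of toolpath generation from digital geometry: a zigzag
# raster over a surface sampled from a model (here, a toy height function
# standing in for real CAD data). Each waypoint is (x, y, z).
import numpy as np

def raster_toolpath(height, xs, ys):
    """Zigzag over the XY grid, reading Z from the surface model."""
    path = []
    for j, y in enumerate(ys):
        row = xs if j % 2 == 0 else xs[::-1]   # reverse every other pass
        for x in row:
            path.append((float(x), float(y), float(height(x, y))))
    return path

# Toy "CAD surface": a gentle dome.
dome = lambda x, y: 5.0 - 0.1 * (x**2 + y**2)
xs = np.linspace(-1.0, 1.0, 5)
ys = np.linspace(-1.0, 1.0, 3)
path = raster_toolpath(dome, xs, ys)
print(f"{len(path)} waypoints, first: {path[0]}")
```

The point is that once the geometry is digital, the path is a function of the model: a new part shape means a new path, with no reprogramming by hand.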

Luckily, the technological frameworks to overcome these programming limitations are already available. First, for a specific robotic process, the trial-and-error aspects of offline programming can be largely automated by leveraging modern techniques in optimization, AI, and computing.

Second, sensor data can be used to eliminate the need for manual fine-tuning. To compensate for the geometric deviation between the real part and its digital model, a 3D scan can be used to adjust the geometric model before planning a trajectory, while a force/torque or vision sensor can actively compensate for any remaining deviations in real time.
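The first of those two steps might look like this in its simplest form: probe a few reference points on the real part, estimate the offset from their nominal CAD positions, and shift the planned toolpath accordingly. A real system would solve for rotation as well, typically via point-cloud registration; this toy version estimates translation only.

```python
# Sketch of scan-based deviation compensation: compare measured reference
# points against their nominal CAD positions, estimate the average
# translation, and shift the planned toolpath onto the real part.
# Translation-only; a full system would also recover rotation.
import numpy as np

def estimate_offset(nominal_pts, measured_pts):
    """Average translation between matched nominal and measured points."""
    return (np.asarray(measured_pts) - np.asarray(nominal_pts)).mean(axis=0)

nominal = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]]
measured = [[0.4, -0.2, 0.1], [10.4, -0.2, 0.1], [0.4, 9.8, 0.1]]
offset = estimate_offset(nominal, measured)        # ~ [0.4, -0.2, 0.1]

toolpath = np.array([[2.0, 2.0, 0.0], [8.0, 2.0, 0.0]])
corrected = toolpath + offset      # shift the plan onto the real part
print(corrected)
```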

When the robot is programmed directly from digital geometry using these techniques, product variability can be handled autonomously, making processes like robotic polishing of unique 3D printed parts economically feasible.

Capturing process knowledge

When traditional robot programmers automate a process by generating a static sequence of robot positions, all process information is lost. The program only represents a geometric toolpath, not the complex interaction between the robot and a workpiece.

For example, when a robot is programmed to weld two thin pieces of steel together, the way heat is transferred away strongly influences how the weld behaves. Every workpiece is clamped slightly differently and the resulting change in contact surface can change the thermal properties significantly. As such, a toolpath that generates a perfect weld on one workpiece might result in a big hole in another.

Typically, for a robot to adapt successfully to any changes in a tool or workpiece (be it geometry, hardness, or, in the case of welding, thermal conductivity), these scenarios must be explicitly programmed. Sensor feedback loops can be programmed to handle some of these changes, but without a proper simulation model, the robot is at the mercy of the programmer’s ability to come up with a good enough heuristic.

However, when robot programs are generated directly from up-to-date digital models by accurately simulating the expected result, these programs are inherently more robust.

So, in our welding example above, by monitoring the part with a thermal camera we can update our thermal model, predict how the part will behave and adjust the torch velocity or intensity accordingly to ensure the desired quality.
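That feedback loop can be reduced to a few lines. This is a deliberately simplified proportional controller with made-up gains and limits; a real welding controller would use a calibrated thermal model rather than a single scalar gain.

```python
# Sketch of the thermal feedback idea: a proportional controller that speeds
# the torch up when the thermal camera reads hotter than target and slows it
# down when cooler. Readings are simulated; gains and limits are illustrative.

def adjust_velocity(v, temp_measured, temp_target, gain=0.01,
                    v_min=2.0, v_max=20.0):
    """Return a new torch velocity (mm/s) from the temperature error."""
    error = temp_measured - temp_target       # positive means too hot
    v_new = v + gain * error                  # hotter -> move faster
    return max(v_min, min(v_max, v_new))      # clamp to safe limits

v = 8.0
for temp in [1480.0, 1530.0, 1555.0]:         # simulated camera readings (C)
    v = adjust_velocity(v, temp, temp_target=1500.0)
    print(f"reading {temp:.0f} C -> velocity {v:.2f} mm/s")
```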

The Manufacturing OS for robotic welding enables autonomous end-to-end welding for high-mix, low-volume production.

Learning from experience

As the complexity of sensor data, models, and processes increases, building accurate models and controllers from first principles becomes exponentially more complicated. 

Recent developments in reinforcement learning show promising results in bypassing this complexity and having the robot “learn” its own model. By executing millions of motions without prior knowledge of the process, the robot learns the task from its successes and failures.
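The learn-from-reward loop can be illustrated with tabular Q-learning on a toy task: move along five cells to reach a goal. Real robotic reinforcement learning operates on continuous states and torques with far more sophisticated algorithms, but the trial-and-error structure is the same in spirit.

```python
# Toy illustration of reinforcement learning: tabular Q-learning on a 1-D
# positioning task. The robot starts at cell 0 and must learn, purely from
# reward, that moving right reaches the goal at cell 4.
import random

random.seed(0)
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]        # Q[state][action]; 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                      # episodes of trial and error
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else -0.1   # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N)]
print("learned policy (1 = move right):", policy)
```

After enough episodes the learned policy moves right from every non-goal state, with no model of the task ever written by a programmer.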

This approach can be applied at different levels of the program and, in extreme cases, allows end-to-end learning of the entire task (from sensors to actuators; from pixels to torques). For example, University of California, Berkeley’s Sensorimotor Deep Learning research group taught a robot to screw the lid on a bottle by learning a model from sensor pixels to robot joint torques.

The biggest hurdle in robotic reinforcement learning today is that it’s impractical for a physical robot to fail a million times before it becomes usable. Therefore, researchers are trying to use simulated robots to do the learning and then transfer those models onto physical robots to do the actual work. This approach allowed OpenAI to teach a robot hand how to rotate a block.

And beyond…

These approaches will allow industrial manufacturing robots to move away from simply executing static programs, and make the transition from automation to autonomy. Robots will be able to easily handle less structured environments and more variability in processes and products.

Autonomous robots are one facet of a digitally connected factory of the future, where all processes use data smartly, capture process knowledge and feed it to other systems. This is what Oqton is working towards with the Manufacturing Operating System (OS).

And that vision is slowly but surely becoming a reality. With advanced AI techniques, the Manufacturing OS has increased dental lab productivity, and significantly reduced robotic welding programming time. As we continue to collaborate closely with partners from other industries, we are paving the way to autonomous manufacturing, one step at a time.

Learn more about the Oqton Manufacturing OS as well as our robotic welding software.