Robot wars: are we right to be afraid of automation?

Robots: they’re cold, calculating killing machines. Or helpful, durable workers – it really depends on who you ask. Our opinions of robotics and automation are coloured by science fiction, and the doom-laden proclamations of robots taking all our jobs have caused us to be even warier. But do we really have anything to fear from AI and automation on health & safety grounds?

Machine safety

Many people have already been working with robots for decades. In the automotive industry, robots have been building and painting cars since the 1960s. Indeed, across manufacturing and food production, robots and other machinery already handle much of the most tedious and strenuous work, while humans take on oversight, quality control, tasks requiring greater manual dexterity, and the programming of the machines themselves. People and robots work harmoniously, side by side.

Of course, injuries and even fatalities in these lines of work are not unheard of. Industrial hardware is often capable of imparting an immense amount of force, and carelessness or the odd malfunction can be a threat to human life. But this has been true since the advent of machinery, going all the way back to the first mills and foundries. Our appreciation of risk and care for human safety has improved dramatically over time, and machinery now includes all manner of guards and stopping mechanisms, with training to mitigate the risks.

As our manufacturing processes have become more efficient, our goals for improving them have become loftier, and the products they produce have become more complex. In many industries, robotics are now heralded as the next evolutionary step from ‘dumb’ machinery. These new machines could be adaptable, relocatable, and possess some form of intelligence. Not the kind of intelligence we have, but the ability to at least memorise tasks, learn from mistakes and other inputs, and apply these to improving performance.

Robots in the workplace

These robots could, in theory, be dangerous. One of the benefits of automation is that robots can be far stronger than the average human, and thus able to alleviate the burden of strenuous tasks, such as repetitive lifting or other actions. Send one of these metal goliaths hurtling around a warehouse floor, and there is a chance of injuring human workers. The potential of AI to circumvent human control has been enough to worry Elon Musk and the late Stephen Hawking, both of whom called AI one of the greatest dangers to mankind.

The abilities of true artificial intelligence are far removed from today’s robots, however, which lack the processing power or smarts to rebel. And is a rogue robot worker really so different from a forklift driver who hasn’t had his morning coffee? It seems that what we worry about most with automation and AI is the lack of control, rather than the capacity to do harm. We look at HAL and Skynet, and we’re afraid that some human error or oversight in programming will prevent a robot from reacting properly, or that a broken sensor will leave it blind to the person in front of it. We believe that robots are fallible and uncontrollable in a way that human beings aren’t.

Headlines about killer robots have already begun to worry people, such as the death of a man at a Volkswagen plant in 2015, or the deaths of nine soldiers at the hands of a semi-autonomous cannon. Read behind the headlines, however, and the stories tend to paint a different picture. The unfortunate incident at the Volkswagen plant, where a man was grabbed by a robot arm, was due to human error; safeguards were removed, and the room wasn’t checked to see if a person was in it. The robotic gun incident meanwhile has been deemed a mechanical error, more likely a result of a jammed shell than a malevolent AI.

 

Future perfect

Having said all this, caution is warranted when developing any new technology. Safeguards should not be lowered to fast track automation, as appears to have been the case in Arizona, where an autonomous Uber vehicle killed a pedestrian in March. Initial reports suggest the vehicle’s in-built automatic braking system was turned off in favour of Uber’s, which appears to have been unsuccessful. This is not the first death resulting from ‘driverless’ cars, but it does seem to be the first where the vehicle did not warn the driver before the incident.

Developed safely and tested rigorously, there is no reason that automation cannot enhance the enforcement of health & safety policies as well as complying with them. We have already seen things like drones being used for health & safety assessments; in future these could be autonomous. Robots too already have sensors to prevent collisions, and can be mechanically limited to operate at low speeds. In many jobs, robots could even assist human employees with their tasks.
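To illustrate the kind of safeguard involved, here is a minimal sketch of a proximity-based speed limiter of the sort collision-avoidance sensors enable. The function name and thresholds are invented for illustration; real systems follow vendor-specific and standards-defined limits.

```python
def safe_speed(distance_m: float,
               max_speed: float = 1.0,   # m/s, the robot's permitted top speed
               stop_at: float = 0.5,     # halt entirely inside this distance
               full_speed_at: float = 2.0) -> float:
    """Scale speed down linearly as an obstacle gets closer.

    Returns 0.0 inside the stop zone, max_speed beyond the clear zone,
    and a linear ramp in between - a simple form of the 'speed and
    separation monitoring' idea used in collaborative robotics.
    """
    if distance_m <= stop_at:
        return 0.0
    if distance_m >= full_speed_at:
        return max_speed
    return max_speed * (distance_m - stop_at) / (full_speed_at - stop_at)
```

Run continuously against a range sensor, logic like this lets a robot slow smoothly as a person approaches rather than relying on a hard emergency stop alone.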

These co-operative robots, or ‘cobots’, have padded arms capable of picking up, manipulating and sorting objects. This could be ideal for production lines and other menial tasks, although UK regulations are yet to catch up with them. Current legislation requires a guard around all autonomous machinery, something that wouldn’t be possible with a cobot. The ISO/TS 15066 specification has moved to accommodate this, and is an admirable first step towards an international standard.

The fear that robots may take people’s jobs – and some worries about unscrupulous businesses – is in many cases obscuring the real safety and usefulness of robotics and AI. The reality is that for dangerous tasks, and many repetitive or strenuous manual tasks, robots may be better suited than humans. After all, they will never get tired or lose concentration, and they can carry out the same task in exactly the same way, where human hands might cause imperfections.

Robotics technology is still developing, but in many ways, the future is already here. Mobile computing is already advanced enough to support complex algorithms, which allow robots to receive all sorts of information about their environment. Arrays of sensors meanwhile allow them to use it intelligently, spotting risks and taking preventative action. The remaining barrier is to make these systems as close to infallible as possible, and to ensure they react at least as fast as a human – ideally much faster.

The fear will persist that, when something does go wrong, the robot will not be able to react or be turned off fast enough. But this is still a step up from machinery that has no such safeguards, and no worse than a reckless individual, who is far more likely to make the same mistakes (turning around quickly with an implement in hand, for example). Once again, it is anxiety over a lack of control, rather than any real difference in risk, that worries people.

This is a hang-up that we will have to overcome as robotics are integrated into more jobs, and become a tool that people use to assist them in their work. Yet equally, health & safety should remain at the absolute core of developments in robotics, with the protection of individuals being the highest priority. The development of robots as a solution to efficiency deficits – through power and speed – should never be prioritised to the point that risks are taken with the safety of operators or other individuals nearby.

 
