Companies developing unmanned vessels must overcome multiple hurdles obstructing the path to safe and responsible operations
Most accidents at sea are blamed on human error, which is why autonomous ships, by removing seafarers from the decision-making process, are widely touted as a way to make the shipping industry significantly safer.
However, no matter how robust its detection sensors, how smart its processors, or how advanced its navigation technology, a ship is ultimately only as safe as the code that drives it. Developers of autonomous ships will have to write behaviours to handle the full range of risky and dangerous scenarios that seafarers deal with on a daily basis.
These smart ships will have to decide which route to take to avoid a hurricane and deal with fires that could damage circuitry, not to mention the risk that their sensors may not be functioning properly. All manner of worst-case scenarios must be considered, and the correct responses coded into the ships' operational systems and tested before the vessels are sent out to sea.
No matter how much effort manufacturers put in, there will always be situations where accidents cannot be completely avoided. Accordingly, developers must build in routines to ensure that an unmanned ship inflicts the least possible damage and risk on any vessels nearby.
Unlike a human being, who may stop, back-pedal, and reassess when faced with new information, a program will gather a certain amount of information and then execute an action. Current technology makes it difficult to mimic the constant reassessment a crew can perform.
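The contrast described above can be made concrete with a small sketch. A conventional program commits once it has collected a set amount of data, whereas mimicking a crew's behaviour would mean re-evaluating after every new observation. The function names, decision rule, and threshold below are illustrative assumptions, not drawn from any real navigation system.

```python
def choose_action(readings):
    """Placeholder decision rule: act on the majority of sensor readings."""
    return "evade" if sum(readings) / len(readings) > 0.5 else "hold_course"

def one_shot_decision(stream, threshold=5):
    """Typical program: gather a fixed amount of information, then execute."""
    gathered = []
    for obs in stream:
        gathered.append(obs)
        if len(gathered) >= threshold:
            return choose_action(gathered)  # commits here; no later back-pedalling

def crew_like_decision(stream):
    """Crew-like behaviour: re-evaluate the choice after every observation."""
    gathered = []
    action = None
    for obs in stream:
        gathered.append(obs)
        action = choose_action(gathered)  # may reverse an earlier choice
    return action
```

Given a stream of hazard readings that starts alarming but later calms down, the one-shot program locks in an evasive manoeuvre after its first five readings, while the crew-like loop arrives at a different final answer because it keeps reassessing.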
Furthermore, crew members often act as a safety net, able to challenge a captain if a decision is based on incorrect or out-of-date information. Who will be a witness to the decision-making process on an autonomous vessel and, thus, able to step in?
Speaking to SAS sister title Fairplay, Walter Hannemann, product manager at digital maritime platform provider Dualog, said, “A machine may switch off an engine if it sees that it is functioning above operating parameters, but a human may realise that there is a reason to push on towards shore, such as bad weather approaching.”
The issues surrounding how far we can rely on autonomous vessels to take safe, well-informed actions have already been fiercely debated, not just in shipping but in the automotive and aerospace sectors too.
What is clear is that regulation will have its part to play. The International Maritime Organization's (IMO's) Maritime Safety Committee is already in the process of mapping out a new international legal framework for the safe operation of autonomous ships, and it will be interesting to see how big a part coding plays in safety conversations, if at all. However, it is important to note that the IMO's jurisdiction at present only covers international waters, and given the dependency of autonomous shipping on shore-based control centres, national and local authorities could also have a large role in developing regulation.
Meanwhile, the pace of development for unmanned vessels continues to accelerate; autonomous container ship Yara Birkeland is due to be launched later this year (albeit with a small crew on board for at least a few months). Ultimately, the key to the success of autonomous shipping will be trust, not just from authorities but also from cargo suppliers and insurers.
In the meantime, Hannemann suggests that when confronted with a dangerous situation, autonomous ships should be programmed to simply shut down and alert an onshore team, allowing human controllers to decide the best course of action. Until we can be certain that the programming underpinning autonomous ships is as safety-conscious and flexible in its approach to dangerous situations as a person, the fact is that man is better than machine.
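Hannemann's proposed fail-safe could be sketched as a simple state machine: on encountering a hazard the ship stops acting autonomously and hands the decision to shore. This is an illustrative sketch only; the class names, the confidence threshold, and the alert callback are hypothetical and not based on any real autonomous-ship control system.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    SAFE_SHUTDOWN = auto()  # systems halted, awaiting human instruction

class FailSafeController:
    """Hypothetical sketch: halt and escalate rather than act on uncertainty."""

    def __init__(self, alert_shore):
        self.mode = Mode.AUTONOMOUS
        self.alert_shore = alert_shore  # callback to the onshore control centre

    def on_hazard(self, description, sensor_confidence):
        # If sensor data is unreliable, shut down and let humans decide,
        # instead of executing an action based on uncertain information.
        if sensor_confidence < 0.8:
            self.mode = Mode.SAFE_SHUTDOWN
            self.alert_shore(description)
        return self.mode
```

For example, a hazard report backed by low sensor confidence would put the controller into safe shutdown and forward the report to the onshore team, mirroring the "stop and ask" behaviour the article describes.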