Governments must install AI safeguards



Opinion

This article was published 20/02/2024, so information in it may no longer be current.

“I didn’t see robots that way. I saw them as machines — advanced machines — but machines. They might be dangerous but surely safety factors would be built in. The safety factors might be faulty or inadequate or might fail under unexpected types of stresses, but such failures could always yield experience that could be used to improve the models. After all, all devices have their dangers. The discovery of speech introduced communication — and lies. The discovery of fire introduced cooking — and arson.”

— Isaac Asimov, “Robot Visions”

When pioneering science fiction author Isaac Asimov dreamed of possible futures in which humanity had created sentient robots and artificial intelligences, a key component of his world-building was that these machines would surely include built-in safeguards against their misuse.

The first and most important of his three laws of robotics was that a robot could not injure a human being or, through inaction, allow one to come to harm.

Current artificial intelligences are nowhere near the point where they could be mistaken for human beings under scrutiny, whatever the tech companies' hype might suggest. But a recent case in British Columbia shows the need to build in safeguards, or to impose consequences on companies that use the technology improperly.

The B.C. Civil Resolution Tribunal ruled in favour of a man suing Air Canada who was told by a chatbot on the airline’s website that he could retroactively apply for bereavement pricing for his flights to and from his grandmother’s funeral.

However, when the man applied for reimbursement, Air Canada told him that he had to apply for the lower fare before making the trip and refused to compensate him.

According to The Canadian Press report on the situation, a member of the tribunal said “In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions.”

Even though another page on Air Canada’s website stated the policies surrounding bereavement fares, the tribunal determined that the airline is responsible for the content hosted on its website — including the chatbot — and that it shouldn’t be up to the consumer to figure out which parts are and are not accurate.

This issue gets to the core of a major problem with chatbots and artificial intelligence: laziness.

Many companies are jumping to use them as a way of cutting the human element out of their processes, essentially allowing them to reduce staffing with a low-cost replacement.

The problem is that while humans are not infallible, they can learn and, importantly, they can be held accountable.

Thankfully, this case did not put anyone’s life directly at risk. But what happens as use of this technology is more widely adopted?

It would be disastrous for people to improperly treat an illness or mix the wrong chemicals together based on the faulty advice provided by a chatbot, and then for the creator or owner of that bot to argue against liability because it’s a legally distinct entity.

Last year, online tech outlets CNET and Gizmodo had notable incidents in which AI-written articles had to be corrected. Fortunately, in Gizmodo’s case, the errors were confined to a piece that incorrectly listed the chronology of the “Star Wars” franchise.

It wouldn’t be so funny if the story had bungled the details on a natural disaster or other public emergency. And that doesn’t even touch on the potential plagiarism concerns that come from AI absorbing and regurgitating the materials it is trained on without a shred of originality.

If companies are determined to use artificial intelligence in place of human employees, then those companies should be held liable for its mistakes, just as they would be for the mistakes of flesh-and-blood workers.

The verdict against Air Canada is a good precedent to set, but with only a few hundred dollars at stake, the country has yet to see how the legal system will handle cases where more financial damage is done or people have been harmed because of artificial intelligence.

For this reason, it would behoove our provincial and federal governments to set some legislated standards to put safeguards in place or implement penalties to protect the public.
