An earlier question asked, what part(s) of AI are dangerous and should be regulated?
Any AI that has the capability to perform a physical action should require some form of regulation.
My rationale starts with an AI system being able to move an object, control the flow of a liquid, or unlock a door. I realized that when an AI system has control over any input or output that is not digital (any type of physical action), there could be some kind of risk. The purpose of regulation is to control or limit behavior, mitigating risks or unwanted outcomes.
So...
In the simplest of terms, regulations should apply to any AI system with the ability to directly change or influence the state or movement of matter.
Of course this definition is just a little too broad. Simply flipping a bit within an AI system changes the physical state of matter. Writing data to a hard disk drive causes the platters to spin, the actuator arm to move, and changes magnetic states of the storage medium.
Even initiating an AI system, simply applying power, would fall under this broad definition. Yet attempting to narrow the definition has the potential to overlook something. So there should be a single exception, allowing the AI system basic functionality, but no more.
The exception should only allow an AI system to transmit, receive, and store data in the course of normal computational processing, limited to the basic processing and storage functionality of a computer system. Anything beyond that, and there's the potential to overlook something and allow risks.
This exception probably needs to be stricter still, since the AI system shouldn't autonomously send data to a printer or make sounds through a speaker. What we really do not want is for an AI system to directly move the actuator arm of a hard disk drive, exceeding the bare essentials that allow it to function.
Basically, reading and writing ones and zeros should be the only exception that does not require some form of regulation: allowing the state of matter in the bare metal to change, electrons to move, and its own internal components to move indirectly.
Of course, many regulations may be safety related. If an AI has the ability to lock and unlock doors, being able to open a door in case of fire is important. However, this could fall under existing regulation, since a locked door controlled by the AI could be classified as an obstruction.
What will happen when an autonomous car fails to stop for the police? Some future regulation may require the police to have a system to disable an AI-controlled vehicle, and the vehicle may be required to support such a system.
AI can process, observe, and manipulate data without any need for regulation: generating reports, producing statistics, or even making predictions. But once an AI has the ability to perform any type of "physical" action, it should be subject to some form of regulation.
There may also be a need for regulations that apply to some purely digital AI activities (SEC, FAA, etc.).
------------------------------
Brian LaVallee
INVITE Communications Co. Ltd
------------------------------
Original Message:
Sent: 07-28-2017 13:51
From: Melanie DiGeorge
Subject: Do we need AI regulation?
AI and machine learning are the future. Companies across multiple industries are investigating applications of AI that will have a direct impact on our day-to-day lives. From self-driving cars to automated medical diagnostics, AI failure could have massive consequences, resulting in loss of life.
This isn't a fact that has fallen on deaf ears. Elon Musk has even cautioned against the consequences of AI implementation. However, many leaders in the AI industry would go so far as to characterize Elon Musk's warning as alarmist.
Do you think that there are regulations that should be put in place to mitigate AI error? What would they look like?
Read more on this topic here.
------------------------------
Melanie DiGeorge
Community Manager
TM Forum
------------------------------