TM Forum Community


Do we need AI regulation?

  • 1.  Do we need AI regulation?

    TM Forum Member
    Posted Jul 28, 2017 13:51

    AI and machine learning are the future. Companies across multiple industries are investigating applications of AI that will have a direct impact on our day-to-day lives. From self-driving cars to automated medical diagnostics, AI failure could have massive consequences, including loss of life.

    This isn't a fact that has fallen on deaf ears. Elon Musk has even cautioned against the consequences of AI implementation. However, many leaders in the AI industry would still go as far as to characterize Elon Musk's warning as alarmist.

    Do you think that there are regulations that should be put in place to mitigate AI error? What would they look like?

    Read more on this topic here.



    ------------------------------
    Melanie DiGeorge
    Community Manager
    TM Forum
    ------------------------------


  • 2.  RE: Do we need AI regulation?

    Posted Jul 31, 2017 04:07
    Yes, we need AI regulation.
    Artificial intelligence is certainly going to become a dominant factor in many social, economic, and technical discussions. Its power and capacity will only grow. As we can already see, existing market leaders are gathering more and more data for statistical analysis. The grip of just a couple of multinationals on global decision making is already on the edge of being unacceptable. AI development will serve many, many people, but its control will rest with only a few, and it will threaten local and individual independence. So either we think about fair AI regulation now, or we wait and subject ourselves to a "late", less constructive discussion in 2030.

    ------------------------------
    Anton Van der Burgt
    Xelas energy software
    ------------------------------



  • 3.  RE: Do we need AI regulation?

    Posted Jul 31, 2017 07:38
    The category "AI" is broad and often misunderstood. We may as well ask, "Do you think there are regulations that should be put in place to mitigate errors with new medicines?" And the answer to that would be, "Of course………… Be more specific."

    As with the flying of drones or any other new technical innovation, there may be risks. With "AI", much has to do with the risk of confusing a machine intelligence with a human one. Indeed, that is at the very heart of the definition of "AI".

    A first proviso may be to delimit the scope of anything classified as "AI" to that which passes a test for whether it can legitimately be called "AI" or not, such as the proverbial "Turing Test", although that may be too simplistic. Having enforceable criteria to determine legitimate classification as "AI" could help us avoid the overselling of "AI".

    We have past examples of the overselling of "AI": from the over-enthusiastic prophecies of 1950s cybernetics through the "Expert Systems" and "Neural Networks" initiatives of the 1980s and '90s.

    The dangers of overselling arise from too weak an understanding of technical limitations combined with captivation by titillating promises. Overselling is particularly annoying to those interested in advancing technologies rather than simply promoting them.

    So one important test for legitimising any "AI" should be whether it actually is "AI" or not. And that throws us back to very fundamental considerations. (Such as: is there any virtue in calling this thing "AI"? If it works, does it need the epithet? Given that it works, what are its specific risks and what should we do about them? Can we blame "AI" if we fail to be responsible technologists?)

    ------------------------------
    Peter Duxbury-Smith
    Gigaclear

    ------------------------------



  • 4.  RE: Do we need AI regulation?

    Posted Jul 31, 2017 04:58

    Yes,

    I believe that AI needs a lot of regulation. Many AI systems use the internet as their initial information base, reading and interpreting information from it. As we all know, the internet contains sexist, racist, biased, and partially correct or incorrect data, which propagates into AI systems. The machines learn this data too, and over time the viewpoint of these AI systems will change entirely. AI information bases must be thoroughly regulated and periodically audited to keep AI systems in good health.
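
    As a minimal sketch of the kind of periodic audit suggested here (the corpus file name and blocklist terms below are hypothetical placeholders; a real audit would need much richer bias and quality metrics):

        # Minimal sketch: track how often documents in a training corpus contain
        # blocklisted terms, and compare that rate across periodic audit runs.
        BLOCKLIST = {"example_slur", "example_biased_term"}  # hypothetical terms

        def audit(path: str) -> float:
            """Return the fraction of documents (one per line) containing a blocklisted term."""
            flagged = total = 0
            with open(path, encoding="utf-8") as f:
                for line in f:
                    total += 1
                    if set(line.lower().split()) & BLOCKLIST:
                        flagged += 1
            return flagged / total if total else 0.0

        print(f"Flagged rate: {audit('corpus.txt'):.1%}")  # 'corpus.txt' is a placeholder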

     

    Regards,

    ------------------------------
    Vineesh
    Solution Architect
    Ericsson India Global Services Pvt. Ltd
    ------------------------------






  • 5.  RE: Do we need AI regulation?

    TM Forum Member
    Posted Jul 31, 2017 10:03
    The volume of data now available coupled with the limitless compute power of the cloud certainly makes AI more relevant today.  We're just beginning to see the potential of the technology across a broad spectrum of industries.  I don't quite subscribe to the "sky is falling" club, given that the technology has a long way to go before it can lead to the type of problems Musk describes.  That said, I don't have access to the "advanced AI" that he claims to have seen.  Nevertheless, even small advancements will begin to have social and economic repercussions in jobs, entertainment, and other areas.  I'm much more concerned with training displaced factory and retail workers in new careers once their jobs are fully automated than with robots firing indiscriminately down the street.  I agree that industry and regulators should already be preparing for this type of impact.
    As for broader regulation, particularly around moral and ethical concerns,  let's not jump the gun.  Stringent requirements without a thorough understanding of AI and its potential (by legislators and others) would curtail innovation.  The industry is aware of potential risks and challenges, and has already established the Partnership on AI to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

    ------------------------------
    Rick Lievano
    Microsoft Corporation
    ------------------------------



  • 6.  RE: Do we need AI regulation?

    Posted Aug 01, 2017 12:07
    I was at DEFCON in Las Vegas over the weekend, and sat in on a couple of presentations that touched on weaponized AI propaganda.  We will do the same with AI as we have with every technology; we build and deploy it before the full consequences can be understood.  There's no way around that really.  But we can be better prepared for the consequences by carefully considering the risks and creating a governing body dedicated to keeping an eye on the issues.  You can be sure there are bad actors evaluating AI's applications already.  

    At DEFCON, Garry Kasparov made the point that even with advanced AI, mankind will not be 'irrelevant': computers may be good at coming up with answers, but they don't necessarily know the good questions to ask.  If we don't regulate well (and btw, I am sure we will fail to do that well), AI will ask questions that may not be in anyone's best interest, even AI's.

    ------------------------------
    Tom Flynn
    Neural Technologies
    ------------------------------



  • 7.  RE: Do we need AI regulation?

    Posted Aug 02, 2017 03:24
    Why should computers "...not know good questions to ask"? There might be good questions machines could be better at identifying. I suspect what Kasparov meant was that there are some questions which require attributes of humanity to make them "good." Now the possibility remains that a machine could conceivably be made to have those attributes... or am I wrong? Would not "AI", fully defined, require many of those attributes? Or must they always have something transcending the kind of analysis necessary to make them machine-implementable?

    It seems a resort to a transcendent soul or mind which humans uniquely have. I am not averse to that idea (though some may be). But I question whether only humans can have the kind of morality needed to protect all of humanity from immoral or "bad" questions, which I take to be the inference here.

    Is there a viable scenario where we use robot lawyers to protect humanity from other robots?

    ------------------------------
    Peter Duxbury-Smith
    Gigaclear

    ------------------------------



  • 8.  RE: Do we need AI regulation?

    TM Forum Member
    Posted Aug 02, 2017 09:59
    Precisely - unless you want government regulators stepping in front of innovation and slowing it down until they (in all their wisdom) understand it.  Even then it becomes a political decision.  Worse than Net Neutrality ever thought of being.

    ------------------------------
    Jim Warner
    Westport Group
    ------------------------------



  • 9.  RE: Do we need AI regulation?

    Posted Aug 02, 2017 16:50
    To Tom Flynn:

    Dear Tom, very good input/feedback, very true and to the point. Let's try NOT to repeat what we have always done in the past:
    deploy - yes, of course, but LET'S talk openly about the consequences. And Mr. Kasparov is of course also right, but I am not extremely confident that business will prefer wisdom when it can get rapid automation.

    ------------------------------
    Anton Van der Burgt
    Xelas software
    ------------------------------



  • 10.  RE: Do we need AI regulation?

    Posted Aug 03, 2017 04:04
    Anton, Tom,

    So we should talk to identify what to do. Let's be more specific. What are the topics we need to discuss?

    I suggest:
    * Delimitation of what is in scope - i.e. What is "AI" and which part of it is dangerous and needs to be legislated for?
    * What is safe "AI" that does not need to be legislated for?
    * How much of the projected danger requiring legislation is specific to "AI" and what can be/already is dealt with by other legislation?

    Over to you for more suggestions.

    I wonder whether discovery of what needs to be discussed could be a Machine Learning project all of its own?

    ------------------------------
    Peter Duxbury-Smith
    UNSPECIFIED
    ------------------------------



  • 11.  RE: Do we need AI regulation?

    Posted Aug 04, 2017 04:28
    Peter,

    Good, let's be more specific.  I will think it over during the weekend and come back with what I think are the main issues.

    I get your first three points, but not fully what you are trying to say in your last sentence... are you saying: let's try AI to define these questions and answers? If that is what you are suggesting... then I think we should do both: come up with our own list AND also try a pilot to see what AI would answer, and compare the lists. But I think I would favor doing such a comparison a little later, with a more "closed" question.

    Anton.





  • 12.  RE: Do we need AI regulation?

    Posted Aug 04, 2017 06:17
    Anton,

    Really, what I was musing over when I suggested that maybe there could be a Machine Learning project to try to discover the right questions to ask about regulating AI was this:

    Bots or other machines might be very much faster and more adaptive than us human question-ponderers at finding the questions which "nip things in the bud" before they become problematic, even below the limen of our human reasoning.
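
    As a minimal sketch of where such a project might start (assuming a plain-text export of discussions like this one in a hypothetical file called thread.txt, one post per line), a simple topic model could surface candidate themes worth turning into questions:

        # Minimal sketch: surface candidate discussion themes from a text dump of a thread.
        # The file name "thread.txt" is hypothetical; one post per line is assumed.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        with open("thread.txt", encoding="utf-8") as f:
            posts = [line.strip() for line in f if line.strip()]

        # Bag-of-words representation, dropping very common English words.
        vectorizer = CountVectorizer(stop_words="english")
        doc_term = vectorizer.fit_transform(posts)

        # Fit a small topic model; 5 topics is an arbitrary starting point.
        lda = LatentDirichletAllocation(n_components=5, random_state=0)
        lda.fit(doc_term)

        # Print the top words of each topic as crude "candidate questions to ask".
        terms = vectorizer.get_feature_names_out()
        for i, topic in enumerate(lda.components_):
            top = [terms[j] for j in topic.argsort()[-8:][::-1]]
            print(f"Topic {i}: {', '.join(top)}")

    The clusters it finds would only be crude word groupings, but they might flag themes nobody has yet thought to ask about.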

    So maybe we need such automated help to cope with the upcoming world of which Elon Musk has such fear.

    It was all dreamt up in the Terminator movies, wasn't it? Or maybe "Robot Wars". 

    In any case, if we are faced with a future in which machines are going to start invading our moral/legal/enforcement space more, why not embrace it, rather than shivering in fear about it? Let's assert our advantage while we can.

    ------------------------------
    Peter Duxbury-Smith
    Gigaclear

    ------------------------------



  • 13.  RE: Do we need AI regulation?

    Posted Aug 08, 2017 06:21
    To Peter Duxbury-Smith (and the others following)

    Peter,
    I have been thinking about our discussion, and yesterday I happened to have a conversation with Niamh Clancy from TM Forum. She told me that AI application is one of the key attention points of the Forum in the coming period. I also saw a questionnaire from TM Forum to the members about AI interests, I assume to focus the TMF actions further.
    I now have doubts about our idea to investigate regulation needs for AI in this discussion, because it may be a lot better to make it part of the bigger TMF AI "program".
    Meanwhile, I was of course mentally busy thinking about the questions in your email, and concluded that I personally am less interested in the use of AI within robotics or generic process support systems. That does not mean I don't think that is important; it is... since it is already so far developed, and still there is not even a skeleton of regulation for it, as far as I am aware (I am not extremely good at fact finding).
    The prime reason I reacted to the topic is the role I foresee for AI in relation to big data analytics, and the role data analytics already plays in influencing/manipulating big audiences/masses... so the point where AI (machine-found conclusions) is directly, deliberately, and consciously used to influence/steer human behaviour.
    CSPs are both structural enablers and potential users/corporations doing data gathering/analytics at big scale...
    This I find extremely worrying, since I think it could be another step towards more central organisational power. I am convinced centralisation of power and wealth has already gone too far, with 0.1% of the population owning 95%... and I am pretty sure human/earth prosperity and wisdom is better served by stimulating decentralisation/variety etc... (so I also like people playing analogue musical instruments).
    I am in favor of regulation that avoids a further increase in the power and influence of central organisations based on their use of AI systems to influence the public...
    So far my personal concerns/interests...

    To Emmanuel Otchere's input:

    This is also very valuable to share, and probably an example of what I was trying to explain above: this ART of duplicating people will certainly influence the public... regulation FORCING users to NOTIFY the public that this is a duplication/fake would help, but it would still not prevent these duplicates from influencing public perception more unconsciously.  Maybe that would be a good point to start regulation.  Hope you keep participating in the discussion...

    ------------------------------
    Anton Van der Burgt
    Xelas software
    ------------------------------



  • 15.  RE: Do we need AI regulation?

    Posted Aug 10, 2017 10:49
    I think there is potential relevance here to the topic of AI regulation:

    • Within the context of IoT, is there an appropriate locus of application for some form of AI to perform a regulatory role?
    • Is the most appropriate way to prevent machines "doing things behind our backs" in the IoT to have some form of automated AI regulator? 

    I have looked again at reports of what Kasparov is supposed to have said about AI, and I understand he was advocating embracing AI and using it for such purposes. What puzzles me in the report is that he apparently thought that AI has somehow progressed to have autonomies that it did not previously have. For me, and my gauge of truth, this does not compute.

    I do think that any mention of "AI" inevitably invokes a demand for an explanation of what "AI" is.  But, particularly right now, there are plenty of uses of the term where there clearly has not been much thought about whether it appropriately applies.  So there certainly is a case for a discussion of "AI" in the more general forum. (Hopefully precision will grow as people use the term more and more.)


    ------------------------------
    Peter Duxbury-Smith
    Gigaclear

    ------------------------------



  • 17.  RE: Do we need AI regulation?

    Posted Aug 04, 2017 07:17
    So a slightly different take on the question "Do we need AI regulation?" is "Can we use AI to regulate?"

    This has implications for what governments are beginning to demand of ISPs and  telcos  w.r.t. taking some form of responsible measures to prevent misuses of their facilities. (i.e. Holding the mediators responsible for what gets mediated.)

    The kind of measures we may take could include deployment of smart bots to police the traffic.  

    We'd have to build these smart bots first. Catalyst project, anyone?

    ------------------------------
    Peter Duxbury-Smith
    Intelligent Solutions Ltd

    ------------------------------



  • 18.  RE: Do we need AI regulation?

    TM Forum Member
    Posted Aug 07, 2017 03:17
    Interesting discussion!

    Perhaps the question looks simple on the surface but is packed with a chain of questions as complex as AI itself, which need to be unpacked carefully into separate topics: the types of AI and their degree of existential value to mankind, when AI regulation should be triggered/enforced, to what extent legislation is viable in the application of AI, and how regulation can be managed in a way that it does not kill 'good' innovation.

    I saw a simple implementation of AI in video editing that replicated Obama and other top figures making speeches and statements that were completely untrue (AI mimics Obama's speaking style and demeanor). These are basic forms of AI which are gradually becoming democratized as completely benign tools, but they have far-reaching implications as "the good, the bad and the ugly" in man gain access to them. Unpacking the question so we can actually get a brutal view of the ramifications of AI could also help raise the question of when to step in with regulation, and in such a way as to keep the 'good innovation' intentions going.

    We are only just beginning to scratch the surface of AI applications, and the extrapolated implications are scary to most of us. So far, efforts have mostly been devoted to defense, advanced weaponry, movement, and mimicking the five senses of man. The proof point on the capability of AI becomes scary once we begin to hybridize applications. Asimov's three laws of robotics can in some ways be considered "informal" legislation for the maker to integrate; however, how this is enforced consistently is a question that keeps arising (if AI can rewrite its own code).

    While Asimov's three laws consider the ultimate form of AI, the very basic AI applications and tools of today can already pose enough danger to justify Elon's call to action.

    ------------------------------
    Emmanuel Amamoo-Otchere
    Huawei Technologies Co. Ltd
    ------------------------------



  • 19.  RE: Do we need AI regulation?

    TM Forum Member
    Posted Aug 14, 2017 05:30
    Edited by System Oct 30, 2018 15:01

    An earlier question asked, what part(s) of AI are dangerous and should be regulated?

    Any AI that has the capability to perform a physical action should require some form of regulation.

    My rationale starts with an AI system being able to move an object, control the flow of a liquid, or unlock a door. I realized that when an AI system has control over any input or output that is not digital, any type of physical action, there could be some kind of risk. The purpose of regulations is to control or limit behavior, to mitigate risks or unwanted behavior.

    So..

    In the simplest of terms, regulations should apply to any AI system with the ability to directly change or influence the state or movement of matter.

    Of course, this definition is just a little too broad. Simply flipping a bit within an AI system changes the physical state of matter. Writing data to a hard disk drive causes the platters to spin, the actuator arm to move, and the magnetic states of the storage medium to change.

    Simply initiating an AI system, applying power, would fall under this broad definition. Attempting to narrow the definition has the potential to overlook something. So there should be a single exception, allowing the AI system basic functionality, but no more.

    The exception should only allow an AI system to transmit, receive, and store data in the course of normal computational processing, limited to the basic processing and storage functionality of a computer system. Anything else and there's the potential to overlook something, allowing potential risks.

    This exception probably needs to be stricter, since the AI system shouldn't autonomously send data to a printer or make sounds through a speaker. What we really do not want is for an AI system to directly move the actuator arm of a hard disk drive, exceeding the bare essentials that allow an AI system to function.

    Basically, reading and writing the ones and zeros should be the only exception that does not require some form of regulation, allowing the state of matter in the bare metal to change: the movement of electrons, or the indirect movement of its own internal components.

    Of course, many regulations may be safety related. If an AI has the ability to lock and unlock doors, being able to open a door in case of fire is important. However, this could fall under existing regulation, since the AI could be classified as an obstruction.

    What will happen when an autonomous car fails to stop for the police? Some future regulation may require the police to have a system to disable an AI-controlled vehicle, and the vehicle may be required to support such a system.

    AI can process, observe, and manipulate data without any need for regulations: generating reports, statistics, or even making predictions. But once AI has the ability to perform any type of "physical" action, it should be subject to some form of regulation.

    There may also be a need for regulations that apply to some purely digital AI activities (SEC, FAA, etc.).
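
    Purely as a sketch of the criterion above (the capability labels below are hypothetical; a real rule would need a formal taxonomy of capabilities), the "physical action" test could be expressed as:

        # Minimal sketch of the "physical action" test described above.
        # Capability labels are hypothetical placeholders, not a real taxonomy.

        # Baseline functions exempted: ordinary compute and storage I/O.
        EXEMPT = {"compute", "read_data", "write_data", "transmit_data", "receive_data"}

        def requires_regulation(capabilities: set) -> bool:
            """True if any declared capability goes beyond basic compute/storage,
            i.e. the system can directly change or influence the state or
            movement of matter."""
            return bool(capabilities - EXEMPT)

        # Example: a reporting-only system vs. one that can unlock doors.
        print(requires_regulation({"compute", "write_data"}))   # False
        print(requires_regulation({"compute", "unlock_door"}))  # True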



    ------------------------------
    Brian LaVallee
    INVITE Communications Co. Ltd
    ------------------------------



  • 20.  RE: Do we need AI regulation?

    Posted Aug 14, 2017 06:33
    To Brian:
    Thanks for your thoughts, and doubts.
    It strengthens my suggestion to limit the AI regulation discussion to quite advanced AI applications.

    I would suggest NOT including "normal" computing applications and robotics that support or perform processes which can still easily be overseen/supervised by humans. Of course this is, as you correctly point out, also AI, and it will also require better regulation, but it would make the discussion very wide, and we would need to go back decades for many applications as well...

    So my proposal would be to focus first of all on AI applications where AI will make decisions/suggestions that cannot/will not regularly be overseen/supervised by human intelligence, and where AI results would/could be used to directly influence the behaviour of large groups of people, i.e. where they would really become power instruments.

    ------------------------------
    Anton Van der Burgt
    Xelas software
    ------------------------------



  • 21.  RE: Do we need AI regulation?

    Posted Aug 15, 2017 03:06
    So how would we DO the regulation?

    ------------------------------
    Peter Duxbury-Smith
    Gigaclear

    ------------------------------