How could artificial intelligence harm us?
We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous.
How could artificial intelligence harm us?
Tags: philosophy, social, neo-luddism
Asked Sep 16 at 6:10 by Manak; edited Sep 16 at 21:19 by DukeZhou♦.
Comments:
– Neil Slater (Sep 16 at 6:33): This is a bit broad, as there are many reasons and scenarios suggested in which AI could become dangerous. For instance, as DuttaA suggests above, humans may design intelligent weapons systems that decide what to target, and this is a real worry because it is already possible using narrow AI. Perhaps give more context to the specific fears that you want to understand, by quoting or linking a specific concern that you have read (please use edit).
– nbro (Sep 16 at 15:44): @NeilSlater Yes, it might be too broad, but I think that this answer ai.stackexchange.com/a/15462/2444 provides some plausible reasons. I edited the question to remove the possibly wrong assumption.
– DukeZhou♦ (Sep 16 at 19:40): Is this question specifically about "superintelligence" or AI in general? (For instance, if hypothetical superintelligence, then the hypothetical "control problem" is an issue. However, contemporary automated weapons systems won't be superintelligent, nor will autonomous vehicles, and those can harm humans.)
– nbro (Sep 16 at 20:10): @DukeZhou The OP did not originally and explicitly mention superintelligence, but I suppose he was referring to anything that can be considered AI, including an SI.
– J... (Sep 17 at 12:53): First ask: how can normal intelligence harm you? The answer is then the same.
14 Answers
tl;dr
There are many valid reasons why people might fear (or, better, be concerned about) AI; not all of them involve robots and apocalyptic scenarios.
To better illustrate these concerns, I'll try to split them into three categories.
Conscious AI
This is the type of AI your question is referring to: a super-intelligent, conscious AI that will destroy or enslave humanity. It is mostly brought to us by science fiction. Some notable Hollywood examples are "The Terminator", "The Matrix" and "Avengers: Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, Robot", which was also adapted as a movie).
The basic premise of most of these works is that AI will evolve to a point where it becomes conscious and surpasses humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made of "ambiguous intelligence" (which I think is more realistic).
In the real world, AI is focused on solving specific tasks! An AI agent capable of solving problems across many different domains (e.g. understanding speech, processing images, driving, and so on, like humans can) is referred to as Artificial General Intelligence, and that is what would be required for an AI to be able to "think" and become conscious.
Realistically, we are a long, long way from Artificial General Intelligence! That being said, there is no evidence that it can't be achieved in the future. So even though we are still in the infancy of AI, we have no reason to believe that AI won't eventually evolve to a point where it is more intelligent than humans.
Using AI with malicious intent
Even though an AI conquering the world is a long way from happening, there are several reasons to be concerned about AI today that don't involve robots.
The second category, which I want to focus on a bit more, covers malicious uses of today's AI.
I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:
DeepFakes: a technique for imposing someone's face onto an image or video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3
With the use of mass-surveillance systems and facial-recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though China is what usually comes to mind when we think of mass surveillance, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians that seems to be taken straight out of the pages of George Orwell's 1984.
Influencing people through social media. Aside from recognizing users' tastes for the purpose of targeted marketing and ad placement (a common practice of many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: 1, 2, 3.
Hacking.
Military applications, e.g. drone attacks, missile targeting systems.
Adverse effects of AI
This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous is that these effects, while harmful, aren't done intentionally; rather they occur with the development of AI. Some examples are:
Jobs becoming redundant. As AI becomes better, many jobs will be replaced by it. Unfortunately, there is not much that can be done about this, as most technological developments have this side effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, and computers did the same).
Reinforcing the bias in our data. This is a very interesting category, as AI (and especially neural networks) is only as good as the data it is trained on, and it has a tendency to perpetuate and even amplify the different forms of social bias already present in the data (see the sketch after this list). There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
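To make the bias point concrete, here is a minimal, purely illustrative sketch (it assumes NumPy and scikit-learn are installed, and every name and number in it is invented): a logistic regression fitted on synthetic "historical hiring" data whose labels already encode a group bias ends up putting weight on the protected attribute, simply because it imitates the biased labels.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical historical data: "skill" is the only feature that should matter,
    # but past decisions were also influenced by group membership.
    group = rng.integers(0, 2, n)                   # protected attribute (0 or 1)
    skill = rng.normal(0.0, 1.0, n)                 # legitimate feature
    hired = (skill + 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.5   # biased labels

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # The model learns to use the protected attribute, because the labels it
    # imitates already encode the bias.
    print("weight on skill:", model.coef_[0][0])
    print("weight on group:", model.coef_[0][1])    # clearly non-zero

With weights like these, the model treats two applicants of identical skill differently depending on their group, which is exactly the kind of perpetuated bias described above.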
Comments:
– kukis (Sep 16 at 19:40): I don't understand why "jobs becoming redundant" is a serious concern. How beautiful the world would be if no one (or at least the vast majority of humans) needed to work and could instead focus on their hobbies and enjoying life.
– nbro (Sep 16 at 20:37): @kukis How can you get food, a house, etc., without a job, unless you are already rich? A job means survival for most people.
– Nat (Sep 17 at 3:19): Regarding jobs becoming redundant, it seems like we can do something about it, i.e. revamp our economic models to not rely on humans having jobs. I mean, if our economic systems would break down upon being flooded with ample cheap labor, then they're obviously flawed. And given that we expect this flaw to become dangerous in the foreseeable future, it ought to be fixed.
– Darrel Hoffman (Sep 18 at 14:54): Another example of where AI could cause major issues is stock trading. The vast majority of stock trades these days are made by increasingly competitive AIs, which can react far faster than human traders. But it's gotten to the point where even the humans who wrote them don't necessarily understand why these AIs make the decisions they do, and there have been some catastrophic effects on the markets from stock-predicting algorithms gone awry.
– Djib2011 (Sep 19 at 10:59): @penelope Not every job that AI tries to replace is "low-interest". I'd argue that there are a lot of high-demand jobs that could be replaced in the (not so distant) future. Some examples are doctors, traders and pilots. If AI keeps advancing and keeps getting better at diagnosing diseases, it wouldn't be unreasonable to think that doctor jobs will be cut down.
Short term
Physical accidents, e.g. due to industrial machinery, aircraft autopilot, self-driving cars. Especially in the case of unusual situations such as extreme weather or sensor failure. Typically an AI will function poorly under conditions where it has not been extensively tested.
Social impacts such as reduced job availability and barriers for the underprivileged with respect to loans, insurance, and parole.
Recommendation engines are manipulating us more and more to change our behaviours (as well as reinforce our own "small world" bubbles). Recommendation engines routinely serve up inappropriate content of various sorts to young children, often because content creators (e.g. on YouTube) use the right keyword stuffing to appear to be child-friendly.
Political manipulation... Enough said, I think.
Plausible deniability of privacy invasion. Now that AI can read your email and even make phone calls for you, it's easy for someone to have humans act on your personal information and claim that they got a computer to do it.
Turning war into a video game, that is, replacing soldiers with machines being operated remotely by someone who is not in any danger and is far removed from his/her casualties.
Lack of transparency. We are trusting machines to make decisions with very little means of getting the justification behind a decision.
Resource consumption and pollution. This is not just an AI problem; however, every improvement in AI creates more demand for Big Data, and together these ramp up the need for storage, processing, and networking. On top of the electricity and rare-mineral consumption, the infrastructure needs to be disposed of after its several-year lifespan.
Surveillance — with the ubiquity of smartphones and listening devices, there is a gold mine of data but too much to sift through every piece. Get an AI to sift through it, of course!
Cybersecurity — cybercriminals are increasingly leveraging AI to attack their targets.
Did I mention that all of these are in full swing already?
Long Term
Although there is no clear line between AI and AGI, this section is more about what happens when we go further towards AGI. I see two alternatives:
- Either we develop AGI as a result of our improved understanding of the nature of intelligence,
- or we slap together something that seems to work but we don't understand very well, much like a lot of machine learning right now.
In the first case, if an AI "goes rogue" we can build other AIs to outwit and neutralise it. In the second case, we can't, and we're doomed. AIs will be a new life form and we may go extinct.
Here are some potential problems:
Copy and paste. One problem with AGI is that it could quite conceivably run on a desktop computer, which creates a number of problems:
Script Kiddies — people could download an AI and set up the parameters in a destructive way. Relatedly,
Criminal or terrorist groups would be able to configure an AI to their liking. You don't need to find an expert on bomb making or bioweapons if you can download an AI, tell it to do some research and then give you step-by-step instructions.
Self-replicating AI — there are plenty of computer games about this. AI breaks loose and spreads like a virus. The more processing power, the better able it is to protect itself and spread further.
Invasion of computing resources. It is likely that more computing power is beneficial to an AI. An AI might buy or steal server resources, or the resources of desktops and mobile devices. Taken to an extreme, this could mean that all our devices simply become unusable, which would wreak havoc on the world immediately. It could also mean massive electricity consumption (and it would be hard to "pull the plug" because power plants are computer controlled!).
Automated factories. An AGI wishing to gain more of a physical presence in the world could take over factories to produce robots, which could build new factories and essentially create bodies for itself.
These are rather philosophical considerations, but some would argue that AI would destroy what makes us human:
Inferiority. What if plenty of AI entities were smarter, faster, more reliable and more creative than the best humans?
Pointlessness. With robots replacing the need for physical labour and AIs replacing the need for intellectual labour, we will really have nothing to do. Nobody's going to get the Nobel Prize again because the AI will already be ahead. Why even get educated in the first place?
Monoculture/stagnation — in various scenarios (such as a single "benevolent dictator" AGI) society could become fixed in a perpetual pattern without new ideas or any sort of change (pleasant though it may be). Basically, Brave New World.
I think AGI is coming and we need to be mindful of these problems so that we can minimise them.
Comments:
– user253751 (Sep 17 at 17:19): I think the kind of AI that's capable of reprogramming factories to build robots for itself is a long way away. Modern "AI" is just really sophisticated pattern recognition.
– Artelius (Sep 19 at 9:54): I said "long term" and "AGI". AGI is, by definition, well beyond sophisticated pattern recognition. And although "sophisticated pattern recognition" is far and away the most common thing used in real-world applications, there is already plenty of work in other directions (particularly problem decomposition/action planning, which IMO is the lynchpin of these types of scenarios).
In addition to the other answers, I would like to add to nuking cookie factory example:
Machine learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI running a cookie factory. The goal they implement is to sell as many cookies as possible for the highest profitable margin.
Now, imagine an AI which is sufficiently powerful. This AI will notice that if he nukes all other cookie factories, everybody has to buy cookies in his factory, making sales rise and profits higher.
So, the human error here is giving no penalty for using violence in the algorithm. This is easily overlooked because humans didn't expect the algorithm to come to this conclusion.
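As a toy illustration of this kind of objective mis-specification (every plan name and number below is invented), an agent that simply maximizes the stated objective happily picks the harmful plan unless harm is explicitly penalised:

    # Each hypothetical "plan" has an expected profit and a measure of harm caused.
    plans = {
        "improve recipe":   {"profit": 1.2, "harm": 0.0},
        "cut prices":       {"profit": 1.5, "harm": 0.0},
        "nuke competitors": {"profit": 9.0, "harm": 1e6},
    }

    def naive_objective(plan):
        return plan["profit"]                                 # what the humans actually wrote down

    def safer_objective(plan, harm_weight=1000.0):
        return plan["profit"] - harm_weight * plan["harm"]    # harm explicitly penalised

    print(max(plans, key=lambda k: naive_objective(plans[k])))   # -> "nuke competitors"
    print(max(plans, key=lambda k: safer_objective(plans[k])))   # -> "cut prices"

The fix looks trivial in a toy like this; the hard part in practice is anticipating every "harm" term the objective should have contained in the first place.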
Comments:
– GammaGames (Sep 16 at 21:48): This reminds me of a real-world example I saw on Reddit once, where someone was training an AI to climb some stairs in Unity. It discovered that it could press itself into the ground with a lot of force and the physics would glitch, causing it to fly into the air and reach the top fastest.
– nick012000 (Sep 17 at 10:52): Or, worse, it'd decide that humans are made out of atoms that would be better used to make cookies.
– Zakk Diaz (Sep 20 at 18:52): I've heard this argument before. One of the fallacies of predicting an AI doomsday is that we can't predict what the AI will do. It's entirely possible the AI would recognize that nuking other cookie companies might throw off the global economy and destroy any potential demand for cookies... law of economics, supply AND demand.
My favorite scenario for harm by AI involves not high intelligence, but low intelligence. Specifically, the grey goo hypothesis.
This is where a self-replicating, automated process runs amok and converts all resources into copies of itself.
The point here is that the AI is not "smart" in the sense of having high intelligence or general intelligence--it is merely very good at a single thing and has the ability to replicate exponentially.
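A rough back-of-the-envelope sketch of why the growth rate alone is the threat (the doubling time and the target population are arbitrary assumptions, purely for illustration):

    # How many doublings does it take for a single replicator to become an
    # astronomically large population? (Numbers are illustrative assumptions.)
    copies = 1
    hours = 0
    doubling_time_h = 1          # assume one doubling per hour
    target = 1e30                # an arbitrarily huge population

    while copies < target:
        copies *= 2
        hours += doubling_time_h

    print(hours)   # about 100 doublings, i.e. roughly four days at this rate

Nothing in that loop requires planning or general intelligence; the danger comes entirely from unchecked exponential replication.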
Comments:
– user253751 (Sep 17 at 17:19): FWIW, humans are already grey goo. We're selfish grey goo that doesn't want to be replaced by an even more efficient grey goo.
– Stian Yttervik (Sep 18 at 6:59): @immibis That is of course a philosophical POV, not fact. There are plenty of people who differentiate between humans and self-replicating/self-sustaining machines. Zombie movies would not be very successful if the majority carried your definition at heart =)
– kubanczyk (Sep 19 at 4:58): @immibis Did you read the grey goo article on Wikipedia that this answer references? The term refers to unintelligent (nano)machines running amok, not to any intelligent behavior. So I'd say no, humans are not it (and neither is AI), since we didn't eat Albert Einstein when we could.
– DukeZhou♦ (Sep 20 at 20:53): @kubanczyk The fundamental meaning of the term "intelligence" seems widely misunderstood, both in academia and among the general public. Intelligence is a spectrum, generally relative (to other decision-making mechanisms), and is based on the utility of any given decision in the context of a problem. So grey goo would be intelligent, just that the intelligence would be limited and narrow.
I would say the biggest real threat is the unbalancing and disruption we are already seeing. The chances of putting 90% of the country out of work are real, and the results (an even more uneven distribution of wealth) are terrifying if you think them through.
Comments:
– Programmdude (Sep 17 at 0:44): People said the same thing during the industrial revolution, which made most farming jobs redundant. While you may not be wrong, and it is something I worry about personally, research studying trends shows this may not be a concern, and it's probable new jobs will open up.
– Mayo (Sep 17 at 16:13): @Programmdude I think there is a fundamental difference between the changes of the industrial revolution, or even the elimination of secretarial jobs through the advent of the PC, and what will happen in the coming decades.
– Bill K (Sep 17 at 16:18): @Programmdude And the people were right. The industrial revolution did change everything about the way people live; it was extremely disruptive in terms of the distribution of wealth and the ability of people to exist on a farm income. From the other point of view: slave owners looking back from a few hundred years in the future will probably not see the effects of AI on this period as disruptive, since it formed their situation.
– Ray (Sep 18 at 15:34): @BillK I was with you right up until the part about slave owners. You do know that AIs aren't self-aware, right?
– Bill K (Sep 18 at 16:16): @Ray I didn't mean the AIs, I meant the people who controlled the AIs (and would therefore have all the wealth); really it was just a way to point out that things may be incomprehensibly different to us, but it wouldn't feel that different looking back.
I have an example which goes in somewhat the opposite direction of the public's fears, but which is a very real thing that I already see happening. It is not AI-specific, but I think it will get worse through AI. It is the problem of humans blindly trusting AI conclusions in critical applications.
We have many areas in which human experts are supposed to make a decision. Take for example medicine - should we give medication X or medication Y? The situations I have in mind are frequently complex problems (in the Cynefin sense) where it is a really good thing to have somebody pay attention very closely and use lots of expertise, and the outcome really matters.
There is a demand for medical informaticians to write decision support systems for this kind of problem in medicine (and I suppose for the same type of problem in other domains). They do their best, but the expectation is that a human expert will always consider the system's suggestion as just one more opinion when making the decision. In many cases, it would be irresponsible to promise anything else, given the state of knowledge and the resources available to the developers. A typical example would be the use of computer vision in radiomics: a patient gets a CT scan and the AI has to process the image and decide whether the patient has a tumor.
Of course, the AI is not perfect. Even when measured against the gold standard, it never achieves 100% accuracy. And then there are all the cases where it performs well against its own goal metrics, but the problem was so complex that the goal metric doesn't capture it well - I can't think of an example in the CT context, but I guess we see it even here on SE, where the algorithms favor popularity in posts, which is an imperfect proxy for factual correctness.
You were probably reading that last paragraph and nodding along: "Yeah, I learned that in the first introductory ML course I took." Guess what? Physicians never took an introductory ML course. They rarely have enough statistical literacy to understand the conclusions of papers appearing in medical journals. When they are talking to their 27th patient, 7 hours into their 16-hour shift, hungry and emotionally drained, and the CT doesn't look all that clear-cut, but the computer says "it's not a malignancy", they don't take ten more minutes to concentrate on the image, or look it up in a textbook, or consult a colleague. They just go with what the computer says, grateful that their cognitive load is not skyrocketing yet again. So they turn from being experts into people who read something off a screen. Worse, in some hospitals the administration not only trusts computers, it has also found that they are convenient scapegoats. So when a physician has a hunch which goes against the computer's output, it becomes difficult for them to act on that hunch and to defend their choice to override the AI's opinion.
AIs are powerful and useful tools, but there will always be tasks where they can't replace the tool-wielder.
Comments:
– craq (Sep 18 at 4:57): If you're looking for more examples, the controversy around using machine learning to predict reoffending rates of applicants for bail or parole is a good one. I agree that we shouldn't expect doctors and judges to have the level of statistical expertise needed to understand AI, in addition to their medical and legal expertise. AI designers should be aware of the fallibility of their algorithms, and provide clear guidance to their users. Maybe tell the doctor where to look on the CT scan instead of directly giving them the result.
This is only intended as a complement to the other answers, so I will not discuss the possibility of AI willingly trying to enslave humanity.
But a different risk is already here. I would call it unmastered technology. I have been taught science and technology, and IMHO, AI by itself has no notion of good and evil, nor of freedom. But it is built and used by human beings, and because of that, non-rational behaviour can be involved.
I would start with a real-life example more related to general IT than to AI: viruses and other malware. Computers are rather stupid machines that are good at processing data quickly, so most people rely on them. And some (bad) people develop malware that disrupts the correct behaviour of computers. We all know that it can have terrible effects on small to medium organizations that are not well prepared for a computer loss.
AI is computer-based, so it is vulnerable to computer-type attacks. My example here would be an AI-driven car. The technology is almost ready to work. But imagine the effect of malware making the car try to attack other people on the road. Even without direct access to the AI's code, it can be attacked through side channels. For example, it uses cameras to read road signs. But because of the way machine learning is implemented, AI generally does not analyse a scene the same way a human being does. Researchers have shown that it is possible to alter a sign in such a way that a normal human will still see the original sign, but an AI will see a different one. Imagine now that the sign is the road-priority sign...
What I mean is that even if the AI has no evil intent, bad guys can try to make it behave badly. And the more important the actions delegated to AI (medicine, cars, planes, not to mention bombs), the higher the risk. Said differently, I do not really fear the AI for itself, but for the way it can be used by humans.
I think one of the most real (i.e. related to current, existing AIs) risks is blindly relying on unsupervised AIs, for two reasons.
1. AI systems may degrade
Physical errors in AI systems may start producing wildly wrong results in regions they were not tested for, because the physical system starts providing wrong values. This is sometimes redeemed by self-testing and redundancy, but it still requires occasional human supervision.
Self-learning AIs also have a software weakness: their weight networks or statistical representations may approach local minima where they are stuck with one wrong result.
2. AI systems are biased
This is fortunately discussed frequently, but worth mentioning: AI systems' classification of inputs is often biased because the training/testing datasets were biased as well. This results in AIs not recognizing people of certain ethnicities, to take a more obvious example. However, there are less obvious cases that may only be discovered after a bad accident, such as an AI not recognizing certain data and accidentally starting a fire in a factory, breaking equipment or hurting people.
Comments:
– laancelot (Sep 18 at 13:33): This is a good, contemporary answer. "Black box" AI such as neural networks are impossible to test in an absolute manner, which makes them less than 100% predictable - and by extension, less than 100% reliable. We never know when an AI will develop an alternative strategy to a given problem, and how this alternative strategy will affect us, and this is a really huge issue if we want to rely on AI for important tasks like driving cars or managing resources.
If a robot works like a human-machine interface, the device is the same as a remote-controlled car: it's possible to discuss things with the operator behind the joystick and negotiate about the desired behavior. Remote-controlled robots are safe inventions because their actions can be traced back to humans and their motivation can be anticipated. They can be used to improve daily life, and it's fun to play with them.
In contrast, some robots aren't controlled by joysticks but work with an internal dice generator. The dice toy is known for its social role in gambling, but it also has a mystical meaning. Usually, a random generator is strongly connected with chaotic behavior controlled by dark forces outside the influence of humans. An electronic dice built into a robot and improved with a learning algorithm is the opposite of a human-machine interface, and it's a potential troublemaker, because the randomly controlled robot will play games with humans that can't be anticipated. It's not possible to predict the next number of a dice, so the robot will behave abruptly as well.
The connection between randomly controlled games and negative social impact is explained in the following quote: "In many traditional non-Western societies gamblers may pray to the gods for success and explain wins and losses in terms of divine will." Binde, Per. "Gambling and religion: Histories of concord and conflict." Journal of Gambling Issues 20 (2007): 145-165.
Human beings currently exist in an ecological-economic niche of "the thing that thinks".
AI is also a thing that thinks, so it will be invading our ecological-economic niche. In both ecology and economics, having something else occupy your niche is not a great plan for continued survival.
How exactly human survival is compromised by this is going to be pretty chaotic. There are a bunch of plausible ways that AI could endanger human survival as a species, or even as a dominant life form.
Suppose there is a strong AI without "super ethics" which is cheaper to manufacture than a human (including manufacturing a "body" or way of manipulating the world), and as smart or smarter than a human.
This is a case where we start competing with that AI for resources. It will happen on microeconomic scales (do we hire a human, or buy/build/rent/hire an AI to solve this problem?). Depending on the rate at which AIs become cheap and/or smarter than people, this can happen slowly (maybe an industry at a time) or extremely fast.
In a capitalist competition, those that don't move over to the cheaper AIs end up out-competed.
Now, in the short term, if the AI's advantages are only marginal, the high cost of educating humans for 20-odd years before they become productive could make this process slower. In this case, it might be worth paying a Doctor above starvation wages to diagnose disease instead of an AI, but it probably isn't worth paying off their student loans. So new human Doctors would rapidly stop being trained, and existing Doctors would be impoverished. Then over 20-30 years AI would completely replace Doctors for diagnostic purposes.
If the AI's advantages are large, then it would be rapid. Doctors wouldn't even be worth paying poverty level wages to do human diagnostics. You can see something like that happening with muscle-based farming when gasoline-based farming took over.
During past industrial revolutions, the fact that humans were able to think meant that you could repurpose surplus human workers to do other things: manufacturing-line work, service-economy jobs, computer programming, and so on. But in this model, AI is cheaper to train and build, and as smart or smarter than humans at that kind of job.
As evidenced by the ethanol-induced Arab spring, crops and cropland can be used to fuel both machines and humans. When machines are more efficient in terms of turning cropland into useful work, you'll start seeing the price of food climb. This typically leads to riots, as people really don't like starving to death and are willing to risk their own lives to overthrow the government in order to prevent this.
You can mollify the people by providing subsidized food and the like. So long as this isn't economically crippling (ie, if expensive enough, it could result in you being out-competed by other places that don't do this), this is merely politically unstable.
As an alternative, in the short term, the ownership caste who is receiving profits from the increasingly efficient AI-run economy can pay for a police or military caste to put down said riots. This requires that the police/military castes be upper lower to middle class in standards of living, in order to ensure continued loyalty -- you don't want them joining the rioters.
So one of the profit centers you can put AI towards is AI based military and policing. Drones that deliver lethal and non-lethal ordnance based off of processing visual and other data feeds can reduce the number of middle-class police/military needed to put down food-price triggered riots or other instability. As we have already assumed said AIs can have bodies and training cheaper than a biological human, this can also increase the amount of force you can deploy per dollar spent.
At this point, we are talking about a mostly AI run police and military being used to keep starving humans from overthrowing the AI run economy and seizing the means of production from the more efficient use it is currently being put to.
The vestigial humans who "own" the system at the top are making locally rational decisions to optimize their wealth and power. They may or may not persist for long; so long as they drain a relatively small amount of resources and don't mess up the AI run economy, there won't be much selection pressure to get rid of them. On the other hand, as they are contributing nothing of value, they position "at the top" is politically unstable.
This process assumed a "strong" general AI. Narrower AIs can pull this off in pieces. A cheap, effective diagnostic computer could reduce most Doctors into poverty in a surprisingly short period of time, for example. Self driving cars could swallow 5%-10% of the economy. Information technology is already swallowing the retail sector with modest AI.
It is said that every technological advancement leads to more and better jobs for humans. And this has been true for the last 300+ years.
But prior to 1900, it was also true that every technological advancement led to more and better jobs for horses. Then the ICE and automobile arrived, and now there are far fewer working horses; the remaining horses are basically the equivalent of human personal servants: kept for the novelty of "wow, cool, horse" and the fun of riding and controlling a huge animal.
In addition to the many answers already provided, I would bring up the issue of adversarial examples in the area of image models.
Adversarial examples are images that have been perturbed with specifically designed noise that is often imperceptible to a human observer, but that strongly alters the prediction of a model (a minimal sketch of such an attack follows the examples below).
Examples include:
Affecting the predicted diagnosis in a chest x-ray
Affecting the detection of road signs necessary for autonomous vehicles.
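As a concrete sketch of how such perturbations are typically generated, here is a minimal Fast Gradient Sign Method (FGSM) attack (Goodfellow et al., 2014) written with PyTorch; model, image, and label are placeholders for whatever classifier and input you are probing, and the epsilon value is an arbitrary example:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Craft an adversarial example by nudging each pixel in the direction
        that increases the classification loss."""
        x = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # small, often imperceptible step
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

Against an undefended image classifier, even small epsilon values are often enough to flip the predicted class while the perturbed image looks unchanged to a human.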
AI that is used to solve a real-world problem could pose a risk to humanity without requiring sentience; it also requires a degree of human stupidity.
Unlike humans, an AI would find the most logical answer without the constraints of emotion, ethics, or even greed... only logic. Ask this AI how to solve a problem that humans created (for example, climate change) and its solution might be to eliminate the entire human race to protect the planet. Obviously this would require giving the AI the ability to act upon its output, which brings me to my earlier point: human stupidity.
Artificial intelligence can harm us in any of the ways natural (human) intelligence can. The distinction between natural and artificial intelligence will vanish when humans start augmenting themselves more intimately. Intelligence may no longer characterize identity and will become a limitless possession. The harm caused will be as much as humans can endure in order to preserve their evolving self-identity.
Few people realize that our global economy should be considered an AI:
- Money transactions are the signals over a neural net. The nodes in the neural net are the different corporations or private persons paying or receiving money.
- It is man-made, so it qualifies as artificial.
This neural network is better at its task than humans: capitalism has always won against economies planned by humans (planned economies).
Is this neural net dangerous? The answer might differ if you are a CEO earning big versus a fisherman on a river polluted by corporate waste.
How did this AI become dangerous? You could answer that it is because of human greed. Our creation reflects ourselves. In other words, we did not train our neural net to behave well. Instead of training the neural net to improve the quality of life for all humans, we trained it to make rich folks richer.
Would it be easy to train this AI to be no longer dangerous? Maybe not; maybe some AIs are just larger than life. It is just survival of the fittest.
$endgroup$
add a comment
|
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "658"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fai.stackexchange.com%2fquestions%2f15449%2fhow-could-artificial-intelligence-harm-us%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
14 Answers
14
active
oldest
votes
14 Answers
14
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
tl;dr
There are many valid reasons why people might fear (or better be concerned about) AI, not all involve robots and apocalyptic scenarios.
To better illustrate these concerns, I'll try to split them into three categories.
Conscious AI
This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The terminator", "The Matrix", "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, robot", which was also adapted as a movie).
The basic premise under most of these works are that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on "ambiguous intelligence" (which I think is more realistic).
In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to "think" and become conscious.
Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.
Using AI with malicious intent
Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today, that don't involve robots!
The second category I want to focus a bit more on is several malicious uses of today's AI.
I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:
DeepFake: a technique for imposing someones face on an image a video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3
With the use of mass surveillance systems and facial recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though when we think of mass surveillance we think of China, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984.
Influencing people through social media. Aside from recognizing user's tastes with the goal of targeted marketing and add placements (a common practice by many internet companies), AI can be used malisciously to influence people's voting (among other things). Sources: 1, 2, 3.
Hacking.
Military applications, e.g. drone attacks, missile targeting systems.
Adverse effects of AI
This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous is that these effects, while harmful, aren't done intentionally; rather they occur with the development of AI. Some examples are:
Jobs becoming redundant. As AI becomes better, many jobs will be replaced by AI. Unfortunately there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same).
Reinforcing the bias in our data. This is a very interesting category, as AI (and especially Neural Networks) are only as good as the data they are trained on and have a tendency of perpetuating and even enhancing different forms of social biases, already existing in the data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
$endgroup$
4
$begingroup$
I don't understand why "Jobs becoming redundant" is a serious concern. How beautiful the world would be if noone (or at least vast majority of humans) would not need to work and could focus instead on their hobbies and enjoying life.
$endgroup$
– kukis
Sep 16 at 19:40
4
$begingroup$
@kukis How can you get the food, house, etc., without a job, unless you are already rich? A job means survival for most people.
$endgroup$
– nbro
Sep 16 at 20:37
6
$begingroup$
Regarding jobs becoming redundant, it seems like we can do something about it, i.e. revamp our economic models to not rely on humans having jobs. I mean, if our economic systems would break down upon being flooded with ample cheap labor, then they're obviously flawed. And given that we expect this flaw to become dangerous in the foreseeable future, it ought to be fixed.
$endgroup$
– Nat
Sep 17 at 3:19
2
$begingroup$
Another example of where AI could cause major issues is with stock trading. The vast majority of stock trades these days are done by increasingly competitive AI's, which can react far faster than human traders. But it's gotten to the point where even the humans who wrote them don't necessarily understand the reason these AI's make the decisions they do, and there have been some catastrophic effects on the markets from stock-predicting algorithms gone awry.
$endgroup$
– Darrel Hoffman
Sep 18 at 14:54
8
$begingroup$
@penelope not every job that AI tries to replace is "low-interest". I'd argue that there are a lot of high-demand jobs that could be replaced in the (not so distant) future. Some examples are doctors, traders and pilots. If AI keeps advancing and keeps getting better at diagnosing diseases, it wouldn't be unreasonable to think that doctor jobs will be cut down.
$endgroup$
– Djib2011
Sep 19 at 10:59
|
show 16 more comments
$begingroup$
tl;dr
There are many valid reasons why people might fear (or better be concerned about) AI, not all involve robots and apocalyptic scenarios.
To better illustrate these concerns, I'll try to split them into three categories.
Conscious AI
This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The terminator", "The Matrix", "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, robot", which was also adapted as a movie).
The basic premise under most of these works are that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on "ambiguous intelligence" (which I think is more realistic).
In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to "think" and become conscious.
Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.
Using AI with malicious intent
Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today, that don't involve robots!
The second category I want to focus a bit more on is several malicious uses of today's AI.
I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:
DeepFake: a technique for imposing someones face on an image a video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3
With the use of mass surveillance systems and facial recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though when we think of mass surveillance we think of China, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984.
Influencing people through social media. Aside from recognizing user's tastes with the goal of targeted marketing and add placements (a common practice by many internet companies), AI can be used malisciously to influence people's voting (among other things). Sources: 1, 2, 3.
Hacking.
Military applications, e.g. drone attacks, missile targeting systems.
Adverse effects of AI
This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous is that these effects, while harmful, aren't done intentionally; rather they occur with the development of AI. Some examples are:
Jobs becoming redundant. As AI becomes better, many jobs will be replaced by AI. Unfortunately there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same).
Reinforcing the bias in our data. This is a very interesting category, as AI systems (and especially neural networks) are only as good as the data they are trained on, and they have a tendency to perpetuate and even amplify the social biases already present in that data; a minimal sketch of the mechanism follows below. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
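To make the bias-reinforcement mechanism concrete, here is a minimal, hypothetical sketch (the data, feature names and numbers are all invented for illustration): a classifier is trained on historically biased hiring decisions; the protected attribute is never given to the model as a feature, yet the model reproduces the disparity through a correlated proxy feature.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # protected attribute (0 or 1), not a model input
skill = rng.normal(0, 1, n)                   # what we actually want to hire on
zip_code = group + rng.normal(0, 0.3, n)      # proxy feature correlated with group
# Historical labels: biased decision-makers hired group 1 more often at equal skill.
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, zip_code])        # note: "group" itself is not a feature
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate for group {g}: {rate:.2f}")
# The predicted hire rates differ by group: the model absorbed the historical bias
# through the proxy feature, without ever seeing the protected attribute directly.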
$endgroup$
4
$begingroup$
I don't understand why "jobs becoming redundant" is a serious concern. How beautiful the world would be if no one (or at least the vast majority of humans) needed to work and could instead focus on their hobbies and enjoying life.
$endgroup$
– kukis
Sep 16 at 19:40
4
$begingroup$
@kukis How can you get food, housing, etc., without a job, unless you are already rich? A job means survival for most people.
$endgroup$
– nbro
Sep 16 at 20:37
6
$begingroup$
Regarding jobs becoming redundant, it seems like we can do something about it, i.e. revamp our economic models to not rely on humans having jobs. I mean, if our economic systems would break down upon being flooded with ample cheap labor, then they're obviously flawed. And given that we expect this flaw to become dangerous in the foreseeable future, it ought to be fixed.
$endgroup$
– Nat
Sep 17 at 3:19
2
$begingroup$
Another example of where AI could cause major issues is stock trading. The vast majority of stock trades these days are made by increasingly competitive AIs, which can react far faster than human traders. But it's gotten to the point where even the humans who wrote them don't necessarily understand why these AIs make the decisions they do, and there have been some catastrophic effects on the markets from stock-predicting algorithms gone awry.
$endgroup$
– Darrel Hoffman
Sep 18 at 14:54
8
$begingroup$
@penelope not every job that AI tries to replace is "low-interest". I'd argue that there are a lot of high-demand jobs that could be replaced in the (not so distant) future. Some examples are doctors, traders and pilots. If AI keeps advancing and keeps getting better at diagnosing diseases, it wouldn't be unreasonable to think that doctor jobs will be cut down.
$endgroup$
– Djib2011
Sep 19 at 10:59
|
show 16 more comments
$begingroup$
Short term
Physical accidents, e.g. due to industrial machinery, aircraft autopilot, self-driving cars. Especially in the case of unusual situations such as extreme weather or sensor failure. Typically an AI will function poorly under conditions where it has not been extensively tested.
Social impacts such as reduced job availability and barriers for the underprivileged with respect to loans, insurance, and parole.
Recommendation engines are manipulating us more and more to change our behaviours (as well as reinforcing our own "small world" bubbles); a toy sketch of this feedback loop follows after this list. Recommendation engines also routinely serve up inappropriate content of various sorts to young children, often because content creators (e.g. on YouTube) use the right keyword stuffing to appear child-friendly.
Political manipulation... Enough said, I think.
Plausible deniability of privacy invasion. Now that AI can read your email and even make phone calls for you, it's easy for someone to have humans act on your personal information and claim that they got a computer to do it.
Turning war into a video game, that is, replacing soldiers with machines being operated remotely by someone who is not in any danger and is far removed from his/her casualties.
Lack of transparency. We are trusting machines to make decisions with very little means of getting the justification behind a decision.
Resource consumption and pollution. This is not just an AI problem; however, every improvement in AI creates more demand for Big Data, and together these ramp up the need for storage, processing, and networking. On top of the electricity and rare-mineral consumption, the infrastructure needs to be disposed of after its several-year lifespan.
Surveillance — with the ubiquity of smartphones and listening devices, there is a gold mine of data but too much to sift through every piece. Get an AI to sift through it, of course!
Cybersecurity — cybercriminals are increasingly leveraging AI to attack their targets.
Did I mention that all of these are in full swing already?
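As a hypothetical illustration of the "small world" bubble effect from the recommendation-engine point above (all numbers and category names are invented), here is a toy simulation: the user likes every category equally, yet an engagement-greedy recommender still collapses what they see into a single category, because the feedback loop feeds on its own past recommendations.

import random
from collections import Counter

random.seed(1)
CATEGORIES = ["news", "sports", "music", "gaming", "science"]

def engagement_greedy_recommender(click_counts, explore=0.1):
    # Mostly recommend whatever has received the most clicks so far; rarely explore.
    if not click_counts or random.random() < explore:
        return random.choice(CATEGORIES)
    return click_counts.most_common(1)[0][0]

clicks = Counter()
for step in range(1000):
    recommendation = engagement_greedy_recommender(clicks)
    if random.random() < 0.5:        # this user clicks on anything half the time
        clicks[recommendation] += 1

print(clicks)
# One category ends up with the vast majority of clicks: the bubble is created by
# the recommender's feedback loop, not by any preference of the user.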
Long Term
Although there is no clear line between AI and AGI, this section is more about what happens when we go further towards AGI. I see two alternatives:
- Either we develop AGI as a result of our improved understanding of the nature of intelligence,
- or we slap together something that seems to work but we don't understand very well, much like a lot of machine learning right now.
In the first case, if an AI "goes rogue" we can build other AIs to outwit and neutralise it. In the second case, we can't, and we're doomed. AIs will be a new life form and we may go extinct.
Here are some potential problems:
Copy and paste. One problem with AGI is that it could quite conceivably run on a desktop computer, which creates a number of problems:
Script Kiddies — people could download an AI and set up the parameters in a destructive way. Relatedly,
Criminal or terrorist groups would be able to configure an AI to their liking. You don't need to find an expert on bomb making or bioweapons if you can download an AI, tell it to do some research and then give you step-by-step instructions.
Self-replicating AI — there are plenty of computer games about this. AI breaks loose and spreads like a virus. The more processing power, the better able it is to protect itself and spread further.
Invasion of computing resources. It is likely that more computing power would be beneficial to an AI. An AI might buy or steal server resources, or the resources of desktops and mobile devices. Taken to an extreme, this could mean that all our devices simply become unusable, which would wreak havoc on the world immediately. It could also mean massive electricity consumption (and it would be hard to "pull the plug", because power plants are computer-controlled!)
Automated factories. An AGI wishing to gain more of a physical presence in the world could take over factories to produce robots, which could build new factories and essentially create bodies for itself.
These are rather philosophical considerations, but some would argue that AI would destroy what makes us human:
Inferiority. What if plenty of AI entities were smarter, faster, more reliable and more creative than the best humans?
Pointlessness. With robots replacing the need for physical labour and AIs replacing the need for intellectual labour, we will really have nothing to do. Nobody's going to get the Nobel Prize again because the AI will already be ahead. Why even get educated in the first place?
Monoculture/stagnation — in various scenarios (such as a single "benevolent dictator" AGI) society could become fixed in a perpetual pattern without new ideas or any sort of change (pleasant though it may be). Basically, Brave New World.
I think AGI is coming and we need to be mindful of these problems so that we can minimise them.
$endgroup$
1
$begingroup$
I think the kind of AI that's capable of reprogramming factories to build robots for itself is a long way away. Modern "AI" is just really sophisticated pattern recognition.
$endgroup$
– user253751
Sep 17 at 17:19
$begingroup$
I said "long term" and "AGI". AGI is, by definition, well beyond sophisticated pattern recognition. And although "sophisticated pattern recognition" is far and away the most common thing used in real-world applications, there is already plenty of work in other directions (particularly problem decomposition/action planning, which IMO is the lynchpin of these types of scenarios.)
$endgroup$
– Artelius
Sep 19 at 9:54
add a comment
|
$begingroup$
In addition to the other answers, I would like to add the nuking-cookie-factory example:
Machine learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI running a cookie factory. The goal they implement is to sell as many cookies as possible for the highest profit margin.
Now, imagine an AI which is sufficiently powerful. This AI will notice that if it nukes all other cookie factories, everybody has to buy cookies from its factory, making sales and profits rise.
So the human error here is that no penalty for using violence was included in the algorithm's objective. This is easily overlooked because humans didn't expect the algorithm to come to this conclusion. A toy sketch of this kind of objective misspecification follows below.
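Here is a minimal, hypothetical sketch of that failure mode (the actions, profit numbers and penalty weight are all invented): a planner that maximizes the profit-only objective picks the destructive plan, and adding the single missing penalty term changes its choice.

# Toy objective-misspecification sketch: the planner maximizes whatever score
# function it is given; it has no notion of "violence" unless we encode one.
ACTIONS = {
    "improve recipe":           {"profit": 5,  "violence": 0},
    "run ad campaign":          {"profit": 8,  "violence": 0},
    "nuke competing factories": {"profit": 50, "violence": 100},
}

def best_action(score):
    # Pick the action whose predicted outcome scores highest under the given objective.
    return max(ACTIONS, key=lambda a: score(ACTIONS[a]))

def naive_score(outcome):
    return outcome["profit"]                                 # objective as originally specified

def safe_score(outcome):
    return outcome["profit"] - 10 * outcome["violence"]      # with the missing penalty term added

print(best_action(naive_score))   # -> "nuke competing factories"
print(best_action(safe_score))    # -> "run ad campaign"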
$endgroup$
8
$begingroup$
This reminds me of a real-world example I saw on reddit once where someone was training an AI to climb some stairs in Unity. It discovered that it could press itself into the ground with a lot of force and the physics would glitch, causing it to fly into the air and be the fastest to the top.
$endgroup$
– GammaGames
Sep 16 at 21:48
2
$begingroup$
Or, worse, it'd decide that humans are made out of atoms that would be better used to make cookies out of.
$endgroup$
– nick012000
Sep 17 at 10:52
$begingroup$
I've heard this argument before. One of the fallacies of predicting an AI doomsday is that we can't predict what the AI will do. It's entirely possible the AI would recognize that nuking other cookie companies might throw off the global economy and destroy any potential demand for cookies... Law of economics, supply AND demand
$endgroup$
– Zakk Diaz
Sep 20 at 18:52
add a comment
|
$begingroup$
My favorite scenario for harm by AI involves not high intelligence, but low intelligence. Specifically, the grey goo hypothesis.
This is where a self-replicating, automated process runs amok and converts all resources into copies of itself.
The point here is that the AI is not "smart" in the sense of having high intelligence or general intelligence; it is merely very good at a single thing and has the ability to replicate exponentially (a toy simulation of this dynamic follows below).
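A tiny, hypothetical simulation of that exponential-replication dynamic (all quantities are invented): each replicator consumes one unit of raw material to build one copy of itself per step, so a resource pool of a trillion units is gone after only a few dozen doublings.

# Toy grey-goo dynamic: unintelligent replicators doubling until the pool is exhausted.
resources = 1e12          # arbitrary units of raw material available
cost_per_copy = 1.0       # material consumed to build one new replicator
replicators = 1
steps = 0
while resources >= replicators * cost_per_copy:
    resources -= replicators * cost_per_copy   # every replicator builds one copy
    replicators *= 2
    steps += 1
print(steps, replicators, resources)
# After roughly 40 doublings (2**40 is about 10**12) the remaining material can no
# longer support another generation, even though each replicator is utterly dumb.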
$endgroup$
3
$begingroup$
FWIW, humans are already grey goo. We're selfish grey goo that doesn't want to be replaced by an even more efficient grey goo.
$endgroup$
– user253751
Sep 17 at 17:19
1
$begingroup$
@immibis That is of course a philosophical POV, not fact. There are plenty of people who differentiate between humans and self-replicating / self sustaining machines. Zombie movies would not be very successful if a majority carried your definition at heart =)
$endgroup$
– Stian Yttervik
Sep 18 at 6:59
1
$begingroup$
@immibis Did you read the gray goo article on Wikipedia that this answer references? The term refers to unintelligent (nano)machines going amok, not to any intelligent behavior. So I'd say, no, humans are not it (and neither is AI), since we didn't eat Albert Einstein when we could.
$endgroup$
– kubanczyk
Sep 19 at 4:58
$begingroup$
@kubanczyk the fundamental meaning of the term "intelligence" seems to be widely misunderstood, both in academia and among the general public. Intelligence is a spectrum, generally relative (to other decision-making mechanisms), and is based on the utility of any given decision in the context of a problem. So grey goo would be intelligent, just that its intelligence would be limited and narrow.
$endgroup$
– DukeZhou♦
Sep 20 at 20:53
add a comment
|
$begingroup$
I would say the biggest real threat would be the unbalancing/disruption we are already seeing. The chance of putting 90% of the country out of work is real, and the results (an even more uneven distribution of wealth) are terrifying if you think them through.
$endgroup$
answered Sep 16 at 18:31
Bill K
151 reputation, 3 bronze badges
2
$begingroup$
People said the same thing during the industrial revolution, which made most farming jobs redundant. While you may not be wrong, and it is something that I am worried about personally, research studying trends shows this may not be a concern, and it's probable new jobs will open up.
$endgroup$
– Programmdude
Sep 17 at 0:44
$begingroup$
@Programmdude - I think there is a fundamental difference between the industrial revolution changes (and even the elimination of secretarial jobs through the advent of the PC) and what will happen in the coming decades.
$endgroup$
– Mayo
Sep 17 at 16:13
2
$begingroup$
@Programmdude And the people were right. The industrial revolution did change everything about the way people live, it was extremely disruptive in terms of distribution of wealth and the ability of people to exist on a farm income. From the other point of view: The slave owners looking back from a few hundred years in the future will probably not see the effects of AI on this period as disruptive since it formed their situation.
$endgroup$
– Bill K
Sep 17 at 16:18
$begingroup$
@BillK I was with you right up until the part about slave owners. You do know that AIs aren't self-aware, right?
$endgroup$
– Ray
Sep 18 at 15:34
$begingroup$
@Ray I didn't mean the AIs, I meant the people who controlled the AIs (And would therefore have all the wealth), and really it was just a way to point out that things may be incomprehensibly different to us but it wouldn't feel that different looking back.
$endgroup$
– Bill K
Sep 18 at 16:16
|
show 2 more comments
$begingroup$
I have an example which goes in kinda the opposite direction of the public's fears, but is a very real thing, which I already see happening. It is not AI-specific, but I think it will get worse through AI. It is the problem of humans trusting the AI conclusions blindly in critical applications.
We have many areas in which human experts are supposed to make a decision. Take for example medicine - should we give medication X or medication Y? The situations I have in mind are frequently complex problems (in the Cynefin sense) where it is a really good thing to have somebody pay attention very closely and use lots of expertise, and the outcome really matters.
There is a demand for medical informaticians to write decision support systems for this kind of problem in medicine (and I suppose for the same type of problem in other domains). They do their best, but the expectation is always that a human expert will consider the system's suggestion as just one more opinion when making the decision. In many cases, it would be irresponsible to promise anything else, given the state of knowledge and the resources available to the developers. A typical example would be the use of computer vision in radiomics: a patient gets a CT scan and the AI has to process the image and decide whether the patient has a tumor.
Of course, the AI is not perfect. Even when measured against the gold standard, it never achieves 100% accuracy. And then there are all the cases where it performs well against its own goal metrics, but the problem was so complex that the goal metric doesn't capture it well - I can't think of an example in the CT context, but I guess we see it even here on SE, where the algorithms favor popularity in posts, which is an imperfect proxy for factual correctness.
You were probably reading that last paragraph and nodding along, "Yeah, I learned that in the first introductory ML course I took". Guess what? Physicians never took an introductory ML course. They rarely have enough statistical literacy to understand the conclusions of papers appearing in medical journals. When they are talking to their 27th patient, 7 hours into their 16-hour shift, hungry and emotionally drained, and the CT doesn't look all that clear-cut, but the computer says "it's not a malignancy", they don't take ten more minutes to concentrate on the image, or look up a textbook, or consult with a colleague. They just go with what the computer says, grateful that their cognitive load is not skyrocketing yet again. So they turn from being experts into being people who read something off a screen. Worse, in some hospitals the administration not only trusts computers, it has also found out that they are convenient scapegoats. So when a physician has a hunch that goes against the computer's output, it becomes difficult for them to act on that hunch and to defend their decision to override the AI's opinion.
AIs are powerful and useful tools, but there will always be tasks where they can't replace the tool-wielder.
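To make the point about imperfect accuracy concrete, here is a minimal back-of-the-envelope sketch (all numbers are assumed and purely illustrative) of why even a seemingly excellent tumor classifier still needs an attentive human: at low prevalence, most of its alarms are false, and it still misses some real malignancies.

    # Screening arithmetic with assumed, illustrative numbers.
    prevalence  = 0.01    # 1% of scanned patients actually have a malignancy
    sensitivity = 0.95    # fraction of real malignancies the model flags
    specificity = 0.95    # fraction of healthy scans the model correctly clears

    patients = 100_000
    sick = patients * prevalence
    healthy = patients - sick

    true_pos  = sick * sensitivity           # malignancies correctly flagged
    false_neg = sick - true_pos              # malignancies labelled "not a malignancy"
    false_pos = healthy * (1 - specificity)  # healthy scans flagged anyway

    ppv = true_pos / (true_pos + false_pos)  # chance a flagged scan is really malignant
    print(f"missed malignancies per 100k scans: {false_neg:.0f}")  # 50
    print(f"positive predictive value: {ppv:.2f}")                 # about 0.16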
$endgroup$
answered Sep 17 at 11:18
rumtscho
151 reputation, 2 bronze badges
$begingroup$
If you're looking for more examples, the controversy around using machine learning to predict reoffending rates of applicants for bail or parole is a good one. I agree that we shouldn't expect doctors and judges to have the levels of statistical expertise needed to understand AI, in addition to their medical and legal expertise. AI designers should be aware of the fallibility of their algorithms, and provide clear guidance to its users. Maybe tell the doctor where to look on the CT scan instead of directly giving them the result.
$endgroup$
– craq
Sep 18 at 4:57
add a comment
|
$begingroup$
This is only intended to complement the other answers, so I will not discuss the possibility of an AI willingly trying to enslave humanity.
But a different risk is already here. I would call it unmastered technology. I have been taught science and technology, and IMHO, AI by itself has no notion of good and evil, nor of freedom. But it is built and used by human beings, and because of that, non-rational behaviour can be involved.
I would start with a real-life example related more to general IT than to AI: viruses and other malware. Computers are rather stupid machines that are good at quickly processing data, so most people rely on them. And some (bad) people develop malware that disrupts the correct behaviour of computers. We all know that this can have terrible effects on small to medium organizations that are not well prepared for a computer loss.
AI is computer based, so it is vulnerable to computer-type attacks. Here my example would be an AI-driven car. The technology is almost ready to work. But imagine the effect of malware that makes the car try to attack other people on the road. Even without direct access to the AI's code, it can be attacked through side channels. For example, it uses cameras to read traffic signs. But because of the way machine learning is implemented, the AI generally does not analyse a scene the same way a human being does. Researchers have shown that it is possible to alter a sign in a way that a normal human will still see the original sign, but an AI will see a different one. Imagine now that the sign is the road priority sign...
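As a sketch of the kind of attack those researchers describe, here is a minimal "fast gradient sign" perturbation in PyTorch; model (a sign classifier) and sign_image (a pixel tensor scaled to [0, 1]) are hypothetical stand-ins, and the epsilon value is an arbitrary illustration rather than a recipe for any particular system.

    # Minimal fast-gradient-sign perturbation (after Goodfellow et al.), illustration only.
    # `model` and `sign_image` are hypothetical placeholders, not a real system.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, sign_image, true_label, epsilon=0.03):
        """Nudge every pixel slightly in the direction that increases the classifier's loss,
        so the model is more likely to misread the sign while a human still sees the original."""
        x = sign_image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([true_label]))
        loss.backward()                         # gradient of the loss w.r.t. the pixels
        x_adv = x + epsilon * x.grad.sign()     # small, barely perceptible step per pixel
        return x_adv.clamp(0.0, 1.0).detach()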
What I mean is that even if the AI has no evil intent, bad guys can try to make it behave badly. And the more important the actions delegated to AI (medicine, cars, planes, not to speak of bombs), the higher the risk. Said differently, I do not really fear the AI for itself, but for the way it can be used by humans.
$endgroup$
answered Sep 17 at 12:19
Serge Ballesta
141 reputation, 2 bronze badges
add a comment
|
$begingroup$
I think one of the most real (i.e., related to current, existing AIs) risks lies in blindly relying on unsupervised AIs, for two reasons.
1. AI systems may degrade
Physical errors in AI systems may start producing wildly wrong results in regimes they were never tested for, because the physical system starts providing wrong values. This is sometimes mitigated by self-testing and redundancy, but it still requires occasional human supervision.
Self-learning AIs also have a software weakness - their weight networks or statistical representations may approach local minima where they get stuck with one wrong result.
2. AI systems are biased
This is fortunately discussed frequently, but it is worth mentioning: an AI system's classification of inputs is often biased because the training/testing datasets were biased as well. This results in AIs not recognizing people of certain ethnicities, to give a more obvious example. However, there are less obvious cases that may only be discovered after some bad accident, such as an AI not recognizing certain data and accidentally starting a fire in a factory, breaking equipment, or hurting people.
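As a small synthetic illustration (all numbers are assumed), an aggregate accuracy score can completely hide how badly a model serves an underrepresented group, which is exactly how this kind of bias slips through testing:

    # Synthetic example: overall accuracy looks fine while group B fares much worse.
    import numpy as np

    rng = np.random.default_rng(0)
    n_a, n_b = 950, 50            # group B makes up only 5% of the evaluation set
    acc_a, acc_b = 0.97, 0.60     # assumed per-group hit rates of some classifier
    correct = np.concatenate([rng.random(n_a) < acc_a, rng.random(n_b) < acc_b])
    group = np.array(["A"] * n_a + ["B"] * n_b)

    print(f"overall accuracy: {correct.mean():.2f}")                    # ~0.95, looks fine
    for g in ("A", "B"):
        print(f"group {g} accuracy: {correct[group == g].mean():.2f}")  # reveals the gap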
$endgroup$
answered Sep 17 at 14:10
Tomáš Zato
131 reputation, 3 bronze badges
$begingroup$
This is a good, contemporary answer. "Black box" AI such as neural networks are impossible to test in an absolute manner, which makes them less than 100% predictable - and by extension, less than 100% reliable. We never know when an AI will develop an alternative strategy to a given problem, and how this alternative strategy will affect us, and this is a really huge issue if we want to rely on AI for important tasks like driving cars or managing resources.
$endgroup$
– laancelot
Sep 18 at 13:33
add a comment
|
$begingroup$
If a robot is driven through a human-machine interface, the device is like a remote-controlled car. It's possible to discuss with the operator behind the joystick and negotiate about the desired behavior. Remote-controlled robots are safe inventions because their actions can be traced back to humans and their motivation can be anticipated. They can be used to improve daily life, and it's fun to play with them.
In contrast, some robots aren't controlled by joysticks but work with an internal dice generator. The dice toy is known for its social role in gambling, but it also has a mystical meaning. Usually, a random generator is strongly connected with chaotic behavior, which is controlled by dark forces outside the influence of humans. An electronic die built into a robot and improved with a learning algorithm is the opposite of a human-machine interface, but it's a potential troublemaker, because the randomly controlled robot will play games with humans that can't be anticipated. It's not possible to predict the next number of a die, and therefore the robot will behave abruptly as well.
The connection between randomly controlled games and negative social impact was explained in the following sentence.
quote: “In many traditional non-Western societies gamblers may pray to the gods for success and explain wins and losses in terms of divine will.“ Binde, Per. "Gambling and religion: Histories of concord and conflict." Journal of Gambling Issues 20 (2007): 145-165.
$endgroup$
answered Sep 16 at 8:28
Manuel Rodriguez
2,440 reputation, 1 gold badge, 4 silver badges, 28 bronze badges
add a comment
|
$begingroup$
Human beings currently exist in an ecological-economic niche of "the thing that thinks".
AI is also a thing that thinks, so it will be invading our ecological-economic niche. In both ecology and economics, having something else occupy your niche is not a great plan for continued survival.
How exactly Human survival is compromised by this is going to be pretty chaotic. There are going to be a bunch of plausible ways that AI could endanger human survival as a species, or even as a dominant life form.
Suppose there is a strong AI without "super ethics" which is cheaper to manufacture than a human (including manufacturing a "body" or way of manipulating the world), and as smart or smarter than a human.
This is a case where we start competing with that AI for resources. It will happen on microeconomic scales (do we hire a human, or buy/build/rent/hire an AI to solve this problem?). Depending on the rate at which AIs become cheap and/or smarter than people, this can happen slowly (maybe an industry at a time) or extremely fast.
In a capitalist competition, those that don't move over to the cheaper AIs end up out-competed.
Now, in the short term, if the AI's advantages are only marginal, the high cost of educating humans for 20-odd years before they become productive could make this process slower. In this case, it might be worth paying a Doctor above starvation wages to diagnose disease instead of an AI, but it probably isn't worth paying off their student loans. So new human Doctors would rapidly stop being trained, and existing Doctors would be impoverished. Then over 20-30 years AI would completely replace Doctors for diagnostic purposes.
If the AI's advantages are large, then it would be rapid. Doctors wouldn't even be worth paying poverty level wages to do human diagnostics. You can see something like that happening with muscle-based farming when gasoline-based farming took over.
During past industrial revolutions, the fact that humans where able to think means that you could repurpose surplus human workers to do other actions; manufacturing lines, service economy jobs, computer programming, etc. But in this model, AI is cheaper to train and build and as smart or smarter than humans at that kind of job.
As evidenced by the ethanol-induced Arab spring, crops and cropland can be used to fuel both machines and humans. When machines are more efficient in terms of turning cropland into useful work, you'll start seeing the price of food climb. This typically leads to riots, as people really don't like starving to death and are willing to risk their own lives to overthrow the government in order to prevent this.
You can mollify the people by providing subsidized food and the like. So long as this isn't economically crippling (ie, if expensive enough, it could result in you being out-competed by other places that don't do this), this is merely politically unstable.
As an alternative, in the short term, the ownership caste who is receiving profits from the increasingly efficient AI-run economy can pay for a police or military caste to put down said riots. This requires that the police/military castes be upper lower to middle class in standards of living, in order to ensure continued loyalty -- you don't want them joining the rioters.
So one of the profit centers you can put AI towards is AI based military and policing. Drones that deliver lethal and non-lethal ordnance based off of processing visual and other data feeds can reduce the number of middle-class police/military needed to put down food-price triggered riots or other instability. As we have already assumed said AIs can have bodies and training cheaper than a biological human, this can also increase the amount of force you can deploy per dollar spent.
At this point, we are talking about a mostly AI run police and military being used to keep starving humans from overthrowing the AI run economy and seizing the means of production from the more efficient use it is currently being put to.
The vestigial humans who "own" the system at the top are making locally rational decisions to optimize their wealth and power. They may or may not persist for long; so long as they drain a relatively small amount of resources and don't mess up the AI run economy, there won't be much selection pressure to get rid of them. On the other hand, as they are contributing nothing of value, they position "at the top" is politically unstable.
This process assumed a "strong" general AI. Narrower AIs can pull this off in pieces. A cheap, effective diagnostic computer could reduce most Doctors into poverty in a surprisingly short period of time, for example. Self driving cars could swallow 5%-10% of the economy. Information technology is already swallowing the retail sector with modest AI.
It is said that every technological advancement leads to more and better jobs for humans. And this has been true for the last 300+ years.
But prior to 1900, it was also true that every technological advancement led to more and better jobs for horses. Then the ICE and automobile arrived, and now there are far fewer working horses; the remaining horses are basically the equivalent of human personal servants: kept for the novelty of "wow, cool, horse" and the fun of riding and controlling a huge animal.
answered Sep 17 at 19:12 by Yakk
In addition to the many answers already provided, I would bring up the issue of adversarial examples for image models.
Adversarial examples are images that have been perturbed with specifically designed noise that is often imperceptible to a human observer but strongly alters the model's prediction (a minimal sketch of how such a perturbation can be generated follows the examples below).
Examples include:
- Affecting the predicted diagnosis in a chest x-ray
- Affecting the detection of road signs, which is necessary for autonomous vehicles.
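As a concrete illustration, here is a minimal sketch of the fast gradient sign method (FGSM), one common way such perturbations are generated. It assumes a differentiable PyTorch image classifier; the names `model`, `image`, and `label` are placeholders for illustration, not any particular real system.
```python
# Minimal FGSM sketch (assumes a differentiable PyTorch classifier `model`,
# a batched input tensor `image` with values in [0, 1], and its true `label`).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the classifier's loss.

    The perturbation is epsilon * sign(d loss / d input): small enough to be
    nearly imperceptible, yet often enough to change the predicted class.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```
Defenses such as adversarial training exist, but the basic point stands: models that read x-rays or detect road signs can be steered by input changes a human would never notice.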
answered Sep 18 at 16:32 by dthuff
An AI used to solve a real-world problem could pose a risk to humanity without requiring sentience; it does, however, require a degree of human stupidity.
Unlike humans, an AI would find the most logical answer unconstrained by emotion, ethics, or even greed -- only logic. Ask such an AI how to solve a problem that humans created (climate change, for example) and its solution might be to eliminate the entire human race in order to protect the planet. Obviously this would require giving the AI the ability to act on its conclusion, which brings me back to my earlier point: human stupidity.
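To make the failure mode concrete, here is a deliberately silly toy sketch of the underlying problem: an optimizer given an objective that omits what we actually care about. Everything in it (the emissions model, the candidate "policies") is made up for illustration.
```python
# Toy illustration of objective misspecification (entirely hypothetical numbers).
def emissions(population_billions):
    # Made-up model: emissions scale linearly with population.
    return 5.0 * population_billions

# Candidate "policies", expressed only as target population sizes.
candidates = [8.0, 4.0, 1.0, 0.0]

# The objective says only "minimize emissions" -- nothing about human welfare.
best = min(candidates, key=emissions)
print(best)  # 0.0: the optimum eliminates everyone, because the objective
             # never said that people matter.
```
The risk is not that such a system "wants" this outcome; a literal-minded optimizer plus a carelessly specified goal plus the authority to act is enough.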
answered Sep 18 at 11:00 by Paul
Artificial intelligence can harm us in any of the ways that natural (human) intelligence can. The distinction between natural and artificial intelligence will vanish once humans start augmenting themselves more intimately. Intelligence may then no longer characterize a person's identity; it will become a limitless possession. The harm caused will be as much as humans can endure while preserving their evolving self-identity.
answered Sep 19 at 12:04 by tejasvi88
Few people realize that our global economy should itself be considered an AI:
- Money transactions are the signals passing over a neural net; the nodes of the net are the corporations and private persons paying or receiving money.
- It is man-made, so it qualifies as artificial.
This neural network is better at its task than humans are: capitalism has consistently won against economies planned by humans (planned economies).
Is this neural net dangerous? That depends: it looks rather different to a CEO earning big than to a fisherman on a river polluted by corporate waste.
How did this AI become dangerous? You could answer that it is because of human greed. Our creation reflects ourselves. In other words, we did not train our neural net to behave well. Instead of training it to improve the quality of life for all humans, we trained it to make rich folks richer.
Would it be easy to train this AI so that it is no longer dangerous? Maybe not; maybe some AIs are just larger than life. It is simply survival of the fittest.
answered Sep 19 at 16:55 by Stuurpiek
Comments on the question:
This is a bit broad, as there are many reasons and scenarios suggested in which AI could become dangerous. For instance as DuttaA suggests above, humans may design intelligent weapons systems that decide what to target, and this is a real worry as it is possible already using narrow AI. Perhaps give more context to the specific fears that you want to understand, by quoting or linking a specific concern that you have read (please use edit).
– Neil Slater (Sep 16 at 6:33)
@NeilSlater Yes, it might be too broad, but I think that this answer ai.stackexchange.com/a/15462/2444 provides some plausible reasons. I edited the question to remove the possibly wrong assumption.
– nbro (Sep 16 at 15:44)
Is this question specifically about "superintelligence" or AI in general? (For instance, if hypothetical superintelligence, then the hypothetical "control problem" is an issue. However, contemporary automated weapons systems won't be superintelligent, nor will autonomous vehicles, and those can harm humans.)
– DukeZhou♦ (Sep 16 at 19:40)
@DukeZhou The OP did not originally and explicitly mention superintelligence, but I suppose he was referring to anything that can be considered AI, including an SI.
– nbro (Sep 16 at 20:10)
First ask: how can normal intelligence harm you? The answer is then the same.
– J... (Sep 17 at 12:53)