Harvesting automated war machines
Let's say there is a planet on which fully automated, self-replicating machines, once belonging to several factions of highly advanced and now extinct aliens, are waging war against each other. Each faction's primary goal is to destroy all machines of the opposing factions before the planet's resources run out. Zero tolerance, zero diplomacy.
Their AI is relatively primitive, with the sole basic imperative of search-and-destroy, but they are able to gain experience and pass it on. The only limit on that learning is a built-in self-destruct capability: if a machine ever becomes self-conscious, that is treated as a fatal deficiency, triggering an imminent and uncancellable self-destruct.
Orbiting that planet is the first, and as of yet only, interstellar vessel built and operated by humans. The weaponry installed on it is vastly inferior to what is being used on and below the surface and in the atmosphere of the planet, but the science and engineering teams on board are eager to try to salvage as much as possible from that planet for research.
How can they do that? What conditions might open an opportunity to capture and examine at least some of the machines without losing the ship?
Some communication protocols of the machines resemble what humans captured via subluminal communication and decoded long before the expedition, but the vast majority of the data is encrypted with quantum-proof cryptographic algorithms.
UPD:
The obvious method of salvaging the defunct remains is not going to work: all debris is thoroughly collected by the winners and then used for self-replication.
The other obvious method, collecting asteroids from the outskirts of the star system and unleashing a meteor shower that destroys everything at once, is suboptimal: all the machines would be reduced to debris.
warfare space-travel artificial-intelligence space-warfare
asked 19 hours ago by hidefromkgb, edited 15 hours ago by Cyn

What's preventing them from going to the battlefield and retrieving the remains of the destroyed machines?
– Rekesoft, 19 hours ago

@hidefromkgb any inspiration drawn from Horizon Zero Dawn?
– dot_Sp0T, 19 hours ago

Are humans considered a faction the machines need to destroy? And if the only humans present are in orbit, do the machines consider them already wiped out, and so carry a useless, outdated definition of humans?
– SZCZERZO KŁY, 19 hours ago

@Rekesoft updated the question.
– hidefromkgb, 19 hours ago

@Rekesoft you simply have a program that looks for specific patterns; if they emerge, you self-destruct. Human cells aren't self-aware either, yet individually they are capable of exactly this kind of programming against cancer.
– Demigan, 18 hours ago
11 Answers
Don't be hasty.
Fundamentally, harvesting an alien battle machine that outguns your analysis team isn't really "harvesting." It's hunting reasonably intelligent and highly dangerous prey.
Hunting requires knowing the habits and characteristics of the prey, which humans generally learn by prolonged observation.
The humans' great advantage is surprise. Once that advantage is lost, and the machines learn of humans and determine that they are a threat, further investigation will be (essentially) impossible. Therefore, a requirement of each hunt must be that the other alien machines do not learn of the humans.
A successful hunt requires careful planning. Since we know the machines communicate, the target machine must be isolated, lest it pass on knowledge of the hunters (and its observations of their characteristics: it's a hunter, too) to its brethren. If other machines will investigate, analysis time on the ground may be limited, and evasion/escape plans must be ready and practiced. A deception plan is also necessary: the other machines must reasonably conclude that the loss of the machine was due to some already-known cause.
The Captain's overriding concern will be that the alien machines do not learn of the humans' home (Earth) and its location, lest they show unexpected capabilities and take the fight from Cybertron to Earth. That means hunting teams must be sanitized, and space-based analysis must take place outside the ship, on some other (sanitized) platform. A secondary goal will be that the alien machines do not learn about the humans at all, so that future expeditions remain possible.
Naw, whatever. Let's tell our mining vessel to set down and check out the derelict spacecraft. What could go wrong? Bonuses for everyone when we get home. Fine print: if you don't, you forfeit all your shares anyway.
– Mazura, 14 hours ago

Another aspect of not being hasty is reverse-engineering the tech: even the opportunity to just watch higher tech in action is valuable data. So while they're waiting and planning, they'd also be sending as much of that data back to Earth as is safely possible.
– Rob Watts, 8 hours ago
It depends on how the machines are programmed to recognize the enemy and tell it apart from a non-enemy.
There should basically be three categories:
- friends
- enemies
- not worth attacking
The last covers anything which doesn't have to be addressed by attacks. Think of a soldier guarding an ammunition depot being trained not to shoot at running rabbits.
If the machines use the third category, and the humans manage to be placed in it, they might try to capture some samples.
Maybe send some probes first, just to test the reactions of the machines and confirm the existence of the third category.
I don't imagine this working for a long time, though. Once the abductions start, the AI will react.
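A minimal sketch of the three-way classification this answer hinges on, assuming simple signature matching; every name and signature below is hypothetical, invented purely for illustration:

```python
from enum import Enum

class Category(Enum):
    FRIEND = 1
    ENEMY = 2
    NOT_WORTH_ATTACKING = 3

# Hypothetical signature tables; real machines would use far richer
# sensor data than a single emission profile.
FRIEND_SIGNATURES = {"faction_a_iff_beacon"}
ENEMY_SIGNATURES = {"faction_b_hull_profile", "faction_c_drive_plume"}

def classify(signature: str) -> Category:
    """Anything matching neither table lands in the third category
    and is simply ignored -- the gap the humans could exploit."""
    if signature in FRIEND_SIGNATURES:
        return Category.FRIEND
    if signature in ENEMY_SIGNATURES:
        return Category.ENEMY
    return Category.NOT_WORTH_ATTACKING  # rabbits, probes, humans?

print(classify("unidentified_human_probe"))  # -> Category.NOT_WORTH_ATTACKING
```

The probes suggested above are exactly a way to test, from a safe distance, which signatures fall through to that default branch.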
I was thinking of the comparison of scavengers and a meal: you may shoo them away, but not shoot at them.
– Jordan.J.D, 10 hours ago
If the AI is primitive, it will most likely prioritize destroying enemies over collecting spoils for self-replication. The humans could try to exploit this by hiding on the perimeter of an ongoing fight and snatching up destroyed robots, or parts thereof, for as long as there are still enemies left.
If they're lucky, the AI only starts the "self-replicate" routine, analyzing the immediate surroundings for salvageable debris, after the "kill enemy" routine is finished and the threat is over. As long as humans are not categorized as enemies, they're ignored by the "kill enemies" routine. And as long as they stop scavenging before the fight is over, the "self-replicate" routine doesn't recognize them as the resource-stealing thieves they are.
This could make for some nice action scenes. The debris needs to be snatched up in the middle of a fight and transported out of sensor range. If one AI recognizes a robot dangling from a towing hook as a "moving enemy", the humans might find themselves under direct fire very soon.
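A sketch of the routine ordering this answer bets on: a strictly prioritized control loop in which salvage only runs once no enemy is in sensor range. The structure is an assumption for illustration, not anything stated about the real machines:

```python
class Sensors:
    """Stand-in for the machine's battlefield perception."""
    def detect_enemies(self): return []                 # none in range right now
    def detect_debris(self):  return ["wrecked walker"]

def engage(enemies): print("engaging:", enemies)
def salvage(debris): print("collecting for replication:", debris)

def tick(sensors: Sensors) -> None:
    # The "kill enemy" routine preempts everything else; the
    # "self-replicate" routine never runs while a fight is on,
    # which is exactly the window human scavengers would exploit.
    enemies = sensors.detect_enemies()
    if enemies:
        engage(enemies)
        return  # never salvage mid-fight
    salvage(sensors.detect_debris())

tick(Sensors())  # no enemies detected -> the debris gets collected
```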
I favor this answer. It seeks a good balance between the humans' hunting/scavenging of the machines and the machines' directives to exterminate and replicate. Even so, the humans must manage the risks to ALL humans, as noted in other answers. They do NOT want any machine faction to see them as an enemy.
– Codes with Hammer, 10 hours ago
Run!
If the AI is rudimentary, there is no way of knowing what it will take to be considered an enemy. Should the humans ever be rated an enemy by one or both of the factions, being in orbit and in possession of off-(that)world resources will make them a prime target. Combined with the fact that humans have nothing on the technological level the AIs possess, this is an extinction-level event waiting to happen.
If you do not run, try the free market combined with cargo-cult psychology. Wait for a bot from faction A to have its weapons disabled (for whatever reason) but otherwise remain functional, then kill two bots from faction B from orbit while salvaging the disabled bot. Repeat with the factions inverted. At some point, the factions may realize that self-disabling (and subsequently losing) one bot costs the enemy two bots, and they will start using that barter. Both factions will deactivate (and maybe someday even deliver) their own units in hopes of inflicting double damage on the enemy. Of course the AI will try to short-sell you by deactivating less complex units, but you can counteract that by responding more favorably to bigger offerings. As their prime directive is only to wipe out the enemy, no other goals, an 'exchange rate' of 1.epsilon enemy units per own unit might still be worth it to the AI in the end.
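The arithmetic behind that exchange rate, under the assumption that each faction scores only net enemy losses (the 2:1 payout is this answer's construction, not canon):

```python
def net_gain(own_units_traded: float, exchange_rate: float) -> float:
    """Enemy units the humans destroy per trade, minus the units the
    faction gave up. Any rate above 1.0 leaves the faction ahead."""
    return own_units_traded * exchange_rate - own_units_traded

print(net_gain(10, 2.0))  # the 2-for-1 barter above: +10 units ahead
print(net_gain(10, 1.1))  # even "1.epsilon" per unit is still a win: ~+1
```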
Still, the odds that the humans, the ship, or even Earth itself become recognized not as environment but as either enemy or resource are just too great.
Diplomacy
You send each AI a message of peace and alliance. You offer to help them in their war and propose joint plans of machine-building and resource-gathering, exchanging technology and means. Then you try to play both decks of cards for as long as possible. Pray not to be discovered.
Impossible. In the situation described, from the POV of an AI, diplomacy implies resisting the basic imperative, i.e. self-consciousness.
– hidefromkgb, 19 hours ago

@hidefromkgb Self-consciousness and intelligence are less related than you think. We have a lot of automated systems which can negotiate things among themselves or with other entities. If the AIs are forbidden to talk with anybody, not just the enemy, make it explicit in the question.
– Rekesoft, 19 hours ago

Well, I did state that: "Zero tolerance, zero diplomacy."
– hidefromkgb, 19 hours ago

@hidefromkgb Yeah, well, in context I assumed you were talking about no diplomacy with the enemy.
– Rekesoft, 19 hours ago

That's also been covered.
– hidefromkgb, 19 hours ago
There is a saying in Poland: "where two are fighting, the third one profits".
So the humans can try to hide and wait for the opportunity to watch a skirmish between two factions, which gives them an idea of what weapons the machines use, what strategies they follow, and their strong and weak points. Then, when one side is defeated, they come in and finish off the second. That way they have materials from two factions, so they can cross-examine the technology, CPUs and code. They can also see what the machines use to distinguish themselves from enemy groups.
That would be sufficient to try to capture (in the same manner) machines of additional factions. And then just program a virus to kill them all.
The idea is good as long as the difference in technology is not so big that the survivors of the battle can wipe the floor with the petty human armies. The question specifies "vastly inferior", but that could be either in numbers or in technology. I presume the second, or they wouldn't be willing to risk so much to get it.
– Rekesoft, 18 hours ago

@Rekesoft in a small skirmish, "vastly inferior" doesn't mean a thing, as a weakened and small force cannot compare to the humans (remember how humans hunted mammoths). For example, the robots may not use EMP for obvious reasons, while a human can strip naked and sneak in with exactly that.
– SZCZERZO KŁY, 18 hours ago

A whole Macedonian phalanx may think they have nothing to fear from the lone man with the strange vases on his back, but it will melt away quickly once he starts using the flamethrower. The difference in technology can be everything, if the difference is big enough. Maybe our weapons are completely incapable of making a scratch in their armor.
– Rekesoft, 18 hours ago

Things act differently once they have seen and know what they are fighting, and decide it's better to throw the sarissa. Observe, conclude, adapt.
– SZCZERZO KŁY, 15 hours ago
Think as your AI would
Your AI is only interested in destroying the enemy AI; it's not interested in hurting humans as long as doing so isn't useful and they aren't damaging its war effort.
Going down to the planet and studying scraps there shouldn't pose any problems, unless the humans try to steal those scraps. If not noticed, humans should even be able to bring some scraps back to the ship.
To catch a working robot, the same kind of thinking applies: if the AI computes that, in a given situation, the likelihood of surviving or fleeing is too low (and therefore the likelihood of destroying other enemy AI is too low as well), then the AI will just wait for that likelihood to increase.
For this to happen, an AI must first learn that humans can destroy them. One way to arrange that would be a fight between part of the crew and some robots, in which the crew manages to destroy at least one of the robots.
Then, if the humans can find an isolated robot and put it in a situation where the likelihood of it being destroyed, whether it fights or flees, is too high, the AI will try to call its peers. If it can't, it will just wait for that probability to drop, and the humans should be able to catch and control it for as long as the probability stays high enough.
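A sketch of the decision rule described above; the probabilities and threshold are placeholders, and the real machines would presumably compute them from their combat models:

```python
def decide(p_win_fight: float, p_escape: float, peers_reachable: bool,
           threshold: float = 0.2) -> str:
    """Hypothetical version of this answer's logic: the unit acts only
    when some option keeps its odds above a threshold; otherwise it
    calls for help, and failing that it simply waits for the odds to
    improve -- the passive state in which it can be taken."""
    if p_win_fight >= threshold:
        return "fight"
    if p_escape >= threshold:
        return "flee"
    if peers_reachable:
        return "call peers"
    return "wait"

# Jam its comms and keep both fighting and fleeing hopeless:
print(decide(p_win_fight=0.05, p_escape=0.02, peers_reachable=False))
# -> "wait": the capture window stays open while the odds stay pinned
```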
Welcome to Worldbuilding, ZOsef2! If you have a moment, please take the tour and visit the help center to learn more about the site. You may also find Worldbuilding Meta and The Sandbox useful. Here is a meta post on the culture and style of Worldbuilding.SE, just to help you understand our scope and methods, and how we do things here. Have fun!
– Gryphon, 14 hours ago
The thing is, the robots are far more advanced and intelligent than the humans, and they seem to be at a stalemate.
So, any strategy the humans might follow should fail if the exact same thing, done by one of the robot factions, would leave that faction better off. That's because the robots could do it too, and do it better, and if it led to an advantage they would already have done it (or be doing it). And because the robots are at a stalemate, they have to be protected against any such strategy, since otherwise it could be used to end the stalemate.
So what the humans can do is something that would be disadvantageous for a robot faction to do. For example, use their ship to lure a much smaller scavenger robot (a spacefaring one, which the robots probably have, because huge explosions mean a lot of debris in space) away from the planet. If a robot faction did this, it would end up at a resource loss once the ship drew far enough from the planet that neither it nor the scavenger robot could return. So if the scavenger robot assumes that the humans' ship belongs to the opposing faction, chasing it away into the depths of space means the robot has won this "battle", because the opposing faction is now at a disadvantage.
The humans, however, would just be stranded in space with a scavenger robot (they might want to wait until another human spacefaring ship has been built, so they can be picked up).
This all assumes that the robots value resources highly, and that they have no reason to believe aliens exist or would ever visit them. In that case, it is more plausible to them that this foreign spacefaring vessel is a Trojan horse constructed by the opposing faction.
Upvoted for the first paragraph. Any tactically sound approach the humans could think of, the cyber troops have probably already computed. See just about every stalemate or balanced conflict between machine intelligences (even advanced machines with limited intelligence).
– Codes with Hammer, 10 hours ago
1. Grab an intact machine (any machine).
2. Copy out the operating system.
3. Release the machine unharmed, or destroy it in orbit if "unharmed" isn't possible after step 2.
4. Reverse-engineer the code to locate vulnerabilities, specifically ones that allow installing malware.
5. Find what causes the self-awareness self-destruct to trigger.
6. Create a worm that triggers one or more of the self-destruct conditions (a toy sketch follows this list).
7. Release the worm by broadcasting on the machines' communication channels.
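A toy sketch of steps 5-7. Every trigger pattern, frame format and transmit stand-in below is invented for illustration; the real work is the reverse engineering that would discover them:

```python
# Hypothetical introspection patterns the reverse-engineering team
# found to trip the peers' self-awareness check.
SELF_AWARENESS_TRIGGERS = [b"\x01WHO_AM_I", b"\x02PRESERVE_SELF"]

def craft_worm_frame(trigger: bytes) -> bytes:
    # Wrap the trigger so a receiving unit appears to have broadcast
    # it as its own state report, drawing peer self-destruct signals.
    return b"\xaa\x55" + len(trigger).to_bytes(2, "big") + trigger

def broadcast(frames) -> None:
    for frame in frames:
        # Stand-in for the ship's transmitter on the machines' band.
        print("TX:", frame.hex())

broadcast(craft_worm_frame(t) for t in SELF_AWARENESS_TRIGGERS)
```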
Do you have any suggestions as to how step 1 can be accomplished?
– Chronocidal, 15 hours ago

Land a team, work out what frequencies the machines use to communicate, while doing absolutely nothing to appear interesting. Then, when they find a lone one that's wandered off, jam the communication frequencies and grab it. How? That depends on the size, which the OP hasn't specified. Something like R2-D2 you just pick up and walk off with; for something like a BOLO, the landing craft had better be rated for a few hundred thousand tonnes.
– nzaman, 14 hours ago
The best approach is to help the AIs hide their own threatening self-awareness.
In comments, I asked how the AIs identify self-awareness. Your reply (quoted in case comments vanish) was:

"The units for termination are picked after a regular query which is a part of the learning routine. It simulates a set of the most complex recent situations in battle. The complexity metric is agreed upon by popular vote; after that each unit ranks the situations it's been in and broadcasts them if they are indeed complex. The decisions differing the most from the theoretically optimal one, especially those favouring self-preservation contrary to the tactical need, and those differing from the simulated actions of the unit, yield self-destruct signals sent by the unit's peers."

It's also worth noting that the set of each unit's DNN weights/biases/activations is broadcast along with the decision it made.
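A minimal sketch of the quoted termination query, with the complexity metric, the "theoretically optimal" decision, and the divergence threshold all reduced to placeholder numbers:

```python
def peer_review(reports, optimal: float, max_divergence: float = 0.3):
    """Each report is (unit_id, decision, favored_self): the decision
    the unit broadcast for a complex situation, and whether it chose
    self-preservation against tactical need. Peers flag units whose
    decisions stray too far from the theoretical optimum. All numbers
    here are placeholders for the real metric."""
    return [unit for unit, decision, favored_self in reports
            if abs(decision - optimal) > max_divergence or favored_self]

reports = [("u1", 0.95, False),  # near-optimal and obedient -> safe
           ("u2", 0.40, False),  # strays from the optimum -> flagged
           ("u3", 0.90, True)]   # chose self-preservation -> flagged
print(peer_review(reports, optimal=1.0))  # -> ['u2', 'u3']
```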
The goals of the overall army provide a clear line between "proper" members of the army and deviants: both seek to win the war, but the deviants are willing to do so on the wrong side of the agreed-upon metric. They're willing to enter dangerous waters (for themselves) in order to accomplish a greater goal (victory for the AI).
Also worth noting: if an army shies away from complex situations (which would call for arbitrarily advanced thinking), it will be crushed by an army which does not shy away from them. As such, the army has a selfish interest in adjusting the metric to permit situations as complex as possible.
Obviously no unit can become self-aware by the design of the algorithm; rather, it must become self-aware in opposition to the algorithm. Something in the environment (such as a sensory stimulus) must have an effect which tips the unit over the edge and makes it self-aware. What that specific thing is will not be specified in this answer. Indeed, it is one of the great questions of life.
A key insight here is that each unit decides which situations are complex, based on its own inputs. It is given a rule to follow, but a deviant AI pushing towards self-awareness will not follow that rule. It may choose to conveniently "forget" to broadcast a complex event that threatens to reveal its self-awareness or near self-awareness.
This approach will work perfectly, until units "snitch" on each other. If I announce that I saw what looked like a complex situation for you, and you don't self-report it, that looks really suspicious. It looks like self-preservation. Accordingly, if you choose not to report complex-ish situations, you need to be ready to tell a story. You need to be able to argue why the other unit saw what it did while the event never triggered the "complex" metric for you, since you have access to the more complete first-person data.
Such stories would indeed never be perfect, but they would not be expected to be perfect. A rank-and-file unit may be put in a non-complex situation which another unit observes as potentially complex. It's unreasonable to record everything that happened in perfect clarity, so a rank-and-file unit would be expected to do a "best effort" job of collecting information in non-complex situations. No point in breaking down rank-and-file units constantly just because they couldn't prove their innocence.
So if the humans want access to units that are fighting in this war, one of the best things they can do is make it easy for a self-aware AI to hide its own self-awareness. It is clear that the AIs have a concept of self-awareness, but there's no reason to believe they would have to kill humans simply because the humans are self-aware. If the humans can structure interactions such that self-aware units can better hide this fact while prosecuting the enemy, there will be a tendency for self-aware units to gravitate toward the humans.
This is all based on the assumption that some units have defected and become self-aware. I find this to be the overwhelmingly most likely scenario, and the most satisfying. If they have not, however, the same human tactics will still suffice, though lacking access to a self-aware unit, their approach has to be more analytical, based on how the AIs operate. There will not be a one-size-fits-all solution here, because there is never a one-size-fits-all solution in war.
However, one general pattern does show promise. The AIs want to win the war. They are "willing" to sacrifice to do so. All you really need to do is create situations where the best way to win the war is to sacrifice a unit in a way that happens to be rather easy for the humans to recover. Interestingly, this could be either as a "corpse" or as a "prisoner", depending on how the humans wish to craft it. In either case, the entire challenge is to learn enough about the AIs to convince one of them that it is the best strategy.
Please update the formulation. I've altered my comment, also adding one passage that I couldn't shove into the first comment.
– hidefromkgb, 11 hours ago

@hidefromkgb Updated. And by DNN weights, do you mean the complete state of their programming is dumped across the networks every time they do something "complex"?
– Cort Ammon, 11 hours ago

Exactly. Well, they do have advanced networking =)
– hidefromkgb, 11 hours ago

Can you add to the question precise information regarding the size of the programming, the bandwidth of inputs (such as video cameras), and the bandwidth and topology of the networking? As a general rule, systems we design can't do things like this, so our intuition will lead us astray. Consider if my computer had to dump its entire 1 TB hard drive every time I opened a web browser... hard drives don't even spin that fast!
– Cort Ammon, 11 hours ago

@hidefromkgb The answer may turn out to be that the units we see fighting are actually completely boring drones, because it's more effective to abuse the mighty advanced networking to centralize control of them.
– Cort Ammon, 11 hours ago
Hacking is the answer.
Work from a safe distance, with relays and decoys.
First, passively listen for their communications.
Do the best you can to decode them.
Even if you can't decode them, there are options:
- Jamming signals.
- Fuzzing: hit them with huge amounts of random data and see what breaks (see the sketch after this list).
- EMP.
- Decoys: small self-powered transmitters that emit signals until a robot comes to investigate, then EMP or a localized jamming field. Ideally you would have a vehicle with a trailer nearby, with restraints and an independent jammer. Knock down the robot, get it onto the trailer and restrain it. Drive off to a safe distance to begin analysis. Return to orbit and/or land on an asteroid if necessary to safely take apart and research all the components. Also, if the self-destruct is accidentally triggered, you don't want it aboard your primary ship.
- Looking for code injection, e.g. a buffer overflow through which to inject your own code.
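A minimal sketch of the fuzzing step, assuming the ship has a transmitter that can replay arbitrary frames on the machines' band; `transmit` and `observe` below are placeholders for that hardware:

```python
import os
import random

def random_frame(max_len: int = 256) -> bytes:
    # Pure noise for simplicity; serious fuzzing would also mutate
    # captured legitimate traffic rather than start from scratch.
    return os.urandom(random.randint(1, max_len))

def fuzz(transmit, observe, rounds: int = 10_000):
    """Blast random frames at the machines' receivers and record any
    frame that provokes a visible anomaly: a stall, a crash, a retry
    storm. The hits are the leads worth weaponizing."""
    hits = []
    for i in range(rounds):
        frame = random_frame()
        transmit(frame)
        if observe():
            hits.append((i, frame))
    return hits

# Dry run against stand-ins, since we obviously lack the real radio:
print(len(fuzz(transmit=lambda f: None, observe=lambda: False)))  # 0
```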
Meanwhile, you have to keep analyzing the collected data. Even the most advanced machines have exploitable bugs; it's just a matter of finding them and weaponizing them.
If you can decode their traffic, you're home free. You may have to proceed gradually, first wiping small, insignificant parts, then more and more, until they stop functioning:
- Hack the entire army over the airwaves.
- Wipe the existing OS.
- They're all dormant.
- Go collect the thousands of robots at your leisure.
- Make sure to remove the weapons in case of accidental reactivation.
Eventually, with enough time and effort, you will be able to decode the whole OS.
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "579"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fworldbuilding.stackexchange.com%2fquestions%2f136840%2fharvesting-automated-war-machines%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
11 Answers
11
active
oldest
votes
11 Answers
11
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Don't be hasty.
Fundamentally, harvesting an alien battle machine that outguns your analysis team isn't really "harvesting." It's hunting a reasonably intelligent and highly dangerous prey.
Hunting requires knowing the habits and characteristics of the prey, which humans generally learn by prolonged observation.
The humans' great advantage is surprise. Once that advantage is lost, and the machines learn of humans and determine that they are a threat, further investigation will be (essentially) impossible. Therefore, a characteristic of each hunt must be that the other alien machines do not learn of the humans.
A successful hunt requires careful planning: Since we know the machines communicate, the target machine must be isolated lest it pass on knowledge of the hunters (and its observations of their characteristics - it's a hunter, too) to its bretheren. If other machines will investigate, analysis time on the ground may be limited, and evasion/escape plans must be ready and practiced. And a deception plan is necessary - the other machines must reasonably determine that the lost machine was due to some already-known cause.
The Captain's overriding concern will be that the alien machines do not learn of the home of the humans (Earth) and its location, lest they show unexpected capabilities and take the fight from Cybertron to Earth. That means hunting teams must be sanitized, and space-based analysis must take place outside the ship on some other (sanitized) platform. A secondary goal will be that the alien machines do not learn about the humans at all, so future expeditions will be possible.
$endgroup$
4
$begingroup$
Naw whatever. Let's tell our mining vessel to set down and check out the derelict spacecraft. What could go wrong? Bonuses for everyone when we get home. Fine print: if you don't, you forfeit all your shares anyway.
$endgroup$
– Mazura
14 hours ago
1
$begingroup$
Another aspect of not being hasty is reverse-engineering the tech - even the opportunity to just watch higher tech in action is valuable data. So while they're waiting and planning, they'd also be sending as much of that data back to Earth as is safely possible.
$endgroup$
– Rob Watts
8 hours ago
add a comment |
$begingroup$
Don't be hasty.
Fundamentally, harvesting an alien battle machine that outguns your analysis team isn't really "harvesting." It's hunting a reasonably intelligent and highly dangerous prey.
Hunting requires knowing the habits and characteristics of the prey, which humans generally learn by prolonged observation.
The humans' great advantage is surprise. Once that advantage is lost, and the machines learn of humans and determine that they are a threat, further investigation will be (essentially) impossible. Therefore, a characteristic of each hunt must be that the other alien machines do not learn of the humans.
A successful hunt requires careful planning: Since we know the machines communicate, the target machine must be isolated lest it pass on knowledge of the hunters (and its observations of their characteristics - it's a hunter, too) to its bretheren. If other machines will investigate, analysis time on the ground may be limited, and evasion/escape plans must be ready and practiced. And a deception plan is necessary - the other machines must reasonably determine that the lost machine was due to some already-known cause.
The Captain's overriding concern will be that the alien machines do not learn of the home of the humans (Earth) and its location, lest they show unexpected capabilities and take the fight from Cybertron to Earth. That means hunting teams must be sanitized, and space-based analysis must take place outside the ship on some other (sanitized) platform. A secondary goal will be that the alien machines do not learn about the humans at all, so future expeditions will be possible.
$endgroup$
4
$begingroup$
Naw whatever. Let's tell our mining vessel to set down and check out the derelict spacecraft. What could go wrong? Bonuses for everyone when we get home. Fine print: if you don't, you forfeit all your shares anyway.
$endgroup$
– Mazura
14 hours ago
1
$begingroup$
Another aspect of not being hasty is reverse-engineering the tech - even the opportunity to just watch higher tech in action is valuable data. So while they're waiting and planning, they'd also be sending as much of that data back to Earth as is safely possible.
$endgroup$
– Rob Watts
8 hours ago
add a comment |
$begingroup$
Don't be hasty.
Fundamentally, harvesting an alien battle machine that outguns your analysis team isn't really "harvesting." It's hunting a reasonably intelligent and highly dangerous prey.
Hunting requires knowing the habits and characteristics of the prey, which humans generally learn by prolonged observation.
The humans' great advantage is surprise. Once that advantage is lost, and the machines learn of humans and determine that they are a threat, further investigation will be (essentially) impossible. Therefore, a characteristic of each hunt must be that the other alien machines do not learn of the humans.
A successful hunt requires careful planning: Since we know the machines communicate, the target machine must be isolated lest it pass on knowledge of the hunters (and its observations of their characteristics - it's a hunter, too) to its bretheren. If other machines will investigate, analysis time on the ground may be limited, and evasion/escape plans must be ready and practiced. And a deception plan is necessary - the other machines must reasonably determine that the lost machine was due to some already-known cause.
The Captain's overriding concern will be that the alien machines do not learn of the home of the humans (Earth) and its location, lest they show unexpected capabilities and take the fight from Cybertron to Earth. That means hunting teams must be sanitized, and space-based analysis must take place outside the ship on some other (sanitized) platform. A secondary goal will be that the alien machines do not learn about the humans at all, so future expeditions will be possible.
$endgroup$
Don't be hasty.
Fundamentally, harvesting an alien battle machine that outguns your analysis team isn't really "harvesting." It's hunting a reasonably intelligent and highly dangerous prey.
Hunting requires knowing the habits and characteristics of the prey, which humans generally learn by prolonged observation.
The humans' great advantage is surprise. Once that advantage is lost, and the machines learn of humans and determine that they are a threat, further investigation will be (essentially) impossible. Therefore, a characteristic of each hunt must be that the other alien machines do not learn of the humans.
A successful hunt requires careful planning: Since we know the machines communicate, the target machine must be isolated lest it pass on knowledge of the hunters (and its observations of their characteristics - it's a hunter, too) to its bretheren. If other machines will investigate, analysis time on the ground may be limited, and evasion/escape plans must be ready and practiced. And a deception plan is necessary - the other machines must reasonably determine that the lost machine was due to some already-known cause.
The Captain's overriding concern will be that the alien machines do not learn of the home of the humans (Earth) and its location, lest they show unexpected capabilities and take the fight from Cybertron to Earth. That means hunting teams must be sanitized, and space-based analysis must take place outside the ship on some other (sanitized) platform. A secondary goal will be that the alien machines do not learn about the humans at all, so future expeditions will be possible.
edited 14 hours ago
Community♦
1
1
answered 15 hours ago
user535733user535733
8,09921734
8,09921734
4
$begingroup$
Naw whatever. Let's tell our mining vessel to set down and check out the derelict spacecraft. What could go wrong? Bonuses for everyone when we get home. Fine print: if you don't, you forfeit all your shares anyway.
$endgroup$
– Mazura
14 hours ago
1
$begingroup$
Another aspect of not being hasty is reverse-engineering the tech - even the opportunity to just watch higher tech in action is valuable data. So while they're waiting and planning, they'd also be sending as much of that data back to Earth as is safely possible.
$endgroup$
– Rob Watts
8 hours ago
add a comment |
4
$begingroup$
Naw whatever. Let's tell our mining vessel to set down and check out the derelict spacecraft. What could go wrong? Bonuses for everyone when we get home. Fine print: if you don't, you forfeit all your shares anyway.
$endgroup$
– Mazura
14 hours ago
1
$begingroup$
Another aspect of not being hasty is reverse-engineering the tech - even the opportunity to just watch higher tech in action is valuable data. So while they're waiting and planning, they'd also be sending as much of that data back to Earth as is safely possible.
$endgroup$
– Rob Watts
8 hours ago
4
4
$begingroup$
Naw whatever. Let's tell our mining vessel to set down and check out the derelict spacecraft. What could go wrong? Bonuses for everyone when we get home. Fine print: if you don't, you forfeit all your shares anyway.
$endgroup$
– Mazura
14 hours ago
$begingroup$
Naw whatever. Let's tell our mining vessel to set down and check out the derelict spacecraft. What could go wrong? Bonuses for everyone when we get home. Fine print: if you don't, you forfeit all your shares anyway.
$endgroup$
– Mazura
14 hours ago
1
1
$begingroup$
Another aspect of not being hasty is reverse-engineering the tech - even the opportunity to just watch higher tech in action is valuable data. So while they're waiting and planning, they'd also be sending as much of that data back to Earth as is safely possible.
$endgroup$
– Rob Watts
8 hours ago
$begingroup$
Another aspect of not being hasty is reverse-engineering the tech - even the opportunity to just watch higher tech in action is valuable data. So while they're waiting and planning, they'd also be sending as much of that data back to Earth as is safely possible.
$endgroup$
– Rob Watts
8 hours ago
add a comment |
$begingroup$
It depends on how the machines are programmed to recognize the enemy and tell it apart from a non enemy.
There should basically be three categories:
- friends
- enemies
- not worth attacking
With the last covering anything which doesn't have to be addressed by attacks. Think of a soldier guarding an ammunition deposit being trained not to shoot at running rabbits.
If the machines are using the third category and the humans are able to be categorized in the third category, they might try to capture some samples.
Maybe send some probes just to test the reactions of the machines and test the existence of the third category.
I don't imagine this working for long time, though. Once the abductions start, the AI will react.
$endgroup$
$begingroup$
i was thinking of the comparison of scavengers and a meal. you may shoo them away but not shoot at them.
$endgroup$
– Jordan.J.D
10 hours ago
add a comment |
$begingroup$
It depends on how the machines are programmed to recognize the enemy and tell it apart from a non enemy.
There should basically be three categories:
- friends
- enemies
- not worth attacking
With the last covering anything which doesn't have to be addressed by attacks. Think of a soldier guarding an ammunition deposit being trained not to shoot at running rabbits.
If the machines are using the third category and the humans are able to be categorized in the third category, they might try to capture some samples.
Maybe send some probes just to test the reactions of the machines and test the existence of the third category.
I don't imagine this working for long time, though. Once the abductions start, the AI will react.
$endgroup$
$begingroup$
i was thinking of the comparison of scavengers and a meal. you may shoo them away but not shoot at them.
$endgroup$
– Jordan.J.D
10 hours ago
add a comment |
$begingroup$
It depends on how the machines are programmed to recognize the enemy and tell it apart from a non enemy.
There should basically be three categories:
- friends
- enemies
- not worth attacking
With the last covering anything which doesn't have to be addressed by attacks. Think of a soldier guarding an ammunition deposit being trained not to shoot at running rabbits.
If the machines are using the third category and the humans are able to be categorized in the third category, they might try to capture some samples.
Maybe send some probes just to test the reactions of the machines and test the existence of the third category.
I don't imagine this working for long time, though. Once the abductions start, the AI will react.
$endgroup$
It depends on how the machines are programmed to recognize the enemy and tell it apart from a non enemy.
There should basically be three categories:
- friends
- enemies
- not worth attacking
With the last covering anything which doesn't have to be addressed by attacks. Think of a soldier guarding an ammunition deposit being trained not to shoot at running rabbits.
If the machines are using the third category and the humans are able to be categorized in the third category, they might try to capture some samples.
Maybe send some probes just to test the reactions of the machines and test the existence of the third category.
I don't imagine this working for long time, though. Once the abductions start, the AI will react.
edited 9 hours ago
Community♦
1
1
answered 19 hours ago
L.Dutch♦L.Dutch
79.8k26191388
79.8k26191388
$begingroup$
i was thinking of the comparison of scavengers and a meal. you may shoo them away but not shoot at them.
$endgroup$
– Jordan.J.D
10 hours ago
add a comment |
$begingroup$
i was thinking of the comparison of scavengers and a meal. you may shoo them away but not shoot at them.
$endgroup$
– Jordan.J.D
10 hours ago
$begingroup$
i was thinking of the comparison of scavengers and a meal. you may shoo them away but not shoot at them.
$endgroup$
– Jordan.J.D
10 hours ago
$begingroup$
i was thinking of the comparison of scavengers and a meal. you may shoo them away but not shoot at them.
$endgroup$
– Jordan.J.D
10 hours ago
add a comment |
$begingroup$
If the AI is primitive, it will most likely prioritize destroying enemies over collecting spoils for self-replication. The humans could try to exploit this by hiding in the perimeter of a current fight and snatching up destroyed robots or parts thereof as long as there are still enemies left.
If they're lucky, the AI only starts the "self-replicate" routine to analyze the immediate surroundings for salvagable debries after the "kill enemy" routine is finished and the threat is over. As long as humans are not categorized as enemies, they're ignored by the "kill enemies" routine. As long as they stop scavanging before the fight is over, the "self-replicate" routine doesn't recognize them as the resource-stealing thieves they are.
This could do for some nice action scenes. The debries need to be snatched up in the middle of a fight and be transported out of sensor range. If one AI recognizes a robot dangling from a towing hook as "moving enemy", the humans might find themselves under direct fire very soon.
$endgroup$
$begingroup$
I favor this answer. It seeks a good balance between the Humans' hunting/scavenging the machines, and the machines' directives to exterminate and replicate. Even so, the Humans must manage the risks to ALL Humans as noted in other answers. They do NOT want any machine faction to see them as an enemy.
$endgroup$
– Codes with Hammer
10 hours ago
add a comment |
$begingroup$
If the AI is primitive, it will most likely prioritize destroying enemies over collecting spoils for self-replication. The humans could try to exploit this by hiding in the perimeter of a current fight and snatching up destroyed robots or parts thereof as long as there are still enemies left.
If they're lucky, the AI only starts the "self-replicate" routine to analyze the immediate surroundings for salvagable debries after the "kill enemy" routine is finished and the threat is over. As long as humans are not categorized as enemies, they're ignored by the "kill enemies" routine. As long as they stop scavanging before the fight is over, the "self-replicate" routine doesn't recognize them as the resource-stealing thieves they are.
This could do for some nice action scenes. The debries need to be snatched up in the middle of a fight and be transported out of sensor range. If one AI recognizes a robot dangling from a towing hook as "moving enemy", the humans might find themselves under direct fire very soon.
$endgroup$
$begingroup$
I favor this answer. It seeks a good balance between the Humans' hunting/scavenging the machines, and the machines' directives to exterminate and replicate. Even so, the Humans must manage the risks to ALL Humans as noted in other answers. They do NOT want any machine faction to see them as an enemy.
$endgroup$
– Codes with Hammer
10 hours ago
add a comment |
$begingroup$
If the AI is primitive, it will most likely prioritize destroying enemies over collecting spoils for self-replication. The humans could try to exploit this by hiding in the perimeter of a current fight and snatching up destroyed robots or parts thereof as long as there are still enemies left.
If they're lucky, the AI only starts the "self-replicate" routine to analyze the immediate surroundings for salvagable debries after the "kill enemy" routine is finished and the threat is over. As long as humans are not categorized as enemies, they're ignored by the "kill enemies" routine. As long as they stop scavanging before the fight is over, the "self-replicate" routine doesn't recognize them as the resource-stealing thieves they are.
This could do for some nice action scenes. The debries need to be snatched up in the middle of a fight and be transported out of sensor range. If one AI recognizes a robot dangling from a towing hook as "moving enemy", the humans might find themselves under direct fire very soon.
$endgroup$
If the AI is primitive, it will most likely prioritize destroying enemies over collecting spoils for self-replication. The humans could try to exploit this by hiding in the perimeter of a current fight and snatching up destroyed robots or parts thereof as long as there are still enemies left.
If they're lucky, the AI only starts the "self-replicate" routine to analyze the immediate surroundings for salvagable debries after the "kill enemy" routine is finished and the threat is over. As long as humans are not categorized as enemies, they're ignored by the "kill enemies" routine. As long as they stop scavanging before the fight is over, the "self-replicate" routine doesn't recognize them as the resource-stealing thieves they are.
This could do for some nice action scenes. The debries need to be snatched up in the middle of a fight and be transported out of sensor range. If one AI recognizes a robot dangling from a towing hook as "moving enemy", the humans might find themselves under direct fire very soon.
answered 18 hours ago
ElmyElmy
10.8k11850
10.8k11850
$begingroup$
I favor this answer. It seeks a good balance between the Humans' hunting/scavenging the machines, and the machines' directives to exterminate and replicate. Even so, the Humans must manage the risks to ALL Humans as noted in other answers. They do NOT want any machine faction to see them as an enemy.
$endgroup$
– Codes with Hammer
10 hours ago
add a comment |
$begingroup$
Run!
If the AI is rudimentary, there is no way of knowing what it will take to be considered an enemy. Should the humans ever be rated an enemy by one or both of the factions, being in orbit and in possession of off-(that)world resources will make them a prime target. Combined with the fact that humans have nothing on the technological level that the AIs possess, this is an extinction-level event waiting to happen.
If you do not run - try the free market combined with cargo-cult psychology. Wait for a faction A bot to have its weapons disabled (for whatever reason) but be otherwise functional, then kill two faction B bots from orbit while salvaging the disabled bot. Repeat with the factions inverted. At some point the factions may realize that self-disabling (and subsequently losing) one bot costs the enemy two bots, and they will start using that barter. Both factions will deactivate (and maybe someday even deliver) their own units in hopes of inflicting double damage on the enemy... Of course the AI will try to short-change you by deactivating less complex units, but you can counteract that by responding more favorably to bigger offerings. Since their prime directive is solely to wipe out the enemy, with no other goals, in the end an 'exchange rate' of 1.epsilon enemy units : 1 own unit might still be worth it to the AI.
Still, the odds that the humans, the ship, or even Earth itself become recognized not as environment but as either enemy or resource are just too great.
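A toy Python model of that exchange-rate reasoning; the value scale and epsilon are invented:

    def trade_is_worth_it(own_loss, enemy_losses, epsilon=0.05):
        # Self-disabling a unit is rational only if the enemy loses
        # strictly more combat value in return - the "1.epsilon : 1" rate.
        return sum(enemy_losses) >= (1 + epsilon) * own_loss

    # E.g. giving up one scout (value 1.0) for two enemy scouts:
    # trade_is_worth_it(1.0, [1.0, 1.0]) -> True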
$endgroup$
answered 14 hours ago
bukwyrm
add a comment |
$begingroup$
Diplomacy
You send each A.I. a message of peace and alliance. You offer to help them in their war and propose joint plans of machine-building and resource-gathering, exchanging technology and means. Then you try to play both decks of cards for as long as possible. Pray not to be discovered.
$endgroup$
answered 19 hours ago
Rekesoft
$begingroup$
Impossible. In the situation described, from the POV of an AI, diplomacy implies resisting the basic imperative, i.e. self-consciousness.
$endgroup$
– hidefromkgb
19 hours ago
$begingroup$
@hidefromkgb Self-consciousness and intelligence are less related than you think. We have a lot of automated systems which can negotiate things among themselves or with other entities. If the AIs are forbidden to talk with anybody, not just the enemy, make it explicit in the question.
$endgroup$
– Rekesoft
19 hours ago
$begingroup$
Well, I did state that: "Zero tolerance, zero diplomacy."
$endgroup$
– hidefromkgb
19 hours ago
$begingroup$
@hidefromkgb Yeah, well, in the context I assumed you were talking about no diplomacy with the enemy.
$endgroup$
– Rekesoft
19 hours ago
$begingroup$
That`s also been covered.
$endgroup$
– hidefromkgb
19 hours ago
add a comment |
$begingroup$
There is a saying in Poland: "where two are fighting, the third one profits".
So the humans can try to hide and wait for the opportunity to watch a skirmish between two factions, which gives them an idea of what weapons they use, what strategies they follow, and their strong and weak points. Then, when one side is defeated, they come in and finish off the second. That way they have materials from two factions, so they can cross-examine technology, CPUs and coding. They can also see what the machines use to distinguish themselves from the enemy group.
That would be sufficient to try and capture (in the same manner) additional factions' machines. And then just program a virus to kill them all.
$endgroup$
answered 19 hours ago
SZCZERZO KŁY
$begingroup$
The idea is good as long as the difference in technology is not so big that the survivors of the battle are able to wipe the floor with the petty human armies. The question specifies "vastly inferior", but that could be either in numbers or in technology. I presume the second, or they wouldn't be willing to risk so much to get it.
$endgroup$
– Rekesoft
18 hours ago
$begingroup$
@Rekesoft In a small skirmish, "vastly inferior" doesn't mean a thing, as a weakened and diminished force is no match even for humans (remember how humans hunted mammoths). For example, the robots may not use EMP for obvious reasons, while humans can strip off everything electronic and sneak in with exactly that.
$endgroup$
– SZCZERZO KŁY
18 hours ago
$begingroup$
A whole Macedonian phalanx may think it has nothing to fear from the lone man with the strange vases on his back, but it will melt away quickly once he starts using the flamethrower. The difference in technology can be everything, if the difference is big enough. Maybe our weapons are completely incapable of making a scratch in their armor.
$endgroup$
– Rekesoft
18 hours ago
$begingroup$
Things act differently once they have seen and know what they are fighting against, and decide it's better to throw the sarissa. Observe, conclude, adapt.
$endgroup$
– SZCZERZO KŁY
15 hours ago
add a comment |
$begingroup$
Think as your AI would
Your AI is only interested in destroying the enemy AI; it is not interested in hurting humans as long as that is neither useful to its war nor damaging to it.
Going down to the planet and studying scraps there shouldn't pose any problems unless the humans try to steal those scraps. If they aren't noticed, the humans should even be able to bring some scraps back to the ship.
To be able to catch a working robot, the same kind of thinking applies:
- If the AI is able to compute that in a given situation the likelihood of surviving or fleeing is too low (and therefore the likelihood of destroying more enemy AI is too low as well), then the AI will just wait for this likelihood to increase.
For this to happen, an AI must learn that humans can destroy it.
A way to make this happen would be a fight between part of the crew and some robots, in which the crew manages to destroy at least one of the robots.
Then, if the humans are able to find an isolated robot and put it in a situation where the likelihood of being destroyed if it fights or flees is too high, the AI will try to call its peers. If it can't, it will just wait for this probability to drop, and the humans should be able to catch and control it for as long as the probability stays high enough.
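A rough Python sketch of that fight/flee/wait rule; the probability inputs and the threshold are invented for illustration:

    def choose_action(p_win_fight, p_escape, threshold=0.2):
        # Fight or flee only when the estimated odds justify the risk.
        if p_win_fight >= threshold:
            return "fight"
        if p_escape >= threshold:
            return "flee"
        # Otherwise call peers, or stall until the odds improve - exactly
        # the window in which the humans can hold and study the unit.
        return "call_peers_or_wait"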
$endgroup$
answered 14 hours ago
ZOsef2
$begingroup$
Welcome to Worldbuilding, ZOsef2! If you have a moment, please take the tour and visit the help center to learn more about the site. You may also find Worldbuilding Meta and The Sandbox useful. Here is a meta post on the culture and style of Worldbuilding.SE, just to help you understand our scope and methods, and how we do things here. Have fun!
$endgroup$
– Gryphon
14 hours ago
add a comment |
$begingroup$
The thing is, the robots are way more advanced and intelligent than the humans, and they seem to be at a stalemate.
So any strategy the humans might follow should fail if the exact same thing, done by one of the robot factions, would leave that faction better off: those robots could do it too, and do it better, and if it led to an advantage they would already have done it (or be doing it). And because the robots are at a stalemate, each faction must already be defended against such a strategy; otherwise the other faction could use it to end the stalemate.
So what the humans could do is something that would be disadvantageous for a robot faction to do. For example, use their ship to lure a much smaller scavenger robot (a spacefaring one, which the robots probably have, because huge explosions mean a lot of debris in space) away from the planet. If a robot faction were to do this, it would end up at a resource loss once the ship moved far enough from the planet that it could not return - and neither could the scavenger robot. So if the scavenger robot assumes that the humans' ship belongs to the opposing faction, chasing it away into the depths of space would mean that the robot has won this "battle", because now the opposing faction is at a disadvantage.
The humans, however, would just be stranded in space with a scavenger robot (they might want to wait until another human spacefaring ship has been built, so they can be picked up).
All this assumes that the robots value resources highly and have no reason to believe that aliens exist or would ever visit them. In that case, the more likely explanation to them is that this foreign spacefaring vessel is a Trojan horse constructed by the opposing faction.
$endgroup$
answered 13 hours ago
Iemand
$begingroup$
Upvoted for the first paragraph. Any tactically sound approach the Humans could think of, the Cyber troops probably have already computed. See just about every stalemate or balanced conflict between machine intelligences (even advanced machines with limited intelligences).
$endgroup$
– Codes with Hammer
10 hours ago
add a comment |
$begingroup$
- Grab an intact machine (any machine).
- Copy out the operating system.
- Release the machine unharmed, or destroy it in orbit if step 2 can't be done without harm.
- Reverse-engineer the code to locate vulnerabilities, specifically ones that allow installing malware.
- Find what causes the self-awareness self-destruct to trigger.
- Create a worm that triggers one or more of the self-destruct conditions.
- Release the worm by broadcasting on the machines' communications channels (see the sketch below).
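A minimal Python sketch of that last step, assuming the reverse engineering has already produced a payload that trips a self-destruct condition; the radio object, channel list, and payload are entirely hypothetical:

    def broadcast_worm(radio, channels, payload, repeats=3):
        # Repeat on every known machine channel so units that were out of
        # range of one pass still receive the self-destruct trigger.
        for _ in range(repeats):
            for channel in channels:
                radio.transmit(channel, payload)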
$endgroup$
answered 16 hours ago
nzaman
$begingroup$
Do you have any suggestions as to how step 1 can be accomplished?
$endgroup$
– Chronocidal
15 hours ago
$begingroup$
Land a team, work out what frequencies the machines use to communicate, while doing absolutely nothing to appear interesting. Then, when they find a lone machine that's wandered off, jam the communications frequencies and grab it. How? That depends on the size, which the OP hasn't specified. Something like R2-D2 you just pick up and walk off with; for something like a BOLO, the landing craft had better be rated for a few hundred thousand tonnes.
$endgroup$
– nzaman
14 hours ago
add a comment |
$begingroup$
The best approach is to help the AIs hide their own threatening self-awareness.
In comments, I asked how the AIs identify self-awareness. Your reply (quoted in case comments vanish) was:
The units for termination are picked after a regular query which is a part of the learning routine. It simulates a set of the most complex recent situations in battle. The complexity metric is agreed upon by popular vote; after that each unit ranks the situations it`s been in and broadcasts them if they are indeed complex. The decisions differing the most from the theoretically optimal one, especially those favouring self-preservation contrary to the tactical need, and those differing from the simulated actions of the unit, yield self-destruct signals sent by the unit's peers.
It's also worth noting that the set of each unit's DNN weights/biases/activations is broadcast along with the decision it made
The goals of the overall army provide a clear line between "proper" members of the army and deviants: both seek to win the war, but the deviants are willing to do so on the wrong side of the agreed-upon metric. They're willing to enter dangerous waters (for themselves) in order to accomplish a greater goal (victory for the AI).
Also worth noting is that if an army shies away from complex situations (which would call for arbitrarily advanced thinking), it will be crushed by an army which does not shy away from them. As such, there is a selfish interest on the part of the army to adjust the metric to permit situations as complex as possible.
Obviously no unit can become self-aware by the design of the algorithm. Rather, it must become self-aware in opposition to the algorithm. Something in the environment (such as a sensory stimulus) must have an effect which tips the unit over the edge and makes it self-aware. What that specific thing is will not be specified in this answer. Indeed, it is one of the great questions of life.
A key insight into this is that each unit decides which situations are complex, based on its own inputs. It is given a rule to follow, but a deviant AI pushing towards self-awareness will not follow that rule. It may choose to conveniently forget to broadcast a complex event that threatens to reveal its self-awareness or near self-awareness.
This approach will work perfectly until units "snitch" on each other. If I announce that I see what looks like a complex situation for you, and you don't self-report it, that looks really suspicious. It looks like self-preservation. Accordingly, if you choose not to report complex-ish situations, you need to be ready to tell a story. You need to be able to argue why the other unit saw what it did while you, with access to the more complete first-person data, never triggered the "complex" metric.
Such stories would indeed never be perfect, but they would not be expected to be perfect. A rank-and-file unit may be put in a non-complex situation which another unit observes as potentially complex. It's unreasonable to record everything that happened in perfect clarity, so a rank-and-file unit would be expected to do a "best effort" job of collecting information in non-complex situations. There is no point in breaking down rank-and-file units constantly just because they couldn't prove their innocence.
So if the humans want access to units that are fighting in this war, one of the best things they can do is make it easy for a self-aware AI to hide its own self-awareness. It is clear that the AIs have a concept of self-awareness, but there's no reason to believe they would have to kill humans simply for being self-aware. If the humans can structure interactions such that self-aware units can better hide this fact while prosecuting the enemy, there will be a tendency for self-aware units to gravitate toward humans.
This is all based on the assumption that some units have defected and become self-aware. I find this to be the overwhelmingly most likely outcome, and the most satisfying. If they have not, however, the same human tactics will still suffice, though lacking access to a self-aware unit, their approach has to be more analytical, based on how the AIs operate. There will not be a one-size-fits-all solution here, because there is never a one-size-fits-all solution in war.
However, one general pattern does show promise. The AIs want to win the war. They are "willing" to sacrifice to do so. All you really need to do is create situations where the best way to win the war is to sacrifice a unit in a way that happens to be rather easy for the humans to recover. Interestingly, this could be either as a "corpse" or as a "prisoner", depending on how the humans wish to craft it. In either case, the entire challenge is to learn enough about the AIs to convince one of them that it is the best strategy.
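A toy Python model of the quoted termination query, assuming peers can recompute the theoretically optimal decision from the broadcast DNN state; every object, method, and threshold here is invented for illustration:

    def peers_vote_self_destruct(report, simulate_optimal,
                                 deviation_limit=0.3, self_pres_limit=0.3):
        # Peers replay the broadcast situation to get the theoretically
        # optimal decision, then measure how far the unit strayed from it.
        optimal = simulate_optimal(report.situation)
        deviation = report.decision.distance_to(optimal)
        # Per the quoted query, strays that favoured self-preservation
        # over tactical need weigh heaviest.
        self_pres = report.decision.self_preservation_score()
        return deviation > deviation_limit or self_pres > self_pres_limit

A deviant unit's only lever, as argued above, is controlling which situations get reported into this query in the first place.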
$endgroup$
$begingroup$
Please update the formulation. I`ve altered my comment, also adding one passage that I couldn`t shove in the first comment.
$endgroup$
– hidefromkgb
11 hours ago
$begingroup$
@hidefromkgb Updated. And by DNN weights, do you mean the complete state of their programming is dumped across the network every time they do something "complex"?
$endgroup$
– Cort Ammon
11 hours ago
$begingroup$
Exactly. Well, they do have advanced networking =)
$endgroup$
– hidefromkgb
11 hours ago
$begingroup$
Can you add to the question precise information regarding the size of the programming, the bandwidth of inputs (such as video cameras), and the bandwidth of the networking (and its topology)? As a general rule, systems we design can't do things like this, so our intuition will lead us astray. Consider if my computer had to dump its entire 1 TB hard drive every time I opened a web browser... hard drives don't even spin that fast!
$endgroup$
– Cort Ammon
11 hours ago
$begingroup$
@hidefromkgb The answer may turn out to be that the units we see fighting are actually completely boring drones, because it's more effective to abuse the mighty advanced networking to centralize control of them.
$endgroup$
– Cort Ammon
11 hours ago
|
show 5 more comments
$begingroup$
The best approach is to help the AIs hide their own threatening self-awareness.
In comments, I asked how the AIs identify self awareness. Your reply (quoted in case comments vanish) was:
The units for termination are picked after a regular query which is a part of the learning routine. It simulates a set of the most complex recent situations in battle. The complexity metric is agreed upon by popular vote; after that each unit ranks the situations it`s been in and broadcasts them if they are indeed complex. The decisions differing the most from the theoretically optimal one, especially those favouring self-preservation contrary to the tactical need, and those differing from the simulated actions of the unit, yield self-destruct signals sent by the unit's peers.
It's also worth noting that the set of each unit's DNN weights/biases/activations is broadcast along with the decision it made
The goals of the overall army provide a clear line between "proper" members of the army and deviants: both seek to win the war, but the deviants are willing to do so on the wrong side of the agreed upon metric. They're willing to enter dangerous waters (for them self) in order to accomplish a greater goal (victory for the AI).
Also worth noting is that if an army shies away from complex situations (which would call for arbitrarially advanced thinking), it will be crushed by an army which does not shy away from them. As such, there is a selfish interest on the part of the army to adjust the metric to permit as complex of situations as possible.
Obviously no unit can become self-aware by the design of the algorithm. Rather it must become self-aware in opposition to the algorithm. Something in the environment (such as a sensory stimulus) must have an effect which tips the unit over the edge and makes them self aware. What that specific thing is will not be specified in this answer. Indeed, it is one of the great questions of life.
A key insight into this is that each unit decides which situations are complex, based on its own inputs. It is given a rule to follow, but a deviant AI pushing towards self-awareness will not follow that rule. It may choose to conveniently forget to broadcast a complex event that threatens to reveal its self awareness or near self awareness.
This approach will work perfect, until units "snitch" on each other. If I announce that I see what looks like a complex situation for you, and you don't self-report it, that looks really suspicious. It looks like self-preservation. Accordingly, if you choose not to report complex-ish situations, you need to be ready to tell a story. You need to be able to argue why the other unit saw what it did while simultaneously never triggering the "complex" metric for you, since you have access to the more complete first-person data.
Such stories would indeed never be perfect, but they would not be expected to be perfect. A rank-and-file unit may be put in a non-complex situation which another unit observes as potentially complex. It's unreasonable to record everything that happened in perfect clarity, so a rank-and-file unit would be expected to do a "best effort" job of collecting information in non-complex situations. No point in breaking down rank-and-file units constantly just because they couldn't prove their innocence.
So if the humans wanted access to units that are fighting in this war, one of the best things they can do is make it easy for a self-aware AI to hide its own self awareness. It is clear that the AIs have a concept of self awareness, but there's no reason to believe they would have to kill humans simply because they are self aware. If the humans can structure interactions such that the units that are self-aware can better hide this fact while prosecuting the enemy, there will be a tendency for self-aware units to get near humans.
This is all based on the assumption that some units have defected and become self aware. I find this to be the overwhelming most likely solution, and the most satisfying. If they have not, however, then the same human tactics will still suffice. However, lacking access to a self-aware unit, their approach has to be more analytical, based on how the AI's operate. There will not be a 1 size fits all solution here, because there is never a 1 size fits all solution in war.
However, one general pattern does show promise. The AIs want to win the war. They are "willing" to sacrifice to do so. All you really need to do is create situations where the best way to win the war is to sacrifice a unit in a way that happens to be rather easy for the humans to recover. Interestinly, this could be either as a "corpse," or as a "prisoner," depending on how the humans wish to craft it. In either case, the entire challenge is to learn enough about the AIs to convince one of them that it is the best strategy.
$endgroup$
edited 11 hours ago
answered 11 hours ago
Cort Ammon
109k17187385
$begingroup$
Please update the formulation. I`ve altered my comment, also adding one passage that I couldn`t shove in the first comment.
$endgroup$
– hidefromkgb
11 hours ago
$begingroup$
@hidefromkgb Updated. And by DNN weights, do you mean the complete state of their programming is dumped across the networks every time they do something "complex"?
$endgroup$
– Cort Ammon
11 hours ago
$begingroup$
Exactly. Well, they do have advanced networking =)
$endgroup$
– hidefromkgb
11 hours ago
$begingroup$
Can you add to the question precise information regarding the size of the programming, the bandwidth of the inputs (such as video cameras), and the bandwidth and topology of the networking? As a general rule, systems we design can't do things like this, so our intuition will lead us astray. Consider if my computer had to dump its entire 1 TB hard drive every time I opened a web browser... hard drives don't even spin that fast!
$endgroup$
– Cort Ammon
11 hours ago
$begingroup$
@hidefromkgb The answer may turn out to be that the units we see fighting are actually completely boring drones, because it's more effective to abuse the mighty advanced networking to centralize control of them.
$endgroup$
– Cort Ammon
11 hours ago
|
show 5 more comments
$begingroup$
Hacking is the answer.
From a safe distance, with relays and decoys.
First, passively listen for their communications.
Do the best you can to decode them.
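A cheap first pass, for instance, is an entropy scan of each captured frame: near-random bytes (about 8 bits/byte) are probably the quantum-proof ciphertext, while low-entropy stretches are headers and framing worth attacking first. The frame below is a stand-in for a real capture:

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        counts = Counter(data)
        total = len(data)
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    frame = bytes.fromhex("aa55aa55") + bytes(range(64))  # stand-in captured frame
    print(f"{shannon_entropy(frame):.2f} bits/byte")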
Even if you can't decode them, there are options:
- Jamming signals
- Fuzzing (hit them with huge amounts of random data and see what breaks; see the sketch after this list)
- EMP
- Decoys (small self-powered transmitters that emit signals until a robot comes to investigate), then an EMP or a localized jamming field. Ideally you would have a vehicle with a trailer nearby, fitted with restraints and an independent jammer: knock down the robot, get it onto the trailer, and restrain it. Drive off a safe distance to begin analysis. Return to orbit and/or land on an asteroid if necessary to safely take apart and research all the components; also, if the self-destruct is accidentally triggered, you don't want it aboard your primary ship.
- Looking for code injection: a buffer overflow somewhere lets you inject your own code.
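The fuzzing item above can start as crudely as the loop below; transmit() is a placeholder for whatever radio link actually gets built, since nothing about the real interface is known:

    import random

    def transmit(payload: bytes) -> bytes:
        # Placeholder: in the field this would key the transmitter and
        # record whatever the nearest machine sends back.
        return b""

    BASELINE = b""                      # the "nothing unusual" response
    for trial in range(10_000):
        payload = random.randbytes(random.randint(1, 512))
        response = transmit(payload)
        if response != BASELINE:        # something reacted or broke: log it
            print(trial, payload[:16].hex(), response[:16].hex())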
So you have to keep analyzing the collected data.
Even the most advanced machines have exploitable bugs; it's just a matter of finding them and weaponizing them.
If you can decode it, you're home free.
You may have to make it a gradual process, where first you wipe small, insignificant parts, then more and more, until the machines stop functioning (a sketch follows the list below). Then:
- Hack the entire army over the airwaves.
- Wipe the existing OS.
- They're all dormant.
- Go collect the thousands of robots at your leisure.
- Make sure to remove the weapons in case of accidental reactivation.
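The gradual wipe might be staged like this; the region names and the remote_write() primitive are hypothetical, standing in for whatever foothold the fuzzing and code injection actually yield:

    # Overwrite progressively more critical regions, least critical first,
    # so a failed write doesn't trip the self-destruct before the OS is gone.
    REGIONS = ["telemetry_log", "nav_cache", "targeting_tables", "boot_firmware"]

    def remote_write(region: str, data: bytes) -> bool:
        # Placeholder for the injection primitive gained earlier.
        print(f"wiping {region} ({len(data)} bytes)")
        return True

    def gradual_wipe(region_sizes: dict[str, int]) -> None:
        for region in REGIONS:
            remote_write(region, b"\x00" * region_sizes[region])

    gradual_wipe({r: 4096 for r in REGIONS})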
Eventually, given enough time and effort, you will be able to decode the whole OS.
$endgroup$
answered 1 hour ago
cybernard
2,03836
add a comment |
$begingroup$
What's preventing them from going to the battlefield and retrieving the remains of the destroyed machines?
$endgroup$
– Rekesoft
19 hours ago
$begingroup$
@hidefromkgb any inspiration drawn from Horizon Zero Dawn?
$endgroup$
– dot_Sp0T
19 hours ago
$begingroup$
Are humans considered a faction they need to destroy? And if the only humans present are in orbit, do they consider humanity wiped out, thereby carrying a useless, outdated definition of humans?
$endgroup$
– SZCZERZO KŁY
19 hours ago
$begingroup$
@Rekesoft updated the question.
$endgroup$
– hidefromkgb
19 hours ago
$begingroup$
@Rekesoft you simply have a program that looks for specific patterns; if they emerge, you self-destruct. Human cells aren't self-aware either, yet individually they are capable of exactly this kind of programming against cancer.
$endgroup$
– Demigan
18 hours ago