Some thoughts on p2p security
Apr. 20th, 2006 11:23 am
Second Life, the sprawling hack that it is, has me thinking about multiuser systems lately, particularly how to do them better. One really cool thing would be to have peer-to-peer spaces. For a small group, that would be a huge improvement over the $200/mo the Lindens want to maintain an island. Just connect to your local tracker, à la BitTorrent, and have all the room your slowest clients can cope with. It goes away when the last person leaves, or someone could save to disk. What could be simpler?
Well, assuming you tackled the voluntary coherence issues, there's still no authority involved. It's peer-to-peer, so how could there be? Kinda confusing. Let me outline the situation...
Let's say Alice, Bob and Carol are each hosting a "secure" virtual machine akin to Smalltalk or Java. The machine keeps track of objects, partitioned into distinct spaces, and work gets done when the objects call methods (Java-speak) or send messages (Smalltalk) to each other. Conceptually, all data access goes through messages to the object's own members. You can create and pass your own objects, but not invent pointers to them or get at their members later. Objects reside in spaces, which can be considered nodes on a network.
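(To pin the idea down, here's a tiny Python sketch of what I mean by objects and spaces. Every name in it, `Obj`, `Space`, `Avatar`, `send`, is something I just made up for illustration, not part of any real Smalltalk or Java VM.)

```python
# A minimal, illustrative sketch of the object/space model.

class Obj:
    """Object state is private; the only way in is to send it a message."""
    def __init__(self, **members):
        self._members = dict(members)   # no outside code can forge a pointer to this

    def send(self, selector, *args):
        return getattr(self, selector)(*args)

class Avatar(Obj):
    def whoami(self):
        return self._members["name"]

    def bump(self, other):
        # interactions go through messages, never by reaching into other's members
        return f"{self.send('whoami')} bumped into {other.send('whoami')}"

class Space:
    """A partition of objects; conceptually one node on the network."""
    def __init__(self, name):
        self.name = name
        self.objects = []

    def create(self, cls, **members):
        obj = cls(**members)
        self.objects.append(obj)
        return obj

# Sa, Sb, Sc would be local spaces; Sw is the shared world everyone copies.
Sw = Space("Sw")
alice = Sw.create(Avatar, name="Alice")
carol = Sw.create(Avatar, name="Carol")
print(alice.send("bump", carol))        # -> "Alice bumped into Carol"
```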
Alice, Bob, and Carol each have a local space (Sa, Sb, Sc), dedicated to giving them a shiny GUI and a way to manipulate the world space they all share, Sw.
It's possible to communicate between any two spaces on the same machine. For example, messages may be sent between Sa and Sw, since Alice has a copy of Sw. When this happens, Alice has to arrange for Bob and Carol to get a copy of the message or the response (depending on direction). So cross-space messaging is slower.
For simplicity's sake I'll assume messages may not go directly from, e.g., Sa to Sc. Those are private interactions, easier to secure, and Bob (for example) doesn't care about their side-effects. They will be handled if/when they propagate back to Sw.
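(Roughly how I picture the routing rule, sketched in Python. The `PEERS` table and the `replicate` callback are stand-ins I invented; a real protocol would obviously look different.)

```python
# Illustrative routing rule: anything between a local space and Sw must be
# replicated to every other peer; direct local-to-local traffic (Sa -> Sc)
# is refused in this simplified model.

WORLD = "Sw"
PEERS = {"Alice": "Sa", "Bob": "Sb", "Carol": "Sc"}

def route(sender_space, target_space, message, replicate):
    if WORLD not in (sender_space, target_space):
        raise ValueError("no direct space-to-space messages in this model")
    # The originator owes everyone else a copy of the message (or its response),
    # so every peer's copy of Sw sees the same cross-space traffic.
    for owner, local_space in PEERS.items():
        if local_space not in (sender_space, target_space):
            replicate(owner, message)
    return message

# e.g. Alice's GUI pokes the world; Bob and Carol each get a copy.
route("Sa", WORLD, {"selector": "bump", "target": "beachball"},
      replicate=lambda owner, msg: print("forward to", owner, msg))
```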
Now suppose our ever-enterprising Carol hacks apart her VM. The first thing she does is add an instruction that lets her see all the objects and their data, particularly those in Sw. Then she uses that to inspect Bob's bank account, and is sorely disappointed because Bob is fresh out of funds. So she builds a camera, situated in Sc, that lets her peek right into Alice and Bob's skybox. (Which, we suppose, is what he spent the money on; it is peppered with luxurious textures and neon poseballs. She makes a note to copy her favorites.)
I think it's very important to point out here that, while nobody can violate security inside the VM (e.g. forge object pointers), with contemporary hardware it will always be possible to hack the OS, the machine, the security chips, etc. etc... and therefore data extraction is a problem. SL does in fact have the same issues, but LL has kept quiet about it. Probably because people never seem to understand this, and love to get upset over it... but really, it's a big fat social problem IMHO, considering how little current tech can do against a very determined Carol.
The solution, I think, is to use a whitelist (challenge-response) and not to keep your skybox, or your banking, in public space (plain sight). Just RL common sense, there. But I'm really getting off track here. The problem I wanted to talk about, and solve, is something different and much worse than peeping toms.
So while I was rambling, Carol has advanced her plans for Sw domination. Now she wants to abuse Alice and Bob's blissfully consistent view of the world. This requires no cracking whatsoever, just abuse of the protocol. All she needs to do is send two different messages (or responses): one version to Alice's copy of Sw and another to Bob's. The two copies will then slowly desync as the code takes different paths on each side. Obviously this is going to break, but to fix it, we need to know how.
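(Just to show how little effort that takes, here it is as a sketch; `send_to` is a made-up stand-in for whatever transport reaches each peer's copy of Sw.)

```python
# Carol doesn't crack anything; she just equivocates, telling each peer's copy
# of Sw a different story about the same event.
def carol_equivocates(send_to):
    send_to("Alice", {"obj": "beachball", "event": "spawned at (1, 2, 3)"})
    send_to("Bob",   {"obj": "beachball", "event": "nothing happened"})

carol_equivocates(lambda peer, msg: print("to", peer, ":", msg))
```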
Somewhere along the line, Alice's copy of Sw is going to talk to Sa (e.g. Carol's beach ball bumps Alice's avatar). If things were consistent, Alice would handle it locally and send the results to Bob, who is waiting and will acknowledge them. But they aren't consistent, and suppose Bob's copy of Sw never got Carol's beach ball. He won't be expecting it, he's going to go LOLWTF!, and he'll reject Alice's results.
I can't think of a general way to resync, short of rollback, but Alice certainly knows Bob doesn't agree with her. In fact, if they could compare their cross-space message histories, they would find Carol's discrepancy, and blame her.
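(Something like this, say; the log format of (sender, payload) pairs is just an assumption for the sketch.)

```python
def first_divergence(log_a, log_b):
    """Return (index, entry_a, entry_b) at the first disagreement, or None."""
    for i, (a, b) in enumerate(zip(log_a, log_b)):
        if a != b:
            return i, a, b
    if len(log_a) != len(log_b):
        i = min(len(log_a), len(log_b))
        return i, (log_a + [None])[i], (log_b + [None])[i]
    return None

alice_log = [("Carol", "beachball spawned"), ("Carol", "beachball bumped Alice")]
bob_log   = [("Carol", "beachball vanished")]
print(first_divergence(alice_log, bob_log))
# -> (0, ('Carol', 'beachball spawned'), ('Carol', 'beachball vanished'))
# Both sides of the mismatch name Carol as the sender, so she gets the blame.
```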
Which points to a partial solution. What if we could make this situation arbitrarily likely (e.g. extremely likely) to happen with the smallest perturbation? Then we could roll back an arbitrarily short time to the past (requiring arbitrarily little RAM to store the difference), and nobody loses much work. It would also force Carol to run the exact same data and code, or else be found out, even if she runs a second cracked copy for herself.
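(A rough sketch of the checkpoint/rollback bookkeeping I have in mind. The snapshot contents and the "agreed tick" are placeholders of my own, nothing more.)

```python
from collections import deque

class History:
    """Keep a short ring of world snapshots so we can rewind a little way,
    back to the last state everyone still agreed on."""
    def __init__(self, depth=8):
        self.snapshots = deque(maxlen=depth)   # old snapshots fall off automatically

    def checkpoint(self, tick, world_state):
        self.snapshots.append((tick, dict(world_state)))

    def rollback_to(self, agreed_tick):
        """Throw away everything after the last tick all peers agreed on."""
        while self.snapshots and self.snapshots[-1][0] > agreed_tick:
            self.snapshots.pop()
        return self.snapshots[-1] if self.snapshots else None

h = History()
for t in range(10):
    h.checkpoint(t, {"beachball": ("pos", t)})
print(h.rollback_to(agreed_tick=7))    # -> (7, {'beachball': ('pos', 7)})
```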
Essentially, what I'm proposing is the Halting Problem put to practical use. Message consistency proves, with high probability, that you ran the same code.
But how strong is that? Another way to see it is as a hash, except over code execution instead of data. And the best guarantee for a hash is when every bit of state contributes with equal probability. Hmmm, now what part of the VM constantly touches most of that state...
Well, I forgot to mention something Smalltalk and Java both have: a garbage collector. Instead of expecting the program to clean up, they clean up after the program. To do this they recursively scan the object data for pointers, starting from the roots, and discard whatever they can't reach. This means that at some point they've touched every live object on the heap.
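(The tracing step, sketched. A real collector would go on to free everything it didn't reach; the point for us is just that this walk visits all the live state. The `.refs` attribute is a stand-in for "the pointers an object holds".)

```python
# Rough sketch of the tracing step: start from the roots, follow pointers, and
# every live object gets visited along the way.
def trace(roots):
    live, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if id(obj) in live:
            continue
        live.add(id(obj))                      # this object has now been touched
        stack.extend(getattr(obj, "refs", ())) # follow its pointers
    return live                                # anything not reached is garbage
```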
So let's say we modify the collector, as it's scanning, to run hashes on some choice object trees. Presumably that's deterministic between peers. So a first hack at a protocol (not too sure about the details yet) might go like this, sort of a challenge-response (there's a rough code sketch after the list):
1) Alice tells Bob she thinks they've diverged.
2) Bob considers this a bit, and sends N bits of his collected hash, salted with some unique value tied to Alice, so she can verify it, but can't fake a reply to a similar request. Bob also tells her what he hashed, so she can do the same.
3) She takes her hash, salts it with Bob's value and returns it to him.
4) Now Alice and Bob can independently confirm Carol's hijinks, with some high probability.
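(Here's a very rough cut of steps 1-4 in Python. Big assumptions: each side already has a deterministic GC-time hash of the agreed object trees (placeholder bytes below), and `salted_reply` is a helper I invented, not anything settled.)

```python
import hashlib
import hmac
import os

def salted_reply(gc_hash: bytes, salt: bytes, n_bits: int = 64) -> bytes:
    """Salted, truncated digest: the peer can verify it, but it can't be
    replayed as an answer to anyone else's challenge."""
    return hmac.new(salt, gc_hash, hashlib.sha256).digest()[: n_bits // 8]

# Placeholders for the GC-time hashes of Sw as each peer sees it.
alice_gc_hash = hashlib.sha256(b"Sw as Alice sees it").digest()
bob_gc_hash   = hashlib.sha256(b"Sw as Bob sees it").digest()

# 1) Alice suspects divergence and challenges Bob.
# 2) Bob picks a value tied to Alice, answers with his salted hash, and says
#    which object trees he hashed.
salt_for_alice = os.urandom(16)
bob_reply = salted_reply(bob_gc_hash, salt_for_alice)
# 3) Alice hashes the same trees, salts with Bob's value, and sends hers back.
alice_reply = salted_reply(alice_gc_hash, salt_for_alice)
# 4) If the two don't match, their copies of Sw (very probably) differ.
print("diverged!" if alice_reply != bob_reply else "still in sync")
```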
This would probably be integrated into some sort of "heartbeat" as a way to keep everyone sync'd within some bounded time. Note that it's also symmetrical. If Alice were to deliberately screw up her hash, Bob might think -she- strayed, and vice versa. To this point, any "accusation" is private. What to do if several peers disagree? Start voting people off the island? (Heheh, sorry. ^..^;)
The thing with voting is that it's only good if you play with several friends, or at least neutrals. But you don't want to play with cheaters anyway, I presume. At the least, you would discover something was up and take your toys elsewhere. The real problem is if a large gang snuck up on your friends and, all of a sudden, voted you out. Sort of a DDoS... unless maybe you deliberately trusted your friends? Sticky issue.
Hrrrrmm. It's interesting to think about anyways. Let me know what I forgot. n..n
no subject
Date: 2006-04-21 05:41 pm (UTC)
What I'm trying to figure out is how to do Second Life on a smaller, looser scale, without the servers and sims. E.g. how do you cope with no central authority saying "this is how the world is"? Everyone needs to stay in agreement, but more importantly, it has to be difficult to crack. (Or alternately, easy to catch the cheaters.)
In a way it's a very social issue, since you can't stop people patching their game clients. But maybe you can help the "good" people compare notes, and know when it happens. What I'm discussing is, what's a good, fast way to do this? So you know who's been cheating, and you can decide who your friends really are. ;)
(Of course, as I said, this only covers cheats that change something in the game. If someone wants to spy, they can run on VMWare or Virtual PC, and unfortunately nobody can stop them.)