Channel: Tentacolor » The Best of Tentacolor

The Plug with a Hole in It


Gwyneth Llewelyn recently offered a proposal to try to plug “the analogue hole” that makes content theft inevitable. Her proposal drew a lot of criticism, particularly from open source developers, and she has since withdrawn it.

I’m glad to read that she has; I was among those with objections to the proposal. But I’m disappointed by her reaction to the criticism she received:

The current community of developers — and by that I mean non-LL developers — is absolutely not interested in implementing any sort of content protection schemes.

… Their argument is that ultimately any measures taken to implement “trusted clients” that connect to LL’s grid will always be defeated since it’s too easy to create a “fake” trusted client. And that the trouble to go the way of trusted clients will, well, “stifle development” by making it harder, and, ultimately, the gain is poor compared to the hassle of going through a certification procedure.

I won’t fight that argument, since it’s discussing ideologies, not really security. Either the development is made by security-conscious developers, or by people who prefer that content ought to be copied anyway (since you’ll never be able to protect it), and they claim that the focus should be on making development easier, not worrying about how easy content is copied or not.

… “Technicalities” are just a way to cover their ideology: ultimately, they’re strong believers that content (and that includes development efforts to make Second Life better) ought to be free.

Despite what Gwyn suggests, one can object to a specific content protection scheme without being an ideological extremist who believes that everything should be free. Yes, there are individuals who take that viewpoint. Many of them are quite vocal, and some are rather arrogant and obnoxious. (I am of the opinion that this latter kind ought to be swatted hard over the head with a rolled-up newspaper. Repeatedly.)

But to imply that anyone opposing her proposal must be some kind of anticommercial tekkie-hippie is fallacious and juvenile, and just as dismissive as the rudest comments she received. I must admit that I expected better from Gwyn.

Now then, let me explain my opposition and criticism of the proposal. (This is not criticism of Gwyn as a person, nor of any of her other ideas besides this particular proposal.)

While I do appreciate and respect the choice to make one’s own efforts open and free, I do not believe everything should be forced to be free, and I did not oppose the proposal based on my views on that topic. I opposed it because I see three major flaws in the proposed system, two of them purely security-related:

  1. the certificates could be easily forged, which defeats the purpose of having them at all
  2. an effective certification system would put an extraordinary burden on developers
  3. the system does not address the most commonly exploited methods of content theft

I’ll expand on these points so that there can be no confusion about why I objected and still object to such a system. (I’ll give fair warning, though, that this is a rather long and probably dull post by most standards.)

Firstly, as others have said: where there is a certificate, there is a way to forge a certificate. Even a certificate embedded in the executable binary code can be extracted, and an uncertified client created which fools the server into believing it is a certified one.
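The core of the problem can be shown in a few lines. This is a toy sketch, not LL’s actual protocol (which the proposal never specified): if the “certificate” is a static secret that the client presents, the server can only compare bytes, and any program that has obtained those bytes passes the same check.

```python
# Hypothetical illustration: a static credential embedded in a client binary.
# The names and check here are invented for the sketch.

EMBEDDED_CERT = b"official-viewer-cert-v1"  # shipped inside the official binary


def server_accepts(presented_cert: bytes) -> bool:
    # The server sees only the bytes presented; it cannot tell which
    # program sent them, or whether that program was ever certified.
    return presented_cert == EMBEDDED_CERT


# The official viewer presents its embedded certificate and is accepted...
assert server_accepts(EMBEDDED_CERT)

# ...but so is any uncertified client that extracted the same bytes
# from the official binary (with a hex editor, a debugger, etc.).
extracted_by_attacker = bytes(EMBEDDED_CERT)
assert server_accepts(extracted_by_attacker)
```

Schemes like TLS avoid this particular trap by proving possession of a private key rather than revealing a shared secret, but a private key embedded in a distributed binary can be extracted just the same.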

And the target for such certificate extraction would not be the open source developers with their little custom viewers; it would be Linden Lab’s own released clients. Why? Because even after LL found out that someone had extracted the certificate from one of their official releases, they could not do anything about it.

Would they block all viewers using that certificate? Doing so would block all users who are using LL’s own viewer! Users would be forced to download a new version every day as LL struggled to keep ahead of the malicious users extracting the new certificates. (Plus, they would have to use a different method of embedding the certificate for every release, since the old methods would have been figured out.)

Unless LL is willing to try to keep that up, the certificate program would cease to be effective at distinguishing authorized viewers from unauthorized ones.

Secondly, let me explain the burden of such a certification process on open source developers. It is not a matter of developers complaining, “Waah, it would take me 10 more minutes to implement my code! I’m going to go cry in a corner now.” A certification process as Gwyn described would, if effective, put a near-total damper on many of the most important areas of development of the viewer code.

Suppose that you are trying to fix a memory leak, like Nicholaz used to do. In order to test whether you have fixed it, you would need to run the viewer and travel around SL to make it load lots of content into memory. However, because of this certification program, you must have a certificate embedded in your viewer in order to see such content, and thus to test the fix. (I am assuming here the embedded certificate scenario, because keeping the certificate as a separate file would render the system extraordinarily easy to spoof.)

Every time you recompiled your code, you would have to send your source code and compiled viewer off to whatever company does the certification and wait for them to look it over and embed the certificate in it and send it back. Such a process would take at least several days at the start (after all, they have to inspect your source code to make sure you haven’t added something nasty!). That’s several days of waiting between the time you write the code, and the time you can check whether it worked. If you find out that it didn’t work, you will have to wait another several days for them to certify your next attempt.

Now, I said it would take several days at the start. If the company could not keep up with the number of developers requesting certification for their little test versions, then the delay would grow longer and longer. You would soon be waiting a week, two weeks, a month for them to process the certified binary — if you (or they) hadn’t given up completely by then.

A month between testing bug fixes? I can barely even recall my approach to the problem after two days! No developer or other creative person would volunteer to work in such a stifling environment.

Now, I admit, not every bug fix would require a certified viewer to test with. But many of the worst, most disliked kinds of bugs would. I don’t presume to speak on Nicholaz’s behalf, but I doubt his work would have been possible if such a certification process had been required.

Fortunately, though, the weakness of the system means that many developers would soon begin to use forged certificates to circumvent the system and continue their testing work. Unfortunately, such circumvention would, more likely than not, be illegal in the United States due to the DMCA, and in any other countries with similar legislation. Some developers would be willing to take that risk for the sake of improving the viewer, but many would not.

Finally, even if we set aside the other issues, the proposal does not address the most common methods of content theft. Even if the process stopped uncertified viewers from being able to see the protected content, the certified viewers still have gaping holes in them.

I will set aside the issue of GL ripping, something that cannot possibly be addressed by Linden Lab, and instead focus on the insecurity of the cache and the data packets that are sent by the sim.

The viewer as it is now does not employ any real encryption of either the cache or (I assume, though I may be wrong) the data going to and from the server. Textures are easily extracted from the cache; prim shapes are (I hear) also cached, and thus can be extracted. Even if they were not cached, textures, prim shapes, avatar shapes, animations, and more could be extracted from the data packets being sent by the sim to your computer.
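To make the point concrete: if cached textures sit on disk unencrypted, finding them takes nothing more than scanning for a known file signature. The cache layout below is invented for illustration (I am not assuming SL’s real cache format), but SL textures are JPEG 2000 images, and that codestream signature is standard.

```python
# Hypothetical sketch: scanning an unencrypted cache for texture data.
# The cache contents here are made up; only the signature is real.

JP2_MAGIC = b"\xff\x4f\xff\x51"  # JPEG 2000 codestream signature (SOC + SIZ)


def find_textures(cache_bytes: bytes) -> list[int]:
    """Return the offset of every texture signature found in the cache."""
    offsets, start = [], 0
    while (i := cache_bytes.find(JP2_MAGIC, start)) != -1:
        offsets.append(i)
        start = i + 1
    return offsets


# A pretend cache file with two unencrypted textures buried in it:
fake_cache = b"junk" + JP2_MAGIC + b"texture-data" + b"more" + JP2_MAGIC + b"x"
print(find_textures(fake_cache))  # prints [4, 24]: the textures are right there
```

No certification scheme on the viewer changes this; the data is already on the user’s disk in the clear.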

That is the current state of affairs. Well, what if Linden Lab did start to encrypt data packets and the cache? In such a case, the viewer must have programmed into it the decryption method and key (or else a method of receiving or generating the key). But because the viewer code is being released as open source, Linden Lab would face the difficult choice of whether to release or to withhold that piece of code. Either choice would have significant consequences.

If Linden Lab chose to release the code for the decryption and keying methods, the encryption scheme is defeated. By studying the code, a reasonably proficient content thief could figure out how to decrypt the cache and incoming data packets, and thus gain access to the protected content, even while running a certified viewer. For a while, the barrier to content theft would be somewhat higher than it is now, but eventually some malicious user would release a tool to copy content without needing to understand the concepts behind it. We would then be back to the state we are in today, but with an ineffective encryption scheme added on top.
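The dilemma fits in a few lines. XOR stands in below for whatever real cipher LL might pick (the choice of algorithm is not the point); what matters is that the key and routine are published in the open-source viewer, so they work equally well outside it.

```python
# Minimal sketch, assuming a hypothetical scheme where the decryption key
# ships in the open-source viewer code. XOR is a stand-in cipher, not a
# claim about what LL would actually use.

VIEWER_KEY = 0x5A  # published for all to see in the viewer's source


def decrypt(packet: bytes, key: int = VIEWER_KEY) -> bytes:
    # XOR is its own inverse, so this same routine also "encrypts".
    return bytes(b ^ key for b in packet)


# A protected packet as it would appear on the wire...
encrypted_packet = decrypt(b"protected prim data")

# ...is readable by anyone who has read the viewer source, certified or not.
print(decrypt(encrypted_packet))  # prints b'protected prim data'
```

This is the general rule with DRM: the client must be able to decrypt the content to display it, so a sufficiently determined user of that client can always recover the plaintext.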

On the other hand, if Linden Lab chose to withhold the decryption code, open source viewers would be locked out. Or at least, they would not have access to the protected content. So in that respect, yes, a combination of certifying only LL’s viewers, encrypting the cache and data packets, and keeping the viewer partially closed source might be effective at protecting content, for a while. I’m not a cryptography expert, so I can’t even begin to guess how long it would take to break the encryption. Perhaps long enough that LL could change encryption keys and certificates (and force everyone to upgrade) before the old ones are found out.

But doing so would impose the burdens on developers I described earlier. And worse, as Gwyn so rightly points out, it would offend and alienate most of them — even the ones who aren’t obsessed with freeing everything. I’d expect it to light a fire under OpenSim, too, as developers shifted their focuses to more open, less burdensome systems.

Would Linden Lab be willing to undo the Dia de Liberation, cripple open source development, and turn some of their most effective supporters into resources for the competition, in exchange for the appearance of strong content protection?

Maybe. I hope not, but it’s hard to say anymore.

So is the situation hopeless? Can content theft be stopped only by sacrificing open source development? Can content theft even be stopped?

Certainly, it’s not possible to completely stop 100% of content theft. Gwyn recognizes that, as does just about anybody familiar with the problem. The idea is, instead, to raise the barrier for content theft; to make it more difficult.

But even that is a matter of debate. If you raise the barrier, eventually someone will figure out how to get over it, and they or someone else will create a tool to let other people get over it, even people who have no idea how the tool works or what it’s circumventing. You can raise the barrier again and again, but that only buys you time. And often, that temporary gain in protection comes at the cost of a permanent loss elsewhere — creating extra hassles for users, stifling development, etc.

Can content theft be prevented? Probably not. I wish it could; I’d love to have a punchy ending here, to reveal that if you do X, Y, and Z, all content theft will be stopped. But it doesn’t work that way.

More likely, the key to dealing with content theft is not prevention, but rather detection and enforcement, neither of which is currently being carried out in a reliable, objective, or effective manner.

Naturally, fixing them is easier said than done. But that’s how it is.

