Another month, another show…
Come and visit us at West Coast Labs’ booth J75 at InfoSec Europe, between 24th and 26th April, at Earls Court in London.
Happy New Year.
Traditionally, early January brings a raft of predictions from the security industry: what is likely to happen over the next 12 months in terms of emerging threats, where consumer and corporate focus will fall in terms of what people do with their technology, and a general amount of crystal ball gazing.
Rather than make predictions that are doubtless going to come back in 12 months’ time and bite us somewhere unpleasant because they haven’t come true, it is perhaps better to focus on a story that has come out early in the year from Japan and is detailed here: http://www.yomiuri.co.jp/dy/national/T120102002799.htm – the headline being that the Japanese Government has entered into a relationship with Fujitsu to create a “good virus”.
Although this has been widely reported through several channels, there appears to be only one main source for the story – the site above – and journalists are normally shy of running stories without corroboration from an independent source. Leaving that aside, the story raises a number of questions.
There is, of course, a long-standing debate going back many years as to whether there is such a thing as a good virus, and if so how it is defined, for example here: http://www.people.frisk-software.com/~bontchev/papers/goodvir.html. We’ll not get into that here, but looking at the story it seems to be rather light on technical details. Perhaps this is understandable, given that the parties involved would not want the financial investment to go to waste, but there are a few things that can be inferred – note that this is supposition on our part and should not be taken as any insider knowledge!
Firstly, the story reports that the code (let’s call it code, as calling it a weapon gives it some sort of legitimacy, an issue that we’ll get onto momentarily) is capable of identifying both the sources of the attacks and the intermediary hosts used, and indeed states later in the article that this is used for looking at DDoS attacks. Once a host is identified, it would seem that the code then copies itself to the infected host before running operations to stop the host from taking part in the attack – whether this is by disabling a particular executable or by terminating the host’s internet connection isn’t specified.
The important part of this is that the code copies itself to the hosts. That means exploiting a vulnerability – presumably the same one that the original malware exploited to get itself onto the box in the first place, or the command and control channel used by the malware itself. Both operating system patches and anti-malware products aim to ensure that such vulnerabilities are no longer exploitable. So where people have good patching procedures (and let’s be honest, don’t we all? Erm…) the vulnerability may no longer exist, and where anti-malware signature updates are applied and scans are run regularly, the vulnerability may already have been flagged or the malware may already have been removed.
This leads to a situation where the code could be trying to get onto a machine that has already been cleaned up or, at the very least, has had the vulnerability patched – and that doesn’t even touch on whether the malware itself has any self-protection mechanisms written into it.
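To make the propagation problem above concrete, here is a toy model in Python. This is purely our supposition – the article gives no technical details – but it captures the catch-22: the “good” code can only reach a host through the same vulnerability the original malware used, so any host that has since been patched but is still infected is out of its reach.

```python
# Toy model (supposition only) of the remediation code's reach:
# it can only copy itself to hosts where the original exploit still works.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    infected: bool      # still part of the botnet?
    vulnerable: bool    # does the original exploit still work?

def remediate(hosts):
    """Attempt to clean each host; return names the code could not reach."""
    unreachable = []
    for h in hosts:
        if h.infected and h.vulnerable:
            h.infected = False          # code copies itself in, disables the bot
        elif h.infected:
            unreachable.append(h.name)  # patched after infection: out of reach
    return unreachable

fleet = [
    Host("a", infected=True,  vulnerable=True),   # reachable, gets cleaned
    Host("b", infected=True,  vulnerable=False),  # patched but still infected
    Host("c", infected=False, vulnerable=True),   # already cleaned by AV
]
print(remediate(fleet))  # prints ['b']
```

Host “b” is exactly the awkward case: patched against the exploit (so the “good virus” cannot get in) yet still running the bot, perhaps because the infection predated the patch.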
Then there arises the question as to whether the methods used by the code will themselves be determined as malicious and stopped by anti-malware vendors – the general gut feeling around WCL is that it probably will be – after all, it is a “virus”.
The testing has taken place in a “closed environment”. No details are given, but let’s assume it is mostly Windows based. The first questions to ask are these: Was the environment homogeneous (i.e. all the same type of operating system) or heterogeneous (different variants of Windows, different patch levels on each)? How large was the environment (number of hosts), in order to simulate a large-scale DDoS? Were they real hosts or virtual hosts? How many botnet variants were used? How adaptable is the code to new types of code used in these attacks? And how adaptable is it to non-botnet malware?
In order to get a seriously large replication of a DDoS attack, obviously none of the major industry tools for traffic creation can be used, as they don’t have “real hosts” (including virtual) for the code to go back to and “clean up”. This implies that it works on a small scale and, for something as specific as the operation of this code and the type of malware that it appears to be targeting, there really is no substitute for seeing how it works in the real world.
Once we get past the technical issues, there are other, more holistic issues to consider – will AV companies be subjected to pressure by the Japanese government to exclude detection for this code? That approach has normally failed when tried previously, and authorities using “viruses” made the news in Germany in October last year, when Federal police admitted to using code to monitor Skype (http://www.theregister.co.uk/2011/10/12/bundestrojaner/).
What are the legal implications? After all, the intention seems to be to put a piece of code onto a user’s machine in the same way as malware, without asking the user first. Given that there is no single legal jurisdiction over the internet, as it is a global network, there are potential issues if this code gets onto machines in, for example, the US, Russia, China, any of the EU countries, and so on. Fujitsu and the Japanese government could find themselves at the centre of a lot of litigation very quickly. Surely, when this project was mooted in 2008, somebody in either Fujitsu or the government should have considered that there might be legal implications and started preparing for them then, rather than trying to sort it out after the code is written and ready to be released – any delays here (from a purely technical point of view) mean that the code will be outdated and potentially useless by the time it is actually released.
This will be an interesting time as the lawmakers try to sort out whether they can use the code and then, if they can, what subsequently happens with the AV industry and whether the code itself can make any inroads at all into reducing the number of DDoS attacks. Perhaps the one prediction we should make is that we’ll be watching this story with interest.
Here’s a gem from a book written in 1981, which predicts that the only crime in the future would be computer crime.
Obviously crime is additive, not subtractive.
Reading this, I started out thinking of it as the usual sort of “hey, where’s my flying car!” future-gazing. But by the end, their description of the current state of malware was not too far off the mark.
Except that “cassette” bit. That made me giggle.
Because clearly storage media wasn’t going to advance past (very easily destroyed) 1970s technology.
Has your life been too quiet and peaceful since the World Cup ended? Have you found yourself yearning for the dulcet tones of the vuvuzela?
Instead, how about a hefty dose of irritation to underline the incredible quantity of information that gets sent to Google as you surf the web?
I couldn’t help but giggle when I saw the video attached to this article on Mashable. Oh, the horror!! The honking!! Much like the “Autotune Double Rainbow Remix” it combines two totally heinous things in such a way that I can’t help but laugh hysterically.
Nothing is really suggested to help you avoid the information seepage, so maybe the video itself is enough of a wake-up call for people that the alarm would be unnecessary. But if you’d like to illustrate the point to any disbelievers in your house or office, maybe you could do an extended demonstration. I’d suggest you bring earplugs for yourself though.