    • CommentTimeMay 11th 2010 edited
    Lots of things in the news lately, including more earnest speculation that serious incidents such as oil rig explosions or stock market crashes could be engineered by "cyberwarfare" (Richard Clarke, the '90s is calling) or information security warfare more broadly.

    To start, a classic:

    The Six Dumbest Ideas in Computer Security
    If you or anyone you know has been malwared by a drive-by browser attack lately, the first point, "Default Permit," tells you why:
    Another place where "Default Permit" crops up is in how we typically approach code execution on our systems. The default is to permit anything on your machine to execute if you click on it, unless its execution is denied by something like an antivirus program or a spyware blocker. If you think about that for a few seconds, you'll realize what a dumb idea that is. On my computer here I run about 15 different applications on a regular basis. There are probably another 20 or 30 installed that I use every couple of months or so. I still don't understand why operating systems are so dumb that they let any old virus or piece of spyware execute without even asking me. That's "Default Permit."

    However, the implications, when extended more broadly, can be disturbing. Should the Internet as a whole be "Default Deny"? If only 'licensed' users and applications could access the Internet, abuse would decrease.
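Ranum's point about execution policy can be sketched in a few lines. This is a minimal illustration, not any real OS mechanism; the program names in the allowlist are hypothetical:

```python
# "Default Deny": nothing runs unless explicitly allowed -- the posture
# Ranum advocates. "Default Permit": anything runs unless explicitly
# blocked -- the posture he calls dumb. Allowlist contents are made up.

ALLOWLIST = {"firefox", "thunderbird", "libreoffice"}  # hypothetical

def may_execute(program, allowlist=ALLOWLIST):
    """Default Deny: permit only programs explicitly on the allowlist."""
    return program in allowlist

def may_execute_default_permit(program, blocklist=frozenset()):
    """Default Permit: run anything that is not explicitly blocked."""
    return program not in blocklist
```

The asymmetry is the whole argument: under Default Deny a brand-new piece of malware is refused automatically, while under Default Permit it runs until some blocklist (antivirus signature, spyware database) catches up with it.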
    • CommentAuthorJiveKitty
    • CommentTimeMay 12th 2010
    There's a point where the cost of doing something like that would outweigh the benefit.
    • CommentTimeMay 12th 2010
    I read the title as "Information Society Dump Thread", and I got excited.

    And yes, allowing a program to just go ahead and do whatever is lame.
    • CommentAuthorFan
    • CommentTimeMay 12th 2010
    > Should the Internet as a whole be "Default Deny"?

    Currently, software developers don't need a license to develop software.

    There is such a thing as (cryptographically) "signing" applications, such that their author can be identified (and when, as a developer, you get your certificate from the signing authority, you promise, fwiw, that you won't use it to sign malware); a web browser will tell/warn you if you're about to run an unsigned executable.

    > If only 'licensed' users and applications accessed the Internet, then abuse would decrease.

    If only 'licensed' humans accessed the planet, then abuse would decrease?

    You don't have to connect your computer to the internet. You don't have to connect it without a firewall. You don't have to download and then run dubious executables.
    • CommentTimeMay 12th 2010 edited
    You don't have to connect your computer to the internet. You don't have to connect it without a firewall. You don't have to download and then run dubious executables.
    Furthermore, you don't have to click on a link with a suspicious address from someone you've never heard of, you don't have to enable html in your email, and so on.

    The one virus I've had in the past ten years was thanks to a torrented, pirated game, and as such, is my own stupid fault. Past that, I've never had Windows Firewall up, I've never even had antivirus. Because I don't open shit I shouldn't.

    Now, this is on a private, home computer, that actually has no real information on it. If I were actually concerned about the security of my computer, I would definitely do something about it. As it stands, I have nothing of any worth that doesn't get cleaned out when I'm done using it (my browser cache) or stored on my computer to begin with.

    Honestly, the worst breaches of security are things like giving people laptops with tens of thousands of items of other people's personal information or details and just letting them bugger off to Bermuda or whatever so they can telecommute on their vacations, only to have the laptop go missing at some point. But, like the article you mention said, educating people will only go so far. No matter how many times you tell people not to click on a link even if it supposedly comes from someone you know (usually saying something like OMG I'M STUCK IN EUROPE IN A BIZARRE TWIST OF SOCIO-POLITICAL ESPIONAGE), they're still going to fucking do it. They're usually the same people that leave their houses or cars unlocked just because "nothing will happen to them."

    (Yes yes, I'm doing the same thing with my computer, but again, I make the point that the only pertinent information on my computer is audio and graphic projects that have no bearing on anything. I'm not running a bank, here. "Nothing will happen to my car" is quite a bit different from "I don't care if something happens to my car because it's a piece of shit and I honestly hope someone steals it so I'm forced to buy a new one.")
    • CommentAuthorRenThing
    • CommentTimeMay 12th 2010

    No matter how many times you tell people not to click on a link even if it supposedly comes from someone you know (usually saying something like OMG I'M STUCK IN EUROPE IN A BIZARRE TWIST OF SOCIO-POLITICAL ESPIONAGE), they're still going to fucking do it

    I can't tell you how many times my IT department has had to give the lobby computer a thorough scrubbing because the person I share it with continues to insist on downloading those fsking Smilies off of pop-up ads.
    • CommentAuthoricelandbob
    • CommentTimeMay 12th 2010

    I third that. The only times my computer has ever been infected is when I've gone on holiday and people (my younger brother & nephew) have housesat for us. I come back to a frozen, fucked-up PC and all I can get out of them is "urr yeah i went on this website and they, like, said i needed to download this to see the... uh... video". Then I'M the one who gets the dirty looks when I get a mate to help fix it and we find out what they'd been trying to look at. Dirty little gits.....
    • CommentTimeMay 16th 2010 edited
    Couple days old, but a good overview:

    Google Says It Collected Private Data by Mistake

    SAN FRANCISCO — Google said on Friday that for more than three years it had inadvertently collected snippets of private information that people send over unencrypted wireless networks.

    The admission, made in an official blog post by Alan Eustace, Google’s engineering chief, comes a month after regulators in Europe started asking the search giant pointed questions about Street View, the layer of real-world photographs accessible from Google Maps. Regulators wanted to know what data Google collected as its camera-laden cars methodically trolled through neighborhoods, and what Google did with that data.

    Google’s Street View misstep adds to the widespread anxiety about privacy in the digital age and the apparent willingness of Silicon Valley engineers to collect people’s private data without permission.
    • CommentAuthorRenThing
    • CommentTimeMay 16th 2010
    • CommentTimeMay 29th 2010
    I would be curious to know at this point whether the recent Facebook flap has caused anyone to go and review what information of theirs is being leaked by their various website memberships?

    I recently did so, and caught that LinkedIn has also recently changed their stance to make most information public by default.

    As a larger question - do people generally regard "public by default" to be an actual problem? Are you concerned about your personal data for your shopping and whatnot being exposed, or do you just shrug it off if it can't lead directly to identity theft?
    I don't mind "public by default" when I know that's the case going in, like on twitter. I signed up for that with no expectation of privacy for anything I posted on it. With something like Facebook, the expectations at the start were very different than the current state of affairs, and I think that's the difference.
    • CommentAuthorJiveKitty
    • CommentTimeMay 29th 2010
    And the effort required to change settings once in place.
    • CommentTimeJul 29th 2010 edited
    This is some fascinating reading from Bruce Schneier's blog. It is interesting on a number of points:

    1. The folks who actually run the Internet took a significant step recently to reduce the threat of horrific cyberattack caused by weaknesses in the DNS protocol, which resolves domain names into IP addresses.
    2. The security ultimately depends on the good old fashioned method of a bunch of folks having to physically possess something and get together in a room.
    3. The mention of Shamir's Secret Sharing deserves a look.

    This whole thing is really essential background reading for anyone who is delving into any sort of thriller/cyberpunk fiction territory in the modern day without sounding hopelessly out of date - and what makes it tasty and interesting is the secret key to the Internet, secretly divided amongst a cabal of seven people, who must come together again in the event that the Internet needs to be rebooted.

    DNSSEC Root Key Split Among Seven People

    The DNSSEC root key has been divided among seven people:
    Part of ICANN's security scheme is the Domain Name System Security, a security protocol that ensures Web sites are registered and "signed" (this is the security measure built into the Web that ensures when you go to a URL you arrive at a real site and not an identical pirate site). Most major servers are a part of DNSSEC, as it's known, and during a major international attack, the system might sever connections between important servers to contain the damage.
    A minimum of five of the seven keyholders -- one each from Britain, the U.S., Burkina Faso, Trinidad and Tobago, Canada, China, and the Czech Republic -- would have to converge at a U.S. base with their keys to restart the system and connect everything once again.
    That's a secret sharing scheme they're using, most likely Shamir's Secret Sharing.
    We know the names of some of them.
    Paul Kane -- who lives in the Bradford-on-Avon area -- has been chosen to look after one of seven keys, which will 'restart the world wide web' in the event of a catastrophic event.
    Dan Kaminsky is another.
    I don't know how they picked those countries.
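The 5-of-7 arrangement described above can be sketched with a toy Shamir's Secret Sharing implementation: the secret becomes the constant term of a random degree-4 polynomial over a prime field, each keyholder gets one point on the curve, and any five points recover the constant term by Lagrange interpolation at zero. This is a minimal illustration only (the field size and share counts here are arbitrary; real deployments use vetted cryptographic libraries):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the field just needs to exceed the secret

def make_shares(secret, k=5, n=7, prime=PRIME):
    """Split `secret` into n shares, any k of which suffice to recover it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's method
            acc = (acc * x + c) % prime
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret
```

With fewer than five shares the interpolation yields an essentially random field element, which is why four colluding keyholders learn nothing about the key.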
    • CommentTimeAug 3rd 2010 edited
    There have been new developments in location-based quantum encryption. I find the possibility of encryption and security based on one's physical location to be really interesting - one can imagine situations where someone receives an email or other message that can only be decrypted in one's house, or a person needing to travel to a certain 'drop' location in order to get a secure connection for a data dump. Very interesting potential.
    Location-Based Quantum Encryption
    Location-based encryption -- a system by which only a recipient in a specific location can decrypt the message -- fails because location can be spoofed. Now a group of researchers has solved the problem in a quantum cryptography setting:
    The research group has recently shown that if one sends quantum bits -- the quantum equivalent of a bit -- instead of only classical bits, a secure protocol can be obtained such that the location of a device cannot be spoofed. This, in turn, leads to a key-exchange protocol based solely on location.
    The core idea behind the protocol is the "no-cloning" principle of quantum mechanics. By making a device give the responses of random challenges to several verifiers, the protocol ensures that multiple colluding devices cannot falsely prove any location. This is because an adversarial device can either store the quantum state of the challenge or send it to a colluding adversary, but not both.

    Don't expect this in a product anytime soon. Quantum cryptography is mostly theoretical and almost entirely laboratory-only. But as research, it's great stuff. Paper here.

    • CommentAuthorDoc Ocassi
    • CommentTimeAug 3rd 2010
    Going back to the first post, I would argue that the biggest threat on any of these things would be the mistakes that happen when complicated systems malfunction rather than any external influence.

    Focusing on the stock market, there are documented cases where the actions of a system have caused serious consequences (the crash of 1987) and where these markets have had a detrimental impact on the world; the chances of such a crash being caused by an entity that any nation would consider standing up against are pretty unlikely.

    As far as systems that could cause things like oil rig explosions: just because two things are in the news at the same time, there is no need to link them, unless you have something to gain. The strange one is the Stuxnet worm, which attacks Siemens SCADA systems and steals the project files. I haven't looked closely at this, but I have seen no reports of it being connected to any kind of botnet, so it seems to be simply information scraping. BTW, I wouldn't advise anyone to connect a Windows-based SCADA or HMI to the internet, though some business types don't seem to understand the risks ("need to be in the loop" and other bollocks).

    I would guess that there are a lot of ex-IT people floating around at the moment who may be interested in getting a slice of the defence pie in these hard economic times. Not to say there is no threat, but we in the developed world are at as much risk from cyber-terrorists as we are from terrorists; I am more worried about Google.
    • CommentTimeAug 3rd 2010 edited
    ^ Agreed, although one can take it a step further and argue about whether a complex system 'malfunctions' in a true sense - one would want to differentiate between at least three cases: (1) pure breakage, as in The Andromeda Strain; (2) hacking and malware/hostile code; and the truly dangerous situation, (3) emergent complexity - where the system's complexity is actually working as designed, but outside of human controls and in unforeseen ways.

    Just taking the first case as an example, one wonders how long it will be before there are (again, and systematic) attempts to apply product liability law to software malfunction and failure. The case of the Therac-25 is instructive here, as the first recorded example of a software bug that actually killed somebody. Generally speaking, software is nowadays an exception to product liability law, but I don't think it will be long before software companies start to be challenged for gross negligence over systems that are in the critical path of human infrastructure and are proven to be insecure or buggy by design.
    • CommentAuthorDoc Ocassi
    • CommentTimeAug 6th 2010 edited
    I had a look through Ranum's site and, although I sometimes feel he is missing something, there is a lot of good analysis. Have a look at the Rearguard Podcasts, especially #4: The Problem With "Cyberwar", for a fairly comprehensive analysis of your case (2).

    I hadn't heard of the Therac-25, and the case of liability regarding these types of systems can be a problem when there are multiple companies involved in the building of a system.

    With regards to emergent complexity, I do worry about the communication technology we use. Its design and implementation can have a subtle but powerful effect on how we relate to other people, and even on how we perceive the world around us. With use and familiarity come complacency and dependence; we just need to look at how people cling to their mobile phones as if they are the only way to interact with the outside world. This would seem to be the fear that is being played upon whenever any kind of cyber-(insert negative connotation) is invoked. When you end up in a relationship with technology where you are dependent on it but quite unable to affect it, you are in an incredibly weak position, where changes can cause feelings of insecurity and inadequacy.

    I would like to know how everyone else feels: if you have very little real control over these things, how can you really depend on them for security? And, more basically, if this really is your information, how sure are you that it is secure?

    Edited for illiteracy.
    • CommentTimeAug 23rd 2010 edited
    Breaking news from Bruce Schneier's blog. I don't want to make *too* much of this, but this is the first time that malware has been openly attributed by a governmental source as a contributing cause to a disaster.

    It isn't yet appropriate to go screaming, "Spyware can crash airplanes!", but. Schneier adds in a postscript that he's long suspected malware as a contributing cause of the 2003 Northeast American blackout, and I'm inclined to agree after observing the behavior of the Stuxnet virus.

    Something that is too often ignored is the proclivity of malicious folks to *not* act in a directed manner, and also to tend to lose control of their tools, with unforeseen consequences. If a bank robber cannot hit a given bank, they will wait for another day. If a terrorist cannot hit the desired hardened target, they often will simply adjust their plans to move to a softer target, which may seem to have nothing at all to do with the original goal, and yet still achieves the desired result of infrastructure jamming.

    The sky isn't falling as of yet, but there are a large number of people walking around ignoring the pebbles bouncing off the sidewalk.
    The airline's central computer which registered technical problems on planes was infected by Trojans at the time of the fatal crash and this resulted in a failure to raise an alarm over multiple problems with the plane, according to Spanish daily El Pais (report here). The plane took off with flaps and slats retracted, something that should in any case have been picked up by the pilots during pre-flight checks or triggered an internal warning on the plane. Neither happened, with tragic consequences, according to a report by independent crash investigators.

    - Source
    • CommentTimeAug 24th 2010 edited
    The news just keeps getting worse on the Information Security front for consumers. I know we're all tired of hearing how the latest threat is going to eat everyone's computers alive and cause the end of the world, but this is truly significant:

    Microsoft DLL Hijacking Vulnerability

    My summary for humans:
    The Microsoft operating systems check several folders by default for system libraries (DLL files) that contain common utility routines. Essentially, Microsoft has left it open so that sloppy programmers can drop the same DLL in several different places on the system. Researchers have confirmed that in multiple common applications - 200+ *very* common applications, such as Microsoft Office, all known web browsers, and so on - this vulnerability is present.

    Essentially, a rogue website can easily drop one file into a certain location on the system, and have complete and unnoticeable control over and access to all data and applications that use this library. With very little effort.

    This applies to all versions of Windows and is inherent in the architecture of the product, so really there's not much you can do about it.
    Keep your firewalls and antivirus up to date, and don't make the mistake of thinking that the Mac is more secure, either.
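The mechanics of the hijack can be sketched abstractly: the loader probes a list of directories in order and loads the first file whose name matches, so any attacker-writable directory that is searched early (such as the current working directory, or a network share an application opened a document from) lets a planted DLL shadow the legitimate one. This is a cross-platform illustration of the search logic only, not the actual Windows loader; the directory names are hypothetical:

```python
import os

def resolve_library(name, search_path):
    """Return the first path in `search_path` containing `name`, else None.

    First match wins -- which is exactly why a malicious copy of a DLL in
    an early-searched, attacker-writable directory gets loaded instead of
    the genuine one sitting in a system directory later in the list.
    """
    for directory in search_path:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```

The fix space follows directly from the sketch: either the vulnerable directory is removed from the search path, or applications load libraries by absolute path.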
    • CommentAuthorFan
    • CommentTimeAug 24th 2010
    > really there's not much you can do about it

    The Microsoft Security Advisory (2269637) says it happens as follows:

    * Open a document from a network share
    * The document's application is launched, with the network share as the current directory
    * Document's application may look in the current directory (i.e. the network share) for its DLLs

    What you can do about it, I think, is to not attach to remote untrusted file systems.