OpenBSD Security: Functionally paranoid!
Possible data leakage from OpenBSD workstation

State agencies can retrieve data from your Internet-connected OpenBSD workstation. Do you care about that? Would you use an additional air-gapped computer to handle your personal data?
I take it you didn't hear about the malware propagated by sound waves, then? http://arstechnica.com/security/2013...jumps-airgaps/

Of particular interest from that article: Quote:
Quote:
--- * This excludes devices residing on private networks, which may be "behind" a NAT router -- but includes any such router, as it is Internet-facing.
Quote:
Loïc Duflot's and Yves-Alexis's paper on remotely attacking network cards: http://www.ssi.gouv.fr/uploads/IMG/p...etworkcard.pdf and some articles from this domain: http://theinvisiblethings.blogspot.c...rnels-and.html
There is always the potential for someone to break into a system via the Internet. If one has files that one wants to remain secret, the solution is to not store them on a computer with Internet access. If I were in possession of files I did not want government agencies to discover, I would hide them, not put them somewhere those agencies can easily find and then try to protect them from unwanted access. Many people think about security the wrong way. Treat data the same way as a person: if someone does not want to be taken into custody, which is better, living in the open surrounded by security measures, or hiding from the authorities?
Quote:
Does that mean my systems are 100% "secure"? No, there are no guarantees. Attack vectors might exist of which I am unaware. And threats are ever evolving.

I've not stated this often enough, so I'll state it again: Security is not a product. Security is not something you can buy, or something you can download and install, or something you can turn on, or something you can enable. Instead, security is a process: of applying risk mitigations based on risk awareness, and the process must evolve as one's scope of awareness changes. The hard part is ensuring one's risk awareness remains accurate, meaningful, current, and appropriate.

Last edited by jggimi; 6th January 2016 at 02:21 AM. Reason: clarity
Quote:
Is there open & secure hardware that can be verified & trusted, too?

Last edited by alex_b83; 6th January 2016 at 09:35 AM.
Quote:
Then a rootkit can prepare an environment suitable for spying before the OS kernel gets loaded into RAM. After that, the OS kernel code can be changed by the rootkit to allow the attacker to retrieve:

Quote:
"Out-of-band (OOB) or hardware-based management is different from software-based (or in-band) management and software management agents. Hardware-based management works at a different level than software applications, uses a communication channel (through the TCP/IP stack) that is different from software-based communication (which is through the software stack in the operating system). Hardware-based management does not depend on the presence of an OS or locally installed management agent. AMT is designed into a secondary (service) processor located on the motherboard, and uses TLS-secured communication and strong encryption to provide additional security. AMT is part of the Intel Management Engine, which is built into PCs with Intel vPro technology. ... AMT provides similar functionality to IPMI, although AMT is designed for client computing systems as compared with the typically server-based IPMI."

The OS will not even notice out-of-band, hardware-based "probes" directed at the server, and malicious low-level firmware will use this covert channel to transmit the stolen data.

Last edited by alex_b83; 6th January 2016 at 10:32 AM.
This is exactly what I wanted to say.
Last edited by alex_b83; 6th January 2016 at 10:46 AM. Reason: misspelling
My Internet-exposed devices don't have these service management capabilities, so these particular attack vectors don't exist. On *those* systems.

I'm typing at the moment on an HP laptop with Intel processors, but this workstation is not on the Internet. At the moment it's not even on an Internet-connected network; my only access to the Internet from my current location is via an HTTP proxy. This limited access also mitigates these out-of-band attack vectors. If I connect via public WiFi, I must trust that out-of-band management services are not possible. If I connect an Ethernet cable, it will be either to a network I control or to a network I trust.
I thank everyone for the answers!

Last edited by alex_b83; 6th January 2016 at 03:35 PM.
I would like to add some quotes:
Quote:
Quote:
I think you need to put Bruce's comments into context. I could find only one of the two. The transcript includes the typo you quoted (too for tool), so I believe I've found the right citation.

In regard to strong vs. weak, Bruce was differentiating between cryptographic primitives and their implementations in software systems. The ciphers are secure, but their use is often insecure. That's due to the inability of non-cryptographers to take those primitives and design secure cryptographic implementations with them, or to develop and deploy IT applications with them. This particular quote was regarding risk assessment of implementing "back door" facilities in cryptographic systems, which many governments want.

Quote:
I was, and still am, thinking that the first quote is about hacking and pwning operating systems. I think it means that not only the primitives but also the implementations of offline encryption (e.g. symmetric encryption using GnuPG) are so good that government agencies cannot break them, but they can get inside the OS that is actually working with those encrypted files.

The second quote is from: How to Remain Secure Against the NSA

Last edited by e1-531g; 6th January 2016 at 06:35 PM.
Quote:
Let's use your example of a file using a strong cipher. It's encrypted. Without the keys, a brute-force attack would take years, perhaps hundreds or thousands of years. And that would be true *if* there were no known plaintext, in whole or in part, within your encrypted file.

Let's suppose, for this example, that the file contains millions of userids and passwords, because you run a public service of some kind and this is your password database. Let us also assume that you set a password policy requiring at least one upper-case character, one lower-case character, at least one number, and a minimum length of 7 characters.

Now the encrypted file is obtained by an attacker. It's a collection of random bits to them. Or ... is it? You have, in that file, many passwords with known plaintext, because you have millions of records from end users with userids and passwords. How many records will have "Password1" in the password field? Probably more than the number of records with "October31", but I'll bet that among the millions of records will be thousands of passwords with birthdays, pet names, children's names, and other plaintext that can be predicted. Brute-force attacks against that file won't take years. They'll take days or even hours.

Your cipher may be very strong, but two flaws in my example implementation undermine it. The first is a policy weakness: the password policy permits strings of text, which human beings will fill with words. Most do, in whole or in part. The second is a technology weakness of the application where this file is used: a single cipher was used for all passwords. As you may notice from this simple example, passwords are a fairly terrible security weakness, as there is really no such thing as a "strong" password when human beings need to remember them, and risk mitigations (such as unique salts or unique keys for each record) are complicated and difficult to get right.

Many of the publicly disclosed data breaches that we read about were (and still are) attacks against password implementations, which is why I chose this as my example weakness.

Last edited by jggimi; 6th January 2016 at 09:10 PM. Reason: typos, a thinko, and clarity
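The "days instead of years" arithmetic above can be sketched in a few lines. This is a hypothetical illustration, not any real breached system: assume each record was protected with a single unsalted SHA-256 digest (standing in for the "single cipher for all records" flaw). The attacker hashes a list of predictable candidates once, then matches every record against that table:

```python
import hashlib

# Hypothetical leaked table: each record stores an unsalted SHA-256
# of the password (the "single cipher for all records" flaw).
def protect(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

leaked = {
    "alice": protect("Password1"),
    "bob":   protect("October31"),
    "carol": protect("x7$Qz!94kLp"),   # a genuinely random password
}

# Predictable candidates satisfying the policy (upper, lower, digit, len >= 7).
candidates = ["Password1", "October31", "Fluffy99", "Summer17"]

# Hash each candidate once; reuse the result against every record.
rainbow = {protect(c): c for c in candidates}

cracked = {user: rainbow[digest]
           for user, digest in leaked.items() if digest in rainbow}
# alice and bob fall immediately; carol's random password does not.
```

Because no record has a unique salt, the cost of hashing the dictionary is paid once for the whole file; per-record salts would force the attacker to redo that work for every single row.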
AFAIK there is no publicly known key-recovery attack on AES-256 ciphertext, even with full known plaintext, that runs in a reasonable amount of time on conventional computers. I understand that the NSA employs a lot of mathematicians, so maybe they have one, but even Bruce Schneier thinks it more probable that they don't know such a method for attacking well-known symmetric encryption. More probable is that they know methods to break asymmetric schemes, for example RSA. And RSA could be broken by a sufficiently large quantum computer.

Nevertheless, GnuPG can use AES-256 in CFB mode, and that is considered really good. I don't know how GnuPG uses passwords, but some other tools use the password to encrypt a key, and the data is encrypted with that key. The data is not encrypted with the password itself, even though the user needs to provide it to decrypt the data.
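The "password encrypts a key, the key encrypts the data" pattern described above can be sketched with the standard library. This illustrates only the envelope structure: the XOR "wrap" is a stand-in for a real cipher, and the passphrase and sizes are made up for the example; real tools use a KDF plus an authenticated cipher.

```python
import hashlib
import os

def derive_kek(passphrase: str, salt: bytes) -> bytes:
    # A slow, salted KDF turns a human passphrase into a key-encryption key.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A random data key is what actually encrypts the data; the passphrase never does.
salt = os.urandom(16)
data_key = os.urandom(32)

# "Wrap" the data key under the passphrase-derived key (XOR as a stand-in cipher).
kek = derive_kek("correct horse battery staple", salt)
wrapped_key = xor_bytes(data_key, kek)

# Later: the user supplies the passphrase, and the tool unwraps the data key.
recovered = xor_bytes(wrapped_key, derive_kek("correct horse battery staple", salt))
assert recovered == data_key

# A wrong passphrase yields a different KEK, so unwrapping produces garbage.
assert xor_bytes(wrapped_key, derive_kek("wrong", salt)) != data_key
```

One design consequence of this layout: changing the passphrase only requires re-wrapping the 32-byte data key, not re-encrypting all the data.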
I don't know about AES CFB weaknesses, if any, as I'm neither a cryptanalyst nor a cryptographer. I used the file as a relatively simple example because you mentioned it. And I will admit that the Ashley Madison user database breach was what I was describing when I wrote my example above, though they used bcrypt rather than AES for their cipher. I tried to pick a simple example of an error of IT deployment, rather than in the application or operating system, without regard to the specific cipher.

History is full of ciphers which may have been perfectly fine but were deployed with weaknesses. In particular this seems to happen with networking. I'm sure you have heard about weaknesses in WEP and PPTP (if not, the links discuss them). These were encryption systems that had significant implementation errors, though the cipher selected for them (RC4) was perfectly adequate at the time. The cipher was not a factor in their weaknesses, if I recall correctly. It's been a few years since I studied the errors of both when I took a cryptography class, and I only recall that key reuse was the root weakness with PPTP. SSH version 1 allowed the communicating parties to select from a suite of ciphers. Its vulnerabilities were all in the protocol itself, and most interestingly to me, a fix for one weakness introduced a new weakness.

My point to all of this? I'm trying to support Bruce's thesis: ciphers can be mathematically inspected. Software has bugs. Our deployments can have mistakes.

Last edited by jggimi; 7th January 2016 at 12:53 AM. Reason: clarity
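The key-reuse flaw mentioned for PPTP (and famously for WEP) is easy to demonstrate with any stream cipher: encrypt two messages under the same RC4 key, and XORing the two ciphertexts cancels the keystream entirely, leaving the XOR of the plaintexts. A minimal sketch (the key and messages are invented for illustration; RC4 here is the textbook algorithm, and the weakness shown is purely the reuse, not the cipher):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute S based on the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA): XOR the data with the keystream.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"reused-session-key"
p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT NOW!!!"

# Same key means the exact same keystream K: c1 = p1 ^ K, c2 = p2 ^ K.
c1, c2 = rc4(key, p1), rc4(key, p2)

# The keystream cancels: c1 ^ c2 == p1 ^ p2. The cipher never "broke";
# the protocol leaked plaintext structure by reusing the keystream.
assert xor(c1, c2) == xor(p1, p2)
```

This is why stream-cipher protocols must never encrypt two messages under the same key and nonce; the deployment mistake, not the cipher, is what hands the attacker plaintext relationships.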