Hashing an MD5 digest with a private key, then XOR[ing] the digest and private key [migrated]

TL;DR
Does hashing an MD5 digest with a private key, then XOR[ing] the digest (twice recursively, with offset) have any glaring issues? Would this mean MD5 is predictable?
Is partial MD5 recognition possible?
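
The post is truncated, so the exact construction is unclear; the sketch below is one plausible reading of the TL;DR, with every name and parameter (the key-prefixed hash, the round count, the rotating offset) my own guess rather than the questioner's specification:

```python
import hashlib

def mask_digest(message: bytes, key: bytes, rounds: int = 2, offset: int = 1) -> bytes:
    """Hypothetical reading of the scheme: hash the message with a private
    key, then XOR the digest against the key at a shifting offset, twice."""
    digest = hashlib.md5(key + message).digest()
    for r in range(rounds):
        shift = (r * offset) % len(key)
        rotated = key[shift:] + key[:shift]  # key rotated by the per-round offset
        # zip() truncates to the shorter sequence (an MD5 digest is 16 bytes)
        digest = bytes(d ^ k for d, k in zip(digest, rotated))
    return digest
```

Whatever the intended details, one observation holds regardless: XOR with a fixed (even rotated) key is linear, so XORing two masked digests produced under the same key cancels the key material entirely, and MD5's known collision weaknesses are untouched by the masking.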
General Overview
I convinced my… Continue reading Hashing an MD5 digest with a private key, then XOR[ing] the digest and private key [migrated]

Why is it wrong to *implement* myself a known, published, widely believed to be secure crypto algorithm?

I know the general advice that we should never design¹ a cryptographic algorithm. This has been discussed very extensively on this site and on the sites of professionals of Bruce Schneier's caliber.

However, the g… Continue reading Why is it wrong to *implement* myself a known, published, widely believed to be secure crypto algorithm?
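
For contrast with the question's premise, here is what the conventional "don't implement it yourself" advice looks like in practice: a minimal sketch, assuming Python's standard-library hashlib/hmac as the vetted implementation (the key and message are illustrative, not from the post):

```python
import hashlib
import hmac

key = b"example-key"     # illustrative only; real keys should come from a KDF or CSPRNG
msg = b"attack at dawn"

# Delegate the primitive to a vetted implementation instead of re-coding it.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# One concrete hazard of hand-rolling: a naive `tag == received` comparison
# leaks timing information; the library ships a constant-time alternative.
assert hmac.compare_digest(tag, tag)
```

The point usually made in answers to this question is exactly the one in the last comment: even a byte-for-byte correct re-implementation of a published algorithm can introduce side channels (timing, padding, fault handling) that a hardened library has already engineered away.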

How much security expertise does a general application programmer need to develop software ethically?

I am curious about this. I saw this thread:

Why shouldn’t we roll our own?

and this answer:

https://security.stackexchange.com/a/18198/144241

which had this comment on it, the second-highest-voted, and one that can't just be ignored:

“The biggest problem I notice among beginning coders, is that they don’t take security risks seriously. If I mention what kind of weaknesses some system has, the common response is rolling their eyes… The attacks are real and do happen, and if you can’t even apply proper security at school then how’s your application supposed to fare when it’s live for hundreds of customers? :/ The only way to convince them is usually providing examples, and best of all a live demo. So if you have examples… I think it’d be a nice addition!”

This makes me wonder, then. The context of that discussion was "rolling one's own" cryptographic software or algorithms, and why that is generally a bad idea: without the proper expertise, one may create numerous security vulnerabilities (and the strong sentiment was that this applies not just to designing your own ciphers per se, but also to implementing existing, vetted ciphers in your own code), which can make the product considerably less secure than it should be, or advertises itself to be. That concern is, of course, eminently reasonable. Moreover, the amount of expertise required is apparently quite large. According to one story I heard on this a very long time ago, someone (I think it was Bruce Schneier?) said, approximately, that they would not "trust any crypto written by anyone who had not first 'earned their bones' by spending a lot of time breaking codes".

The problem, however, is that while I understand this with regard to cryptographic software, the comment above suggests something vastly more general: security issues apply to all software, and thus the developer of any software needs to take security into account no matter what it is, even when neither the product nor the part being written is explicitly cryptographic. (Consider how often a buffer exploit pops up in general-purpose software such as web browsers, and some attacker uses it to do damage, especially to steal information or money.) Intuitively, it seems there is a basic ethical obligation on any developer releasing code to the wider public to ensure it meets some "reasonable" standard of security, even if it is not specifically 'security' code. Releasing code one knows may be insecure is acting in a manner that, again very reasonably, could be construed as unethical negligence.
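
The buffer overflow cited above is specific to languages like C; to make the same point concrete in everyday application code, here is a classic non-cryptographic bug of the kind the comment alludes to (an injection flaw, sketched in Python against an in-memory SQLite database; not an example from the original thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # hostile input a beginner may never think to test

# Vulnerable: string interpolation lets the input rewrite the query itself.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()
print(leaked)  # [('hunter2',)] -- every row, for a user that doesn't exist

# Safe: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)    # []
```

Nothing here involves cryptography, yet the difference between the two queries is precisely the kind of baseline security knowledge the comment argues every application programmer owes their users.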

And that's where I am concerned. Ultimately, you cannot get all software "pre-made" (what would be the point of developing any software then?), so you will have to write at least SOME security-relevant code on your own. You have to roll your own, like it or not, and that takes expertise. So the question is: if we follow the dictum not to "roll one's own" insofar as it applies to specifically security-related code such as encryption, BUT we have to roll our own code as part of ALL application development, then how much security expertise does a general application programmer need in order to meet that intuitive ethical bound? It seems the amount has to be "more than zero" but (hopefully!) "less than that of a computer security expert" of the caliber who develops the actual encryption algorithms (like AES) that become world standards. But what is that amount, which end of the spectrum is it closer to, and what exactly is needed to learn it?

Continue reading How much security expertise does a general application programmer need to develop software ethically?